
Ever since ChatGPT rolled out in late 2022, the global conversation about Artificial Intelligence (AI) has fixated on the threats posed by deepfakes, synthetic propaganda and AI systems that can conjure entire universes from a single prompt. In this frenzy over generative AI, most debates have drifted away from a far quieter and far more prevalent form of AI that has been shaping online opinions, creating divisions and even fuelling real-world violence for years. Long before chatbots started writing student essays and composing symphonies on demand, algorithms designed and deployed by Google, Meta and TikTok were deciding what billions of people watch, believe and share. These AI systems still curate the online information ecosystem through Facebook’s News Feed, YouTube’s recommendation engine and TikTok’s ‘For You’ feed. The scale and speed at which these systems shape the digital space cannot be matched by human editors or even entire media organisations. This class of AI systems may pose an even bigger threat to public discourse, political stability and democratic processes around the world, especially in the Global South.

In 2009, Facebook launched an ‘algorithmic ranking system’ to personalise the content shown to users on its platform. The system was designed to increase user engagement and interest on the platform. In the years that followed, however, it ended up promoting hateful and violent content in pursuit of views and engagement, which ultimately translated into big profits for the tech company. In 2017, Rohingya Muslims were subjected to ethnic cleansing in Myanmar, resulting in thousands of deaths and forcing 700,000 into exile. A UN fact-finding mission in 2018 found that Facebook had been a ‘useful instrument’ for running a systematic hate campaign against the Rohingya and creating conditions conducive to ethnic cleansing. In 2022, Amnesty International reported that Facebook’s algorithms had ‘proactively amplified’ anti-Rohingya content. In the same report, commenting on Meta’s content moderation policies, a former employee of the company was quoted as saying, “Different countries are treated differently. If 1,000 people died in Myanmar tomorrow, it is less important than if 10 people in Britain die.”

Then there is the case of political polarisation and violence in Ethiopia, where Facebook’s algorithms played a similar role. In 2023, as Ethiopia was gripped by a brutal civil war marked by grave human rights abuses and ethnic cleansing, a professor was murdered by a vigilante group in front of his house. The killing followed a coordinated campaign of disinformation and hate against him that had run on Facebook for months. The UK-based law firm Foxglove described it as ‘death by design’, since the mob could not have known the victim’s whereabouts without the Facebook posts. An investigation by Business Insider even labelled Facebook complicit in the killing, as the platform had failed to respond in a timely manner. Facebook, however, is not the only platform whose algorithms promote hateful content for views and profit. A 2021 study by the Mozilla Foundation analysed YouTube’s recommendation algorithm and the nature of the content it promoted. The study found that around 71 percent of the content containing hate speech, misinformation and violence had been recommended to users by YouTube itself, and that the rate of such recommendations was 60 percent higher in non-English-speaking countries than in English-speaking ones. Similarly, TikTok’s ‘For You’ feed is one of the most efficient AI systems at capturing users’ attention through highly personalised content. The algorithms of these platforms are optimised to maximise engagement, even at the cost of truth and human safety. Such AI systems could therefore pose a bigger threat to society than generative AI, and the challenge is even greater for a Global South region such as South Asia.
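The engagement-first objective described above can be sketched in a few lines of Python. This is a hypothetical, illustrative model only; the class, field names and weights are assumptions for the sake of the example, not any platform’s actual ranking code. The point it demonstrates is structural: when the scoring function rewards only predicted clicks, shares and emotional arousal, with no term for accuracy or harm, inflammatory content rises to the top of the feed.

```python
# Minimal sketch (hypothetical): a feed ranker scored purely on predicted
# engagement, with no penalty for falsehood or hate.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float   # model's estimate of click-through
    predicted_shares: float   # model's estimate of reshares
    outrage_score: float      # emotional-arousal signal, 0..1

def engagement_score(p: Post) -> float:
    # Engagement-only objective: arousal correlates with clicks and shares,
    # so emotionally charged content tends to score higher. Weights are
    # arbitrary illustrative values.
    return p.predicted_clicks + 2.0 * p.predicted_shares + 1.5 * p.outrage_score

def rank_feed(posts: list[Post]) -> list[Post]:
    # Nothing in this objective rewards truth or safety; the highest
    # predicted-engagement post always leads the feed.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Calm policy explainer", 0.2, 0.05, 0.1),
    Post("Inflammatory rumour about a rival group", 0.6, 0.4, 0.9),
])
print(feed[0].text)  # the inflammatory post ranks first
```

Fixing this is not a matter of tuning the weights: as long as the objective contains only engagement terms, any content that maximises arousal will dominate the ranking.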

South Asia is especially vulnerable to the dangers of the recommendation algorithms used by social media platforms. Information ecosystems in the region are already weak, marked by low digital literacy and a hyper-politicised media landscape. Moreover, with 340 million adolescents, South Asia is home to the largest youth population in the world, a demographic that relies primarily on social media rather than traditional media for news. Recommendation algorithms thrive on outrage and sensationalism because such content drives more views, and the often charged political environment of South Asian countries, including India, Bangladesh and Pakistan, offers plenty of it. During moments of national significance, such as elections, security incidents and even wars, users’ perceptions are often shaped by algorithm-driven content that keeps them emotionally charged.

There are increasing concerns, and rightly so, about the potential of generative AI to produce fake and hateful content at scale. Yet generative AI still needs a prompt to produce such content, while the recommendation systems used by social media platforms have had the agency to distribute hateful and fake content for years. For countries in the Global South, there is an urgent need to build context-specific, localised content moderation capacity. There is a clear lack of strict moderation of content in non-English languages, and social media platforms generally pay less attention to the privacy and safety of users from non-Western countries. As a result, hateful and misleading content often slips past the platforms’ safety protocols. Improving digital literacy is also the need of the hour. The internet has become a basic need of modern life, providing the information and connectivity required in every sphere, including business, finance, education, entertainment and news. Without basic digital skills, however, people in the Global South remain acutely vulnerable to online misinformation, propaganda, scams and social exclusion. Lastly, social media platforms can no longer be seen as objective or unbiased, since their recommender algorithms are inherently driven by engagement and profit rather than truth. It is therefore important for countries in the Global South to regulate these platforms through appropriate auditing and accountability mechanisms, without infringing on personal freedoms and privacy.

Muhammad Faizan Fakhar is a Senior Research Associate at the Centre for Aerospace & Security Studies (CASS), Islamabad, Pakistan. This article was first published in Modern Diplomacy. He can be reached at: [email protected]



Recent Publications

Browse through the list of recent publications.

The Cover-up: IAF Narrative of the May 2025 Air Battle

More than a year after the India-Pakistan war of May 2025, the Indian discourse regarding Operation Sindoor remains muddled beneath its pretence of restraint. The Pahalgam attack of 22 April, which killed 26 people, triggered an escalatory spiral. New Delhi quickly accused Pakistan-linked elements, while Islamabad refuted the allegation and demanded an independent investigation. On 7 May, India launched attacks deep inside Pakistan under what it later termed Operation Sindoor. The political aim was to turn the crisis into coercive signalling by shifting the blame onto the enemy and projecting a sense of military superiority.
This episode, however, began to fray immediately, as war seldom follows the intended script. Within minutes, the PAF shot down seven IAF aircraft, including four Rafales. On 8 May, Reuters reported that at least two Indian aircraft had been shot down by a Pakistani J-10C, while local government sources reported other aircraft crashes in Indian-occupied Jammu and Kashmir.

Read More »

Why the IAF’s Post-Sindoor Spending Surge is a Sign of Panic

After Operation Sindoor, India is spending billions of dollars on new weapons. Many have taken this as an indication of military prowess. It is not. The rush to procure weapons is, in fact, an acknowledgement that the Indian Air Force failed to do what it was meant to do. The costly jets and missiles India had purchased over the years failed to yield the promised results.

Soon after Sindoor, India moved to seal the gaps the operation had exposed. The Indian Air Force (IAF) was reported to be fast-tracking purchases worth more than USD 7 billion. These include additional Rafale fighter jets; India had already ordered 26 Rafales for its Navy in 2024 at an estimated cost of about USD 3.9 billion. India is also seeking long-range standoff missiles, Israeli loitering munitions and expanded drone capabilities. The Indian military’s special financial powers were activated to issue emergency procurement orders. The magnitude and pace of these purchases speak volumes.

Indian media and defence analysts have for years billed the Rafale as a game changer. When India purchased 36 Rafale aircraft at an approximate cost of USD 8.7 billion, analysts claimed the aircraft would give India air superiority over Pakistan. Operation Sindoor disproved those claims. Indian aircraft did not even enter Pakistani airspace when the fighting started; India relied solely on standoff weapons launched from a safe distance. Pakistan’s air defence system, comprising the HQ-9 surface-to-air missile system and its own fighters, stood its ground.

Read More »

May 2025: Mosaic Warfare and the Myth of Centralised Air Power

Visualise a modern-day Air Force commander sitting in an operations room, miles from the combat zone, overseeing every friendly and enemy aircraft and every asset involved in the campaign. In a split second, he can task a fighter, reposition a drone and authorise a strike. In today’s promising technological era, he does not even need an operations room; a laptop on his desk will suffice. The picture is appealing, as it offers efficiency, precision and control. The term for such operational control is ‘centralisation’, made possible by advanced networking that integrates space, cyber, surveillance, artificial intelligence and seamless communication, enabling a single commander to manage an entire campaign from a single node. Centralised command and control, championed by Western air forces and then adopted by many others, has thus been seen as a pinnacle of modern military power.
The concept of centralisation, enabled by state-of-the-art networking, may seem promising, but it is nothing more than a myth.

Read More »