Ever since ChatGPT rolled out in late 2022, the global conversation about Artificial Intelligence (AI) has been fixated on the threats posed by deepfakes, synthetic propaganda and AI systems that can conjure entire worlds from a single prompt. In this frenzy over generative AI, most debates have drifted away from a far quieter and far more prevalent form of AI that has been shaping online opinions, creating divisions and even fuelling real-world violence for years. Long before chatbots started writing student essays and composing symphonies on demand, algorithms designed and deployed by Google, Meta and TikTok were deciding what billions of people watch, believe and share. These AI systems still curate the online information ecosystem through Facebook’s News Feed, YouTube’s recommendation engine and TikTok’s ‘For You’ feed. The scale and speed at which they shape digital space cannot be matched by human editors or even entire media organisations. This class of AI systems may pose an even bigger threat to public discourse, political stability and democratic processes around the world, especially in the Global South.
In 2009, Facebook launched an ‘algorithmic ranking system’ to personalise the content shown to users on its platform. The system was designed to increase user engagement and time spent on the platform. In the years that followed, however, it ended up promoting hateful and violent content in pursuit of views and engagement, which ultimately translated into big profits for the tech company. In 2017, Rohingya Muslims were subjected to ethnic cleansing in Myanmar, resulting in thousands of deaths and forcing 700,000 people into exile. A UN fact-finding mission in 2018 found that Facebook had been a ‘useful instrument’ for running a systematic hate campaign against the Rohingya and creating conditions conducive to ethnic cleansing. In 2022, Amnesty International reported that Facebook’s algorithms had ‘proactively amplified’ anti-Rohingya content. In the same report, commenting on Meta’s content moderation policies, a former employee of the company was quoted as saying, “Different countries are treated differently. If 1,000 people died in Myanmar tomorrow, it is less important than if 10 people in Britain die.”
Then there is the case of political polarisation and violence in Ethiopia, where Facebook’s algorithms played a similar role. In 2023, as Ethiopia reeled from a brutal civil war marked by grave human rights abuses and ethnic cleansing, a professor was murdered by a vigilante group in front of his house. The killing followed a coordinated campaign of disinformation and hate against him that had run on Facebook for months. The UK-based law firm Foxglove described it as ‘death by design’, since the mob could not have known the victim’s whereabouts without the Facebook posts. An investigation by Business Insider went as far as labelling Facebook complicit in the killing because the platform failed to respond in a timely manner. Facebook, however, is not the only platform whose algorithms promote hateful content for views and profit. A 2021 study by the Mozilla Foundation analysed YouTube’s recommendation algorithm and the nature of the content it promotes. The study found that around 71 percent of the videos users reported as regrettable (content featuring hate speech, misinformation and violence) had been recommended to them by YouTube itself. Moreover, the rate at which such content was recommended was 60 percent higher in non-English-speaking countries than in English-speaking ones. Similarly, TikTok’s ‘For You’ feed is one of the most efficient AI systems for capturing users’ attention with highly personalised content. The algorithms of these platforms are optimised to maximise engagement, even at the cost of truth and human safety. Such AI systems could thus pose a bigger threat to society than generative AI, and the challenge becomes even greater for a region of the Global South such as South Asia.
South Asia is especially vulnerable to the dangers of the AI recommendation algorithms used by social media platforms. Information ecosystems in the region are already weak, marked by low digital literacy and a hyper-political media landscape. Moreover, with 340 million adolescents, South Asia is home to the largest youth population in the world, a demographic that relies primarily on social media rather than traditional media for news. Social media recommendation algorithms thrive on outrage and sensationalism because such content drives more views, and the often charged political environments of South Asian countries, including India, Bangladesh and Pakistan, offer plenty of sensational material. During moments of national significance such as elections, security incidents and even wars, users’ perceptions online are often shaped by algorithm-driven content that keeps them emotionally charged.
There are increasing concerns, and rightly so, about the potential of generative AI to produce fake and hateful content at scale. Yet generative AI still needs a prompt to produce such content, while the recommendation systems used by social media platforms have had the agency to distribute hateful and fake content for years. For countries in the Global South, there is an urgent need to build context-specific, localised content moderation capacity. Content in non-English languages clearly lacks strict moderation, and social media platforms pay far less attention to the privacy and safety of users from non-Western countries. As a result, hateful and misleading content often slips past these platforms’ safety protocols. Improving digital literacy is also the need of the hour. The internet has become a basic need of modern life, providing the information and connectivity required in every sphere, including business, finance, education, entertainment and news. Without basic digital skills, however, people in the Global South become acutely vulnerable to online misinformation, propaganda, scams and social exclusion. Lastly, social media platforms can no longer be seen as objective and unbiased, as their recommender algorithms are inherently driven by engagement and profit rather than truth and objectivity. It is therefore important for countries in the Global South to regulate these platforms through appropriate auditing and accountability mechanisms, without infringing on personal freedoms and privacy.
Muhammad Faizan Fakhar is a Senior Research Associate at the Centre for Aerospace & Security Studies (CASS), Islamabad, Pakistan. This article was originally published in Modern Diplomacy. He can be reached at: [email protected]

