
Social media platforms gained popularity by ensuring open access, promoting impartial values and the right to free speech, and highlighting the need for governance and accountability reforms in the public domain. Over the past few years, however, a number of international investigations and reports have revealed a markedly opposite trend: states and non-state actors have been exploiting these platforms as tools to achieve their vested interests. Given such manipulation by multiple actors, there is a need to address lacunae in social media management at three different levels – regulatory, technological, and political – as highlighted in the Human Rights Watch report ‘“Video Unavailable”: Social Media Platforms Remove Evidence of War Crimes.’

Regulatory flaws can be identified in content moderation. Since social media companies are not content producers, their primary responsibility is content moderation: they decide the fate of content, i.e., the type of content that gets published and the conditions under which certain content is prohibited or removed. However, content that, say, one group within the United States considers offensive or problematic could be perceived much more positively in another part of the world.

For example, last year, Zoom, Facebook, and YouTube refused to host San Francisco State University’s roundtable on Palestinian rights, ‘Whose Narratives? Gender, Justice and Resistance’, because pro-Israeli lobbyists disagreed with the political views of the main speaker, Leila Khaled. According to Bill Ottman, CEO of Minds.com, ‘there is a growing body of evidence that content policies on the big networks are fueling the cultural divide and a lot of the polarization and civil unrest.’

Similarly, technological flaws can be identified in the application of AI-based machine learning algorithms to deal with terrorist and violent extremist content (TVEC). The ability of such AI systems to make accurate judgments weakens in situations where definitions of certain content, such as TVEC, are relative or vague. This has helped states such as Israel, India, and Russia, along with authoritarian regimes, to cover up inhumane practices and war crimes during conflicts under the garb of TVEC removal policies. Moreover, it has been reported that the AI-based algorithmic approach of deleting TVEC obliterates evidence of war crimes.

Another technological flaw is the absence of a digital archiving mechanism to ensure that removed content is preserved, archived, and accessible to international investigators, given its human rights and legal dimensions. To understand the mechanism of content removal, Human Rights Watch wrote letters to digital media companies such as Facebook and Twitter; most did not respond, and those that did failed to address the technical queries. Furthermore, these social media platforms did not share details about any mechanism that allows media or civil society organizations to obtain removed content as evidence in criminal investigations.

Political flaws include the pressure tactics and economic coercion wielded by powerful states on social media companies to enforce regulatory policies of their choice. For instance, Russian authorities have pressured social media companies to censor online content deemed illegal or anti-state. Moreover, Russia has issued a number of warnings, including potential blocking and the imposition of fines on digital platforms, should they fail to comply with its rapidly expanding repressive internet legislation.

A need for a better policy framework
Some international scholars are of the view that this tendency of states to compel private companies to enforce their regulatory goals often results in the framing of discriminatory policies that favor the powerful, thereby giving them more leverage and leaving users in conflict-ridden areas and authoritarian regimes susceptible to bad governance and violence through the censorship of their voices.

Digital media platforms and states need to come up with a collaborative, comprehensive, inclusive, and transparent policy framework that ensures free speech, freedom of expression, and the voicing of the rights of the disenfranchised, while adhering to community guidelines for dealing with illegal, fake, and violent content. However, no such framework currently exists because the prevailing assumption is that one size will fit all; what is required instead is an approach that is universally acceptable.

In this regard, a framework that is more human security-oriented would serve the purpose. To this end, states and international organizations should create an independent body with equal participation from all stakeholders, staffed by people well versed in human rights and social media communication. The body’s primary task would be to ensure greater transparency by framing community guidelines that are unbiased, impartial, and apolitical.
 
Amna Tauhidi is a researcher at the Centre for Aerospace & Security Studies (CASS). The article was first published in Global Village Space (GVS). She can be reached at cass.thinkers@gmail.com. 

Image Source: David L. Sloss, “Weaponization of Social Media by the Authoritarian States,” Markkula Center for Applied Ethics at Santa Clara University, December 5, 2019.
