On January 8, 2021, Twitter took the unprecedented action of permanently banning then-U.S. President Donald Trump’s account for “repeated and severe violation” of the platform’s Glorification of Violence Policy. Twitter’s policies on “abusive behaviour” and “hateful conduct” were enforced in tandem. Twitter also took several other actions, including banning over 70,000 accounts associated with the QAnon[1] conspiracy theory.[2] One might ask whether these actions mark a turning point in how social media companies will handle potentially harmful content shared on their platforms. The ban seems like an easy fix for stemming inappropriate content, yet it is not even a band-aid for a gushing wound. It does not begin to address the scale of the problem, which lies not with Twitter itself but with how people use Twitter: the way information spreads through the platform, how lies are perpetuated, how untruths become convictions, and how people casually take in information with barely a thought about its provenance. The sources go largely unchecked, partly because of a vacuum in laws regulating speech on Twitter.
The banning of certain accounts has been criticised because decisions about when and how it is appropriate to “de-platform” people, especially notable public figures such as Trump, are controversial[3]. Some see this act as a late imposition of rules, whereas others see it as a flagrant violation of free speech. German Chancellor Angela Merkel noted that the “right to freedom of opinion is of fundamental importance” and that it is “problematic that the President’s accounts have been permanently suspended.” E.U. Commissioner for the Internal Market Thierry Breton argued that “public authorities should have more enforcement powers to ensure that all users’ fundamental rights are safeguarded, in line with the E.U.’s Digital Services Act”[4], the reasoning being that it is nearly impossible to know whether such decisions are made reasonably, arbitrarily, or discriminatorily. Although this perspective is valid when one is considering the right to free expression, there is a fundamental difference between the outright prohibition of speech, on the one hand, and declining to provide a platform for specific types of speech, on the other. No one has a free speech “right” to post something on a particular platform. The First Amendment to the U.S. Constitution protects Twitter’s right to moderate and curate users’ words to reflect its own views, and Section 230 of the U.S. Communications Decency Act, 1996 additionally protects a platform from certain types of liability for users’ speech[5]. Therefore, there is nothing illegal about a private firm’s decision to censor people on its platform. In other words, online intermediaries that host or republish speech are protected against a range of laws that might otherwise be used to hold them legally responsible for what users say and do. The protected intermediaries include Internet service providers and a range of “interactive computer service providers,” including any online service that publishes third-party content. The Electronic Frontier Foundation aptly calls Section 230 “one of the most valuable tools[6] for protecting freedom of expression and innovation on the Internet.”[7] In fact, Section 230 explicitly encourages platforms to moderate and remove “offensive” user content (leaving them to judge what is offensive) and places most of the responsibility for the actual words on their author, not on the forum or platform that hosts them.
It can be argued that for a long time, in the absence of any meaningful regulations, Twitter has had little incentive to regulate its massively profitable platform and curb the spread of falsehoods that can lead to improper actions. Jack Dorsey, Twitter’s chief executive officer, said in a post that “offline harm as a result of online speech is demonstrably real, and what drives our policy and enforcement above all.” However, he acknowledged that the decision to ban “sets a precedent I feel is dangerous: the power an individual or corporation has over a part of the global public conversation.” The regulation of speech by contract raises serious concerns about the protection of freedom of expression. The terms of service laid down by Twitter seem to apply to some users and not to others: Twitter has been more permissive of some politicians and less permissive of others worldwide. This inconsistency gives the impression that Twitter is picking and choosing who gets to stay, with no transparency and no guarantee that a clear due process is being followed.
There should be more transparency in how the rules are applied[8] and stronger procedural safeguards against the removal of content on social media[9]. The Black Lives Matter movement had to put intense pressure on platforms before they cracked down on hate speech. The biggest flashpoint was content from Trump, who repeatedly posted threats and false information about Black Lives Matter protests. Attempts to regulate hate speech on social media have only created more uncertainty. Facebook, Google, and Twitter have long argued that they are merely platforms, not content providers. This narrative has helped these companies avoid both the threat of regulation and any legal liability for posts.
The scale of the problem cannot be overstated. In India, for example, Twitter executives appeared before a parliamentary committee to answer questions about why the Home Minister’s Twitter account was blocked in November 2020. The executives said that the action was taken as per company policy after algorithms flagged a copyright issue with a posted picture. However, no one has an absolute right to post on a platform, and free expression is not ordinarily considered limited when an institution or company declines to provide a speech platform to an individual. Politics in India is often dominated by prejudice, and there is a long-standing history of exploiting fear of the “other.” India’s love-hate relationship with Twitter is a classic example of opportunism.[10] Several Indian politicians who have used divisive political rhetoric on Twitter for political expediency were quick to criticise the move by Twitter, complaining of censorship.
For years, Twitter has been used by supporters of identity politics and ultra-nationalists to spread disinformation[11] and misinformation in India. During the pandemic, there was a rise in Islamophobia, which had long been an issue in the country; the #CoronaJihad hashtag strategically promoted a narrative of intolerance and even created diplomatic embarrassment for the government. However, it took quite some time for the ultra-nationalists to realise that social media is a double-edged sword: this occurred when singer Rihanna, Swedish environmental activist Greta Thunberg, and lawyer and author Meena Harris, niece of U.S. Vice President Kamala Harris, tweeted their support for farmers protesting in India. The ultra-nationalists, beaten at their own game, called for the regulation of Twitter, a vehicle they had been using indiscriminately and with impunity to spread hatred. Unfortunately, instead of addressing the core problem of hate speech, a paranoid government resorted to knee-jerk reactions[12] to limit the damage to its reputation.
India’s hate speech laws are so broad in scope that they infringe on peaceful speech and fail to meet international standards.[13] Instead of being used to protect the powerless, these laws are often invoked at the behest of influential individuals or groups who claim to have been offended in order to silence speech they do not like. The State too often acts on such complaints, leaving members of minority groups, writers, artists, and scholars facing threats of violence and legal action.[14] The Indian government has accused Twitter of not complying with a government directive to take action against Twitter handles allegedly spreading misinformation and provocative content connected with the farmers’ protest. Twitter argued that it had not blocked all of the content[15] because it believed the government’s orders were not in line with Indian laws; it permanently suspended some accounts and blocked access to many others in India, though the posts could still be read outside the country. The Minister for Electronics and Information Technology and Communications said in Parliament that social media companies, including Twitter, were welcome to do business in India, but only if they abided by the laws of India “irrespective of Twitter’s own rules and guidelines.” The Indian government has also threatened Twitter executives with a 7-year jail term and a fine to enforce compliance with the order directing the blocking of accounts alleging “farmer genocide.” Earlier, on January 31, the government had issued an order to block Twitter handles for the same reasons. The orders were issued under Section 69A of the Information Technology Act, 2000, which empowers[16] the government to direct any intermediary to block any information for public access “in the interest of [the] sovereignty and integrity of India, defence of India, security of the State, friendly relations with foreign states or public order or for preventing incitement of the commission of any cognisable offence relating to above.” This aggressive posturing obscures the fact that the government, playing to its nationalist audience, resorted to populism to contain threats from Twitter, both real and imagined.
There is cause for scepticism. It seems the Indian government is trying either to coerce Twitter into silencing protesters or to move towards blocking it from operating in India.[17] “The world’s largest democracy has turned into an electoral autocracy.”[18] It would not be wrong to accuse the Indian government of “double standards” regarding the regulation of the dissemination of false or misleading information[19]. Ultra-nationalists have politicised freedom of expression. The ongoing polarisation of Indian society poses overarching challenges because a constant flow of disinformation plays into the hands of anti-democratic actors in India. If a state and its official representatives do not consistently and pro-actively condemn hate speech, the rule of law is undermined.
The pandemic is both a cause and a symptom of a divisive and dysfunctional world. On May 8, 2020, the Secretary-General of the United Nations said that the pandemic continues to unleash a tsunami of hate, xenophobia, scapegoating, and scaremongering. It seems we now live in a world where the truth does not matter, where many people believe they are entitled to the “facts” they want to believe, and where partisan divides rest on maintaining or creating one’s own facts in opposition to any rival partisan view. Our prejudices are confirmed by algorithms that social media platforms build on our previous searches; with every search, we find our own biases confirmed. These algorithms lock us into “perpetual tribalism” and shield us from diverse perspectives. The problem of disinformation and hate speech on Twitter, and its effects on the platform, resembles a psychological aberration in an individual and its effects on that individual.
To understand why it is such a problem, it is essential to understand that Twitter is not a powerful mechanism simply because it is grounded in technology. It is powerful because it is grounded in people. The problem is that we are trying to find our way around contradictions. Is it a problem of regulating Twitter, or is it a problem of regulating human behaviour? The human expression of hate, whether directed at an individual, community, or set of ideas, comes from something deep inside the human psyche. Hate offers some people an emotional escape route because it promises a simple explanation for what is wrong with the world. Hyper-nationalism devoid of logic creates a divisive narrative, and its followers are mobilised around a political agenda. Twitter has become a vehicle for politicians who use hate speech as a tool for identity politics. We often do not understand the manufactured quality of the hate spin, especially if the boundary line between hate speech and free speech is blurred. Discussions about regulating online speech are hopelessly entangled in the debate over whether the platform is doing too much regulating—sometimes called censorship—or not enough.
The question of policy has been a significant but much-understated point in the regulation of social media. Proportionate and risk-based regulation is essential to develop a system of controls, but this critical aspect is often overlooked when determining and streamlining the policies that should shape the laws governing social media. Hence, a more flexible and context-sensitive review of Twitter policy should be carried out. In reality, Twitter’s self-regulation is not strong enough to keep users safe, and statutory regulation should be introduced. The U.K. government has proposed a new regulatory framework for online companies that targets a wide range of illegal or harmful content affecting individual users. The Online Safety Bill 2021 would mark the end of the “era of self-regulation” that has “not gone far or fast enough” to keep users safe[20]. The bill places significant legal and practical responsibility on Internet companies, including social media platforms, through its “duty of care” clause, with an independent regulator overseeing and enforcing compliance with that duty. Underlining the calls for technological accountability, France[21] and Germany[22] have passed laws explicitly targeting disinformation and illegal online content. The E.U. has orchestrated the industry’s self-regulatory Code of Practice on Disinformation and is developing a comprehensive regulatory framework that demands fairness and responsibility from online platforms through the Digital Markets Act and the Digital Services Act. However, the Internet does not end at the borders of the E.U.
The regulation of harmful speech requires drawing a line between legitimate freedom of speech and hate speech. Freedom of speech is protected in most countries around the world, but many countries do not actually provide adequate protection for it. One of the dangers of regulating hate speech is that it can become a pretext for oppressive regimes to limit their citizens’ rights. The Indian government has published rules to regulate all forms of social media, streaming or “over-the-top” platforms, and news-related websites. The rules, questionably portrayed as a self-regulatory mechanism, include a code of ethics and regular compliance reports. This hegemonic set of rules could also be seen as restricting the right to express dissent, raise a legitimate concern, or pose questions, and such restrictions would only trigger more disinformation. It can be argued that the Indian government is using the rules as a pretext for cracking down on free speech and silencing opponents, and that its proposed oversight mechanism is nothing more than “censorship” through the back door. These regulations have already forced Amazon Prime and Netflix to change course as the government imposes its own version of what is “objectionable”. Even though India’s mainstream media rarely criticises the government for fear of repercussions, the Editors Guild of India said the rules “fundamentally alter how publishers of news operate over the internet and have the potential to seriously undermine media freedom in India”. From the perspective of Article 19, a human rights organisation with a specific focus on the defence and promotion of freedom of expression worldwide, many of these regulatory models are deeply problematic.
The problem of regulating social media, particularly Twitter, might well become the defining problem of the information society. The debate over regulating Twitter is clouded by appeals to simplistic notions of free speech. What will be the rules that determine what is acceptable and what is not? Moreover, how will courts make these judgments until a body of law exists that addresses the proper use of social media? It is an error to think that Twitter is the source of the problem; it is one cog in the wheel of a much bigger social problem. No one has yet found a comprehensive solution based on a legislative framework, or a way to eradicate the noxious influences of Twitter without also suppressing the very features that make the platform popular.
Against this backdrop, perhaps it is time to think about a global regulatory framework that can be applied to the social objectives of social media. What is needed is a jurisprudence of positive liberties that values the rights of speakers to speak and fosters a Twitter environment in which hate speech is regulated. This kind of jurisprudence must recognise the complex effects of laws that serve to reallocate speech opportunities. A further goal is to develop a legal framework that mandates that Twitter (and other social media platforms) carry out an extensive “risk assessment of their algorithms” to ensure that the algorithms direct users to balanced and fact-checked sources. This would enable people to develop viewpoints[23] and political discourse with institutional guarantees.[24] The Secretary-General of the United Nations has called for global rules to regulate powerful social media companies such as Twitter and Facebook. Indeed, in the absence of any clear principles, fragmented regulation will lead to censorship by authoritarian regimes willing to exploit the weaknesses in disjointed and contradictory regulations, and it may sometimes result in a legal “arms race” or even further conflicts.[25] However, achieving optimal success would be challenging because of the difficulty of reaching an international agreement: such agreements are often unenforceable, and getting key players from different cultures and with different priorities to agree on a standard may be difficult. The Covid-19 pandemic and the Black Lives Matter movement have highlighted that social media companies lack institutional accountability and a due process governing their power to influence and restrict online hate speech. Whichever way we look at this, regulating speech could create “jurisdictional mayhem.” The question, therefore, goes beyond which regulation should apply. If there is any merit in the argument that such changes go beyond the mere adaptation of traditional rules, then it is also arguable that change can only come about through an overarching consideration of policy dynamics.
Acknowledgement
The author would like to thank Garima Saxena, Rajiv Gandhi National University of Law, India, for research assistance in the early stages of this article.
[1] The QAnon conspiracy theory emerged in 2017 and was identified as a possible domestic terrorism threat by the Federal Bureau of Investigation in 2019 because of its potential to incite extremist violence. QAnon-related conspiracy theories have been exported to Latin America and Europe, contributing to anti-lockdown protests in the United Kingdom and Germany.
[2] Facebook banned posts containing the phrase “stop the steal.” https://blog.twitter.com/en_us/topics/company/2021/protecting-the-conversation-following-the-riots-in-washington-.html
[3] Stjernfelt, F., & Lauritzen, A. M. (2020). Facebook and Google as Offices of Censorship. In: Your Post has been Removed. Springer, Cham. https://doi.org/10.1007/978-3-030-25968-6_12
[4] https://www.politico.eu/article/thierry-breton-social-media-capitol-hill-riot/
[5] Ehrlich, P. (2002). Communications Decency Act § 230. Berkeley Technology Law Journal, 17(1), 401-419. http://www.jstor.org/stable/24120113
[6] The law requires only that restrictions be imposed in “good faith.” https://www.eff.org/issues/cda230
[7] Electronic Frontier Foundation. Section 230 of the Communications Decency Act. https://www.eff.org/issues/cda230
[8] Anil Vij, an Indian politician and Cabinet Minister in the Government of Haryana, posted a controversial tweet that unfairly accused an individual of having “seeds of anti-nationalist thought.” Twitter briefly deleted it after a complaint from Germany. Later, the social media giant said the post and complaint had been investigated, and the post was “not subject to removal.” https://www.ndtv.com/india-news/disha-ravi-farmers-protest-meant-anti-national-thought-haryana-minister-anil-vij-explains-destroy-tweet-2371758?pfrom=home-ndtv_topscroll
[9] Floridi, L. (2021). Trump, Parler, and Regulating the Infosphere as Our Commons. Philosophy & Technology. https://doi.org/10.1007/s13347-021-00446-7
[10] In 2020, India and four other countries accounted for 96% of the global legal requests for removing content. The site also received 5,900 requests from the Indian government for information relating to accounts. https://blog.twitter.com/en_us/topics/company/2020/ttr-17.html
[11] Basu, S. (2019). WhatsApp, India’s favourite chat app: A threat to democracy? Risk Group. https://riskgroupllc.com/whatsapp-indias-favourite-chat-app-a-threat-to-democracy/
[12] India launched an unprecedented campaign on Twitter. Nationalist trolls pulled out old images of the assault on Rihanna by her ex-boyfriend and heaped misogynistic abuse on her. The Indian government did not condemn this. https://www.bbc.co.uk/news/world-asia-india-55931894
[13] The Indian government is evidently concerned about the reach of platforms such as Netflix and Amazon Prime because these platforms, perhaps unwittingly, have become spaces for dissent and critiques. Basu, S. (2020). Tanishq Advertisement: Attack on Creative Expression in India. Risk Group. https://riskgroupllc.com/tanishq-advertisement-attack-on-creative-expression-in-india/
[14] The arrest of climate activist Disha Ravi in the sedition case is a disturbing example of misinterpreting the law. The Delhi police were unable to produce any evidence showing that Ms Ravi had collaborated to “spread disaffection against the Indian State”. Additional Sessions Judge Dharmender Rana, while granting bail, said, “Difference of opinion, disagreement, divergence, dissent, or for that matter, even disapprobation, is recognised as legitimate tools to infuse objectivity in state policies. An aware and assertive citizenry, in contradistinction with an indifferent or docile citizenry, is indisputably a sign of a healthy and vibrant democracy.” https://www.livelaw.in/pdf_upload/disha-ravi-bail-order-asj-dharmender-rana-389605.pdf
[15] Twitter had first blocked some 250 accounts in response to a legal notice from the government, citing objections based on public order. However, six hours later, Twitter restored the accounts, citing “insufficient justification” for continuing the suspension. https://blog.twitter.com/en_in/topics/company/2020/twitters-response-indian-government.html
[16] All about Section 69A of IT Act under which Twitter had withheld several posts, accounts. ThePrint. https://theprint.in/theprint-essential/all-about-section-69a-of-it-act-under-which-twitter-had-withheld-several-posts-accounts/597367/
[17] The government is also actively promoting a microblogging app called Koo as a replacement for Twitter. https://theprint.in/opinion/pov/i-spent-48-hours-on-atmanirbhar-bharats-own-koo-heres-what-i-found/604269/
[18] A report published by Varieties of Democracy Institute has said that India’s autocratisation process has “largely followed the typical pattern for countries in the ‘Third Wave’ over the past ten years: a gradual deterioration where freedom of the media, academia, and civil society were curtailed first and to the greatest extent”. https://www.v-dem.net/files/25/DR%202021.pdf
[19] Unlawful Activities (Prevention) Act, amended in 2019, is being used to “harass, intimidate, and imprison political opponents” as well as against people protesting the government’s policies. https://www.v-dem.net/files/25/DR%202021.pdf
[20] Regulating online harms – House of Commons Library. https://commonslibrary.parliament.uk/research-briefings/cbp-8743/
[21] On May 13, 2020, despite strong disagreements between the chambers of the French Parliament, a law was adopted by the National Assembly to tackle the spread of hate speech on the Internet. https://www.euronews.com/2018/11/22/france-passes-controversial-fake-news-law
[22] Germany introduced its Network Enforcement Act in 2017, obliging large social networks to put in place robust reporting systems for certain illegal content. https://www.loc.gov/law/help/social-media-disinformation/compsum.php
[23] Shirky, C. (January/February 2011). The political power of social media: Technology, the public sphere, and political change. Foreign Affairs. https://www.cc.gatech.edu/~beki/cs4001/Shirky.pdf
[24] McNair, B. (2017). An introduction to political communication (6th ed., p. 25). Routledge
[25] See Internet & Jurisdiction Global Status Report 2019, p. 14. https://www.internetjurisdiction.net/uploads/pdfs/GSR2019/Internet-Jurisdiction-Global-Status-Report-2019-Key-Findings_web.pdf