
Social media accountability for hate speech


This week, several UN-appointed human rights experts called on the heads of various social media companies to change their business models so as to strengthen accountability for online hate speech. The CEOs of, among others, Twitter, Meta and Apple were called out by name and urged to address discriminatory posts while remaining in line with international standards on freedom of expression, following a human rights-based approach that takes corporate social responsibility into account.

Recent research showed that use of the N-word on Twitter increased by almost 500 percent in the 12 hours following Elon Musk’s takeover. [1] The phenomenon was aggravated by the wave of dismissals the newly appointed CEO carried out among staff holding key positions in human rights protection and AI ethics. This stands in contrast with Musk’s stated reason for acquiring Twitter, namely the defence of free speech. The head of OHCHR, Volker Türk, issued an open letter to Musk reminding him that freedom of expression under human rights law does not cover hatred that incites discrimination or violence. As for Meta, the creation in 2020 of an oversight board was a positive step, yet it remains possible to publish posts promoting conspiracy theories or electoral disinformation, and ‘inflammatory ads’ can still be found on the platform; notably, an independent US research centre demonstrated Meta’s inability to block certain advertisements. Although the numerous recommendations issued in response to appeals brought before the Meta oversight board are encouraging, these are time-consuming procedures that, to become effective, require continuous commitment at the highest level to keep abreast of modern tools for combating online disinformation.

The current business model adopted by social media platforms focuses on capturing users’ attention in order to keep them engaged online: because violent and offensive speech often catches attention, it may become more audible on these platforms. In general terms, as recognised by human rights experts, the way social networks regulate themselves may raise issues of arbitrariness and conflicts with profit interests. [2] In theory, being held accountable for racial justice is part of a business’s social responsibility: it upholds provisions set out in the ICCPR, the CERD, and the UN Guiding Principles on Business and Human Rights. In practice, however, although the major social media companies have adopted standards and policies prohibiting hate speech, there is a gap between announced policies and their enforcement. Because providers may profit from belligerent posts, closing this gap falls to States: where the self-policing methods applied by social media platforms have proven ineffective, government enforcement is urgently needed. Nevertheless, regulation in this area may run into legal barriers when it comes to legislating on media monitoring, especially owing to concerns related to freedom of expression.

The case of the Social Media Hate Speech Accountability Act, a bill amending the general business law passed by the New York State Legislature that entered into force in December 2022, is emblematic. Designed to tackle episodes such as the Buffalo mass shooting, in which the perpetrator promoted and streamed the attack online to spread hateful ideology and amplify the massacre, the bill represents an attempt to address the flaws of social media policies against hate speech by imposing a more consistent and transparent monitoring process on companies operating under New York jurisdiction. The Act requires all social media platforms operating in New York State to create a public channel through which users can report alleged hateful conduct, and obliges providers to maintain an effective procedure for handling complaints: whenever content is deemed to amount to hate speech, defined as public expression that intentionally makes an insulting statement about a targeted group, the provider must remove it or block access to it, or face fines of up to one million dollars. [3] The Act is particularly significant in the US context because Section 230 of the 1996 Communications Decency Act shields social media companies from private litigation, on the basis that providers are not to be treated as the publishers or speakers of information posted on their networks. However, parts of US public opinion and some legal scholars consider the law unconstitutional, an attempt to tilt the balance of rights against free speech in breach of the First Amendment, mainly because hate speech is not a defined category in the US constitutional framework.

Respecting human rights, and thus combating hate speech, should be a long-term interest of both States and social media companies. Hate speech spread through the media has historically resulted in actual harm to targeted groups, as in the case of Radio Mille Collines in Rwanda, where the radio station played a pivotal role in instigating the racial hatred that led to the genocide. In the current context, although not institutionalised in the same way, the threat posed by unmonitored social media posts is that they can reach an extremely vast public and be reposted and thus amplified. Initiatives aimed at tackling this phenomenon should be endorsed rather than criticised, with the international community advocating for improvements in the transparency and accuracy of social media monitoring.

[1] https://twitter.com/ncri_io/status/1586007698910646272?lang=en

[2] https://news.un.org/en/story/2023/01/1132232

[3] https://www.nysenate.gov/legislation/bills/2019/s7275

By The European Institute for International Law and International Relations.
