Twitter and Elon Musk: Why free speech absolutism threatens human rights
Any moves by Elon Musk to remove content moderation on Twitter risk breaching corporate human rights obligations. Moderating content is a box that still needs to be ticked.
First published: Nov 2022.
For a man who made a fortune from electric cars, the Twitter takeover has turned into a fairly bumpy ride so far. Soon after buying the social media company for US$44 billion (£38 billion), Elon Musk said he had “no choice” about laying off a large proportion of the company’s staff.
He has already faced a backlash over his move to charge Twitter users a monthly fee for their “blue tick” verified status. And those users should also be concerned about plans from the self-proclaimed “free speech absolutist” to reduce content moderation.
Moderation, the screening and blocking of unacceptable online content, has been in place for as long as the internet has existed. And after becoming an increasingly important and sophisticated feature against a rising tide of hate speech, misinformation and illegal content, it should not be undone lightly.
Anything which weakens filters, allowing more harmful content to reach our screens, could have serious implications for human rights, both online and offline.
For it is not just governments which are responsible for upholding human rights – businesses are too. And when different human rights clash, as they sometimes do, that clash needs to be managed responsibly.
Social media has proved itself to be an extremely powerful way for people around the world to assert their human right to freedom of expression – the freedom to seek, receive and impart all kinds of information and ideas.
But freedom of expression is not without limits. International human rights law prohibits propaganda for war, as well as advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence. It also allows for restrictions necessary to ensure that rights or reputations are respected.
So Twitter, in common with other online platforms, has a responsibility to respect freedom of expression. But equally, it has a responsibility not to allow freedom of expression to override other human rights completely.
After all, harmful online content is often used to restrict the freedom of expression of others. Sometimes, online threats spill over to the offline world and cause irreparable physical and emotional harm.
Any moves to remove content moderation, therefore, risk breaching corporate human rights obligations. Unlimited freedom of expression for some almost inevitably restricts that same freedom for others. And the harm is unlikely to stop even there.
Will Donald Trump get his Twitter account back? | Unsplash / John Cameron
Musk claims that Twitter will now become a more democratic “town square”. But without content moderation, his privately owned version of a town square could become dysfunctional and dangerous.
Twitter – again, like most other social media platforms – has long been linked to overt expressions of racism and misogyny, with a flood of racist tweets even surfacing after Musk closed his deal.
And while Musk reassures us that Twitter will not become a “hellscape”, it is important to remember that content moderation is not the same as censorship. In fact, moderation may facilitate genuine dialogue by cracking down on the spam and toxic talk which often disrupt communication on social media.
Elon Musk said Twitter will not become a “hellscape”. | Flickr / Daniel Oberhaus (2018)
User friendly?
Moderation also offers reassurance. Without it, Twitter risks losing users who may leave for alternative platforms considered safer and a better ideological fit.
Valuable advertisers are also quick to move away from online spaces they consider divisive and risky. General Motors was one of the first big brands to announce a temporary halt on paid advertising on Twitter after Musk took over.
Of course, we do not yet know exactly what Musk’s version of Twitter will eventually look like. But there have been suggestions that content moderation teams may be disbanded in favour of a “moderation council”.
If it is similar to the “oversight board” at Meta (formerly Facebook), content decisions are set to be outsourced to an external party representing diverse views. But if Twitter has less internal control and accountability, harmful content may become a harder beast to tame.
Such abdication of responsibility risks breaching Twitter’s human rights obligations, and having a negative impact both on individuals affected by harmful content, and on the overall approach to human rights adopted by other online platforms.
So as one (extremely) wealthy businessman claims to “free” the blue Twitter bird for the sake of humanity, he also gains commercial control of what has until now been conceived as a relatively democratic social space. What he does next will have serious ramifications for our human rights in a digital age.
Content moderation is by no means a panacea, and the claim that social media platforms are “arbiters of the truth” is problematic for many reasons. Nor should we forget the emotional and psychological toll on human content moderators, who must view “the worst of humanity” to keep it off our screens. Yet the sanitisation of social platforms is not the answer either. The internet is a better place when the most successful platforms engage in human rights-focused screening – for everyone’s benefit.
Sources
- Text: This piece was originally published in The Conversation and re-published in PMP Magazine on 15 November 2022. | The author writes in a personal capacity.
- Cover: Adobe Stock/Koshiro.