Why Elon Musk’s Twitter might be (more) lethal
Just before he clinched a deal to buy Twitter for about $44bn last week, Elon Musk suggested that he might jettison the platform’s limits on speech, including its rules against “hateful content” and abuse. “Is someone you don’t like allowed to say something you don’t like?” he asked sardonically. “It’s damn annoying, but that is the sign of a healthy, functioning free speech situation.”
Musk didn’t specify which kinds of speech he meant, but he gave us an idea two weeks later by retweeting far-right critiques of two of Twitter’s executives. One contained images of Vijaya Gadde, the company’s general counsel and chief of content moderation, with text implying that she was biased against conservatives.
Some of Musk’s more than 84 million Twitter followers piled on, directing vitriol and threats at Gadde. Many of the hostile posts referred to her Indian background. Musk also replied approvingly to a tweet in which the far-right conspiracy theorist Mike Cernovich accused Twitter’s deputy general counsel of facilitating fraud.
Though Musk’s followers seemed to love those posts, others found them much worse than “damn annoying”. Dick Costolo, the company’s CEO from 2010 to 2015, tweeted after Musk derided Gadde: “What’s going on? You’re making an executive at the company you just bought the target of harassment and threats.”
Yet Costolo is no stranger to the say-what-you-like credo. About a decade ago, he began referring to the platform as the “free speech wing of the free speech party”, meaning that Twitter wanted to let people post as they wished and avoided removing tweets or accounts.
In subsequent years, Twitter backed away from that hands-off position as users protested at how viciously people were being attacked on the platform. Among countless examples: in 2014, Robin Williams’s 25-year-old daughter, Zelda, received graphic, horrifying tweets blaming her for his suicide.
During that period, Twitter staff began rewriting the platform’s rules to prohibit “targeted harassment” and “glorification of violence”. It was under the latter that former President Donald Trump was finally removed from the platform, after the January 6, 2021 riot at the US Capitol.
Still, Twitter continues to be used to attack individuals across the globe. And there’s another ocean of harmful content: frightening, usually false information about groups of people.
I study this sort of content, which I have dubbed “dangerous speech,” for its capacity to inspire violence between groups of people. Dangerous speech forms a category because it is strikingly similar from one case to another, across languages, cultures, and even time periods. At the Dangerous Speech Project, an independent research team I founded, we have identified innumerable examples, many of which have been followed by killings and even genocide.
We seek the best ways to blunt the power of dangerous speech, online and offline. Musk taking over Twitter doesn’t look like one of them, to say the least.
Our team has found that when incendiary content goes viral, it can easily reach susceptible audiences who are open to committing violence offline. On Twitter, a frightening message can be boiled down into a memorable hashtag. In the wake of such hashtags and the tweets that carry them, people around the world have been tortured and killed.
In the early days of the COVID-19 pandemic, for example, rumours spread on Twitter and other media in India that Muslims in the country were planning to infect Hindus with the virus. In just over a week, nearly 300,000 tweets carrying the hashtag “CoronaJihad” were viewed 165 million times, according to Equality Labs, a digital human rights group. This led to multiple attacks.
On April 5, 2020, a Hindu mob dragged villager Mehboob Ali to a field in northern India and beat him severely with shoes and sticks. They accused him of trying to spread coronavirus because he had recently attended a Muslim religious gathering. His attackers demanded to know who else formed part of the conspiracy. When he finally made it to a hospital to seek treatment for his wounds, Ali was isolated as a “corona suspect”. A few days later, two babies died when Indian hospitals refused to admit their labouring mothers because they were Muslim and therefore accused of spreading the virus. Such accusations persisted in India and led to the torching of 45 Muslim households near Kolkata in May.
Tweets that inspire aggression also often come from powerful government officials, which magnifies their influence. During the early stage of the pandemic, a member of the Kenyan parliament tweeted that people should stone a group of Chinese people who had allegedly violated quarantine. In Brazil, President Jair Bolsonaro and his son have repeatedly posted suggestions that LGBTQ people are paedophiles.
None of these posts were taken down under Twitter’s existing rules. Under Musk, the rules are likely to loosen. Pressed on his plans, Musk has elaborated (on Twitter, of course) that “by free speech, I simply mean that which matches the law”. But laws on freedom of speech vary greatly from country to country, and many governments enforce them selectively, to their own benefit.
In practice, the relevant law for Twitter will be the platform’s own rules, or rather, the Musk rules, if his deal goes through. Then we must hope that Musk learns – and chooses to care – about speech that is worse than damn annoying.
The views expressed in this article are the author’s own and do not necessarily reflect Al Jazeera’s editorial stance.