Threat to democracy? Tech CEOs in hot seat over liability shield
The CEOs of Twitter, Facebook and Alphabet Inc are testifying before a United States Senate committee on Wednesday over the protections they have under the Communications Decency Act.
Let the grilling begin.
The CEOs of Twitter, Facebook and Alphabet Inc are set to testify virtually before the United States Senate Committee on Commerce, Science and Transportation on Wednesday over whether to repeal a section of the 1996 Communications Decency Act that shields them from legal liability over content that users post on their platforms.
Facebook’s Mark Zuckerberg, Twitter’s Jack Dorsey and Alphabet’s Sundar Pichai are landing in the hot seat less than a week before the US presidential election, which has been rife with reports of online interference and disinformation. Republican lawmakers have also accused the social media platforms of suppressing conservative viewpoints.
Here’s what you need to know about the law that shields the tech giants – and the role it plays in freedom of speech and expression online.
Sooo … what’s this law and who wants to change it?
The law in question is the Communications Decency Act of 1996, but it’s one section of it – Section 230 to be precise – that some Republican and Democratic lawmakers would like to change.
What is it about Section 230 that’s so controversial?
That section states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
Okay, what does that mean in plain English?
It means that social media giants can’t be held legally responsible for objectionable words, photos or videos that people post to their platforms.
And what does that mean in practice?
In practice, it means, for example, that Yelp can’t be sued by a restaurant over a disgruntled diner’s review claiming the place has rats, and that Twitter isn’t responsible for a tweet in which someone erroneously claims to have discovered Mars.
So absolutely anything goes? No matter how awful?
There are exceptions to what’s protected – such as posts that violate criminal laws around child pornography, for example, or copyright and intellectual property statutes.
So why did lawmakers give social media a free pass when the law was passed?
They didn’t. Social media didn’t exist when the law was passed in 1996. But bloggers did. The law was written largely to shield internet service providers from liability for what bloggers published, and, in turn, to shield bloggers from liability for what people wrote in comments or guest posts on their sites.
I was born in 1996. I grew up, so why hasn’t the law?
Because technology evolves at a much faster rate than the law. When Facebook and Twitter came on the scene in the mid-2000s, Section 230’s protections extended to them as well. Of course, the amount of user-generated content online has grown exponentially since 1996 – even your parents are on Facebook now.
So why bother with testimony? Why not just change the law?
Because not everyone agrees it needs to be changed.
The non-profit Electronic Frontier Foundation says that Section 230 “creates a broad protection that has allowed innovation and free speech online to flourish”. And tech companies argue that they can’t possibly police billions of posts by users around the world without curtailing some users’ freedoms.
How does Section 230 protect free speech?
Those who want to keep the section intact argue that if big tech companies can be held legally responsible for every tweet, post and review that users write on their sites, they could choose to limit what users could publish on their platforms – which would be tantamount to censorship.
Section 230 also gives smaller websites the ability to post different viewpoints without risking being sued, say supporters.
But isn’t curbing free speech bad for democracy?
It is. But free speech is also exploited for nefarious purposes. US intelligence agencies claim foreign governments including those of Russia, China and Iran have been actively using social media to spread disinformation and stoke fear during the 2020 US presidential election. And that is, well, bad for democracy.
But don’t the social media giants have people policing content?
Facebook, Twitter and Alphabet Inc’s Google, which also owns video platform YouTube, have teams of people dedicated to taking down offensive content, like hate speech.
But critics argue that self-policing, especially where disinformation that damages democracy is concerned, just isn’t cutting it.
Recently, Twitter and Facebook came under scrutiny for taking down a New York Post story based on unverified emails that claimed that US Democratic presidential candidate Joe Biden’s son, Hunter, agreed to introduce his father to a Ukrainian energy executive while the elder Biden was vice president.
Twitter’s chief Jack Dorsey later said it was “wrong” to block URLs to the Post’s story without explaining to users why it had been done.
Did the story die?
The story actually ended up being widely shared after US President Donald Trump accused the platforms of “trying to protect Biden” when Twitter prevented users from sharing the story and Facebook attempted to limit its reach.
According to an Axios analysis of data from NewsWhip, a site that tracks stories’ engagement, the New York Post story received 2.59 million likes, comments and shares – more than double the next biggest story about either Trump or Biden that week. So neither platform succeeded in containing its spread.
So what does Trump think of Section 230?
On May 28, Trump issued an executive order that attempted to limit the protections big tech companies enjoy under Section 230 – an order that was quickly challenged in court. The order accused online platforms of “engaging in selective censorship that is harming our national discourse” and of censoring conservative voices.
Biden argued Section 230 “should be revoked” in an interview with The New York Times in January, saying that Facebook “is not merely an internet company. It is propagating falsehoods they know to be false, and we should be setting standards not unlike the Europeans are doing relative to privacy.”
Wow. Is it just political interference we’re worried about here?
Public health is also a concern. The director-general of the World Health Organization warned in September that “rumours, untruths and disinformation” spread by social media are hindering the global fight against COVID-19.
A rumour that drinking methanol – a highly toxic alcohol – could kill the coronavirus, for example, was linked to the deaths of 800 people and the hospitalisations of 5,876 over the first three months of 2020, according to a study published earlier this month in the American Journal of Tropical Medicine and Hygiene.
Is this the only beef that lawmakers have with big tech?
Hardly. Lawmakers on both sides of the aisle have accused tech giants of monopolising the market – driving wages down in the tech industry and stifling innovation.
And while many users view these platforms as “free” because they don’t charge a fee, it’s users’ data – and the ability to sell that data or make money off of its insights – that keeps them in business, raising privacy concerns.
So what happens next?
Attempts to repeal Section 230 are just one of several ongoing battles facing big tech. The US Department of Justice has also filed an antitrust lawsuit against Alphabet Inc, accusing the company of using Google’s search engine dominance to quash competition and thwart innovation.
Zuckerberg and Dorsey are also due back before Congress on November 17 to specifically face questions over their handling of the New York Post story about Hunter Biden after Republicans on the Senate Judiciary Committee accused them of censoring conservative viewpoints.