Twitter’s new crisis misinformation policy: All you need to know

Twitter has announced a new plan to fight false claims in times of crisis, including war, health emergencies, and big natural disasters.

Twitter said it will require verification from multiple credible and publicly available sources to determine what is and isn't misinformation [File: AP Photo]

Twitter has decided to launch a new policy that aims to step up its fight against misinformation and the spread of false allegations on the social media platform during times of crisis.

Here are the changes for users of the platform:

What does the new policy entail?

Twitter defines a crisis as “situations in which there is a widespread threat to life, physical safety, health, or basic subsistence”.

Under the new policy, Twitter will add warning labels to debunk false claims about crises, and users will not be able to like, retweet, or reply to posts that violate the new rules.

Some of the tweets that could end up with a warning notice include those that falsely report on events, include false allegations involving weapons or use of force, or spread broader misinformation regarding atrocities or international responses.

Twitter said it will require verification from multiple credible and publicly available sources to determine what is misinformation and what is not.

The platform will start with tweets and information concerning the Russian invasion of Ukraine, but it will expand to include situations of armed conflict, health emergencies, and large-scale natural disasters.

Initially, misleading content regarding the war in Ukraine will be targeted to limit the spread of claims debunked by humanitarian groups or other credible sources.

The policy was announced on Thursday, but according to Yoel Roth, Twitter’s head of safety and integrity, the platform has been developing the framework since last year.

How will it show on Twitter?

Tweets containing content that violates the new policy will be placed behind a warning notice. The tweets will not be deleted, but users will have to click through the notice, which links to more details about the crisis misinformation policy, before the tweet is displayed.

The new warning notices will alert users that a tweet has violated Twitter’s rules, but will still allow people to view and comment. The platform will not amplify or recommend such tweets and retweeting will also be disabled.

The company will prioritise adding labels to misleading tweets from high-profile accounts such as verified users or official government profiles. It will also prioritise content that could cause harm to people on the ground.

The policy complements previous rules that forbid digitally manipulated media, false claims about elections and voting, and health misinformation.

Why is this taking place now?

These changes come as social media platforms continue to struggle with propaganda and misinformation since Russia’s invasion of Ukraine started.

False claims range from rumours to propaganda amplified by political actors or networks.

“We have seen both sides share information that may be misleading and/or deceptive,” said Roth, who detailed the new policy for reporters.

“Our policy doesn’t draw a distinction between the different combatants. Instead, we’re focusing on misinformation that could be dangerous, regardless of where it comes from.”

What challenges could it face?

The new rules could clash with the views of Elon Musk, who is negotiating a deal to acquire the platform with the aim of making it a haven for “free speech”.

Musk has frequently argued that Twitter’s content moderators intervene too much on the platform, and has said it should remove only posts that violate the law.

He has also previously criticised the algorithms used by the platform.

The new approach could be “a more effective way to intervene to prevent harm, while still preserving and protecting speech on Twitter”, said Roth.

Source: Al Jazeera