Facebook set off fear and anger among Thailand’s social media users after its safety check feature warned of a bomb alert in Bangkok.
The check-in feature, which allows users to signal to friends and family that they are safe after an attack or natural disaster, was triggered at around 9pm local time on Tuesday, reporting an explosion in the Thai capital.
Facebook said the alert was activated by an algorithm after reports that a protester had thrown small explosives near Government House, the prime minister’s office, earlier in the day.
Local media had reported the incident, which caused no injuries or damage.
The safety check feature linked to a Bangkok Informer story that in turn referenced BBC coverage of the 2015 Erawan Shrine bombing.
After several people marked themselves as safe, the alert was deactivated at around 10pm, less than an hour after it had been triggered, leaving city residents relieved but frustrated over the false alarm.
“Safety Check was activated yesterday in Thailand following an explosion,” a Facebook spokesperson said in a statement to the AFP news agency, adding that a “trusted third party” had confirmed the incident.
“Facebook issued false news that has destroyed Thailand’s image,” wrote Thai user Prasit Silhanisong.
“It’s close to the New Year and now tourists might not come,” he added, calling on the social media giant to apologise.
It is not the first time Facebook’s check-in feature has drawn anger.
Critics have accused the California-based company of valuing the lives of Western victims more than those in other regions after it activated the feature following attacks in Europe, but not the Middle East.
Last month, Facebook said it would no longer manually trigger its safety check tool; instead, the feature would be activated by an algorithm that monitors posts by community members.
The latest gaffe comes as the company also faces criticism over its fake news problem.
Facebook has been hit by a series of claims that both during and after the US election, its newsfeed algorithm helped spread misinformation about presidential candidates Donald Trump and Hillary Clinton.
Earlier this year, the tech website Gizmodo reported that Facebook had routinely suppressed conservative news in the platform's trending topics section.
And in September, editors from two Palestinian news publications told Al Jazeera that Facebook had suspended their accounts because of an agreement between the company and the Israeli government aimed at tackling “incitement”.