“Shoot. Pause. Shoot. Pause. I remember thinking it had to be one shooter,” says Ole Martin Juul Slyngstadli.
The then 17-year-old had been hiding beneath a cliff with a girl whom he had seen get shot four times.
He had wrapped his T-shirt around her chest and placed rocks on the wound to stop the bleeding.
“I remember the shooter saying ‘I’m the police, come out, you’re safe’. One boy walked out and he shot him,” says Slyngstadli, who survived the Norway massacre.
Eight years ago, on July 22, 2011 – a day known as 22/7 – Anders Breivik set off a car bomb at the main office complex of the Norwegian government, leaving eight people dead and several seriously injured.
Less than two hours later, he arrived at the Labour Party’s youth summer camp on the Norwegian island of Utoeya.
In the 72 minutes between the start of the attack and the arrival of police, he killed 69 of the 564 people on the island, mostly teenagers.
On the day of the attack, Breivik sent a 1,500-page manifesto to more than 1,000 email addresses, in which he claimed the Nordic race was being replaced by Muslims and accused Norway’s Labour Party of failing the country on immigration.
Breivik was formally charged in March 2012 with mass murder and terrorism, and was given a maximum prison sentence of 21 years, extendable by five-year increments.
Following the attacks, Jens Stoltenberg, then prime minister and Labour Party leader, allocated more resources to police and security services, and established a special security department in the Ministry of Government Administration and Reform.
But survivors worry that public discussion of the event has been limited – particularly around why Breivik, a Norwegian-born, middle-class citizen, committed the worst atrocity in peacetime Norway.
“We still don’t understand where he came from and why he did this,” says one survivor, Tarjei Jenson Bech.
Breivik was convicted in Norway as a “lone wolf”, but he was not alone.
As early as 2008, he had participated in online circles advocating the forced deportation of migrant communities to create an ethnically and culturally homogeneous society.
Breivik remains one of the “most-referenced individuals in radical right groups online”, amid a rise in far-right hate speech online, says Julia Ebner, a radicalisation expert at the Institute for Strategic Dialogue (ISD).
“They see him as a martyr who sacrificed himself for the same cause.”
A July 2019 study by ISD found that “great replacement”, a conspiracy theory which argues that Europeans are being replaced by Muslims, has gained significant traction across social media.
The number of tweets mentioning the theory nearly tripled in four years from just over 120,000 in 2014 to more than 330,000 in 2018.
The baseless idea was at the centre of Brenton Tarrant’s “manifesto”, which he released minutes before opening fire on two mosques in Christchurch, New Zealand, last March, killing 51 Muslim worshippers. The Australian wrote that he took “true inspiration” from Breivik.
On the day of the attack, Tarrant broadcast the shooting via a Facebook live stream. In the next 24 hours, the video was uploaded at a rate of once per second.
It could still be found on YouTube for as long as eight hours after it was first posted.
According to one report, the footage was still circulating across Facebook and Instagram 36 days after the attack.
It was not until Christchurch that governments and major tech companies, including Google, Facebook, Twitter and Amazon, collaborated to produce a “Call to Action”, committing them to do more to “fight the hatred and extremism that lead to terrorist violence”.
But the US government declined to join, citing the need for “freedom of speech”.
Following the May initiative, tech companies created a nine-step plan to try to counter online “extremism”.
Facebook has permanently banned the accounts of some individuals and groups, including the far-right British National Party in the UK.
YouTube said it launched changes to its recommendations systems in January 2019, which it claims have reduced the spread of harmful misinformation and what it calls “borderline content” by 50 percent.
A 2018 report by the UK advocacy group Hope Not Hate found that while social media companies are increasingly removing leading far-right propagandists from their platforms, there remains an upward trend in online hate.
Russell Foster, a lecturer in British and European integration at King’s College London, said the Call to Action amounts to “nothing” and only “plays into the hands” of the extreme right.
Far-right organisations are adept at portraying themselves as victims of an uncaring, international elite, he said, adding that they exploit the debate around freedom of speech.
“Not only does it not eliminate them, it justifies their sense of [being aggrieved] and pushes them further into underground forums,” says Foster.
ISD writes that although the extreme right traditionally uses larger platforms such as Facebook, YouTube and Twitter to disseminate material to broader audiences, fringe websites remain safe havens for them to “radicalise” people further.
Ebner says policy-makers must focus on algorithms as they haven’t been “tackled anywhere close to the extent [that they should]”.
Algorithms, says Ebner, play an important role in directing users on mainstream social media to more worrying content on smaller platforms.
“The architecture of recommended content is playing into the hands of extremist groups. They automatically have an advantage and it’s really hard to counter that.”
While Ebner says she has flagged dangerous far-right content to tech firms, they often do not respond.
She says this is because most resources of the security forces are still dedicated to combating “jihadist extremism”.
There have been far more policies on the removal of propaganda associated with such groups, in part because they feature on United Nations-designated terrorist lists, she says.
Even though far-right groups are similarly internationally networked, they do not feature on such lists.
As a result, there are more legal grey zones in terms of removing online content by dangerous elements of the far right.
“Far-right groups are a loose network and don’t have the same strategies as some of the defined traditional terrorist groups,” says Ebner.
In her view, the UN may need to revisit its definition of terrorism to incorporate the growing threat of online groups.
Foster’s main concern, meanwhile, is that far-right “extremist” ideas are becoming increasingly popular.
“Traditionally, we used to think that the alt-right were motivated by poverty or sheer racism, by people who felt abandoned – ‘the left behinds’. That’s no longer the case,” he says.
Breivik and Tarrant, economically comfortable, middle-class people with job security, show how the movement has a “broader demographic” than ever, says Foster.
“And what we don’t know is why. That’s what alarms me. Without this, we can’t form a strategy to counteract it.”