Young Britons exposed to online radicalisation following Hamas attack


By News Desk


Published: 06/01/2024

There has been an unprecedented 12-fold increase in hateful social media content being referred to specialist police officers since Hamas attacked Israel on 7 October, according to the UK's Counter Terrorism Internet Referral Unit.

The unit once focused on propaganda shared by the Islamic State group (IS) and the online fallout that followed UK-based attacks. Much of its focus has now shifted to assessing whether hateful and extreme social media posts breach anti-terror legislation.

The team says it has received more than 2,700 referrals from the public - shared via an online form - since Hamas attacked Israel, and Israel launched waves of air strikes on the Gaza Strip in response.

It is an intensification of hate that leaves young Britons increasingly exposed to radicalisation by algorithm.

The OceanNewsUK was given exclusive access to the team's work. Officers told me the referrals are mainly of antisemitic content posted and shared by young Britons who have not been on their radar before.

They described a real "intensification" in hate, especially from "youngsters" behaving in what they describe as a reckless way online.

One said the period from 7 October stands out because of the "sustained volume" of content. The OceanNewsUK is not naming the officers because of the nature of their work.

Their overall boss, Matt Jukes, head of Counter Terror Policing, fears that while his team is tasked with tackling the most extreme content, there is a failure by social media companies to deal with the "overall climate" of hate created by algorithms.

Algorithms are recommendation systems that promote new content to a user based on posts they engage with. That means they can drive some people to more extreme ideas related to posts they've shown an interest in, which they might otherwise never have encountered - rather than simply reflecting their existing views.
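A minimal sketch helps show what that means in practice. The Python below is illustrative only: the Post type and scoring rule are my own assumptions, not how TikTok, X or Meta actually rank content. It shows how ranking candidates purely by similarity to what a user already engages with keeps surfacing the same topic, and can gradually favour harder-edged posts.

# Hypothetical sketch of an engagement-driven recommender, for illustration only.
# The Post type and scoring rule are assumptions, not any platform's real system.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: int
    topic: str
    intensity: float  # 0.0 = mild, 1.0 = extreme framing of the topic

def recommend(engaged_with: list[Post], candidates: list[Post], top_n: int = 3) -> list[Post]:
    """Rank candidate posts by how closely they match what the user already engages with."""
    if not engaged_with:
        return candidates[:top_n]
    engaged_topics = {p.topic for p in engaged_with}
    avg_intensity = sum(p.intensity for p in engaged_with) / len(engaged_with)

    def score(post: Post) -> float:
        topic_match = 1.0 if post.topic in engaged_topics else 0.0
        # Posts somewhat more intense than the user's current average still score well,
        # which is how engagement-led ranking can drift a feed towards harder-edged content.
        novelty_pull = max(0.0, post.intensity - avg_intensity)
        return topic_match + 0.5 * novelty_pull

    return sorted(candidates, key=score, reverse=True)[:top_n]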

"The people who in the past needed to seek this material out are getting it pushed to them," says Jukes. "[Before] you had to go to a place or sites and forums and now material which certainly might meet a definition of hateful extremism is being driven to them."

'Reckless and emotional' posts

Members of the public who submit posts to the team have little idea of what happens next.

I'm told the team reviews posts and online material to decide whether they could be in breach of UK terrorism or other laws.

They are looking out for the most extreme posts and ones that are shared repeatedly, rather than posted "in isolation". The focus is on terrorism-related content that could lead to violence offline or risk radicalising other people into terror ideologies on social media.

Right now, that includes posts demonstrating "expression of support for Hamas" and glorifying or showing support for acts of terror, one of the officers tells me.

They show me several different screen grabs - with handles blurred - from X, TikTok and a messaging channel. They include messages of support for Hamas and requests for funds to travel to join the group, which is considered a terrorist organisation by the UK and other governments. There are also hateful posts directed at Jewish people.

"The platforms people are using are X, Instagram and TikTok. A lot of the posts are text-based," one officers says. "Posts are often reckless, reactive and emotional - made by youngsters very comfortable using these social media sites."

What stands out, they say, is how many of the profiles have never posted this type of content before. They believe unsuspecting people are becoming "swept up" in sharing "naked antisemitism".

The profile of those posting in this way appears to be mixed, skewing younger but otherwise from a range of places and backgrounds.

"There's been much more antisemitic material [referred to us] than Islamophobia material. It's quite marked," another officer says. "We've had material in from far-right groups, which has tended to be very pro-Israel."

I've spotted and investigated Islamophobic and racist posts on social media since 7 October - including AI-generated clips - from accounts that oppose the pro-Palestinian movement, as well as antisemitic abuse coming from anti-Israel accounts. What I've been seeing online matches up with what several human rights groups and campaigners have said about a recent rise in both Islamophobic and antisemitic hate on social media.

In 2017, there was a spike in content online glorifying a series of terror attacks in the UK, including at London Bridge, Manchester Arena, in Westminster, and at a mosque in London's Finsbury Park. But officers say referrals since the latest Israel-Gaza War began have been "far more sustained" than all of that put together, and the conversation has generated lots more "heat".

So far, they say they have identified 630 cases as possibly being in breach of terror or hate crime legislation.

I'm told 150 of those cases have been passed on for further police investigation or action. That includes about 10 being passed to investigation teams within the Met's counter terrorism branch, while others have been passed on to local police forces or regional counter terrorism units.

The officers say that TikTok, X and Meta, which owns Instagram and Facebook, have been cooperative and quick to remove the most extreme content they flag. However, they say it's been trickier with more borderline posts, where it's unclear whether they're in breach of the social media sites' guidelines.

"People are saying some pretty horrible things. But a lot of what we're dealing with sits right on the threshold," one officer says.

"You've got this space where there might be content and commentary and material that is very unpalatable. At which point does that tip into a criminal space? It's this team who are having to make these judgments."

TikTok says that since 7 October it "has mobilised significant resources to help maintain the safety of our community". The company has said several times that it stands firmly against hateful behaviour and hate speech, and continues to "invest in new ways to diversify recommendations and interrupt repetitive patterns".

Meta, which owns Instagram, outlines in its "community standards" how it uses a mix of "automated technology and human review to identify and remove content" that violates its guidelines, including content "that attacks people based on their protected characteristics - which includes their religion, ethnicity or national origin". The social media company has also said: "We continue to remove any imagery that is produced by a Dangerous Organization or Individual, unless it is clear that the user is sharing it in a news reporting or condemnation context."

X did not reply to the OceanNewsUK's request for comment.

Algorithms - a tool for radicalisation?

So what about all of the hate that sits in the middle? It's not extreme enough to be illegal, but it still poisons the public discourse and risks pushing some people further towards extremes.

"What's felt extraordinary from the last period is this severely polarised online environment," says Matt Jukes.

"People are validated by what they see online. Lots of other people in their echo chamber feel the same."

That risk is perhaps elevated for younger people, who turn to social media more than ever before for updates. There's a benefit to that: they're arguably better connected than ever before, they're exposed to alternative viewpoints and content, and a lot of them are more engaged and energised. For some, though, it carries the risk of encountering extremist content.

Responsibility for dealing with hateful posts - as of now - lies with the social media companies. It also lies, to some extent, with policy makers looking to regulate the sites, and users themselves.

New legislation like the Online Safety Act does force the social media companies to take responsibility for illegal content, too.

But the biggest question remains over how to deal with algorithms that stand accused of effectively pushing people towards hate, and normalising harmful rhetoric.

"It seems extraordinary to me that we have this convergence of terrorism risks, experience of hate crimes online and offline and the interest in this conflict from state actors as we go into the year ahead," says Jukes.

These polarised and toxic conversations - which don't cross any legal threshold - risk having a serious impact on public discourse. Not just in relation to this war as it rages on, but on the many elections happening across the world this year.
