Crossing the line: Seven in ten Premier League footballers face Twitter abuse

2 Aug 2022 03:04 PM

As the new season warms up for kick-off, Ofcom reveals the scale of personal attacks suffered by Premier League footballers every day on Twitter, and sets out what must be done collectively to tackle the issue.

Ofcom, which is preparing to regulate tech giants under new online safety laws, teamed up with The Alan Turing Institute to analyse more than 2.3 million tweets directed at Premier League footballers over the first five months of the 2021/22 season.[1]

For the study, researchers developed new machine-learning technology that can automatically assess whether tweets are abusive.[2] A team of experts also manually reviewed a random sample of 3,000 tweets.[3]
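To illustrate the general approach, the sketch below shows a minimal supervised text classifier in Python. It is not the model built for this study, only a hedged illustration of the train-then-predict workflow: the handful of example tweets and labels are invented placeholders for a properly annotated dataset, and real systems of this kind rely on far larger labelled samples and more powerful language models.

```python
# Minimal, illustrative sketch of a supervised abuse classifier.
# NOT the Ofcom / Alan Turing Institute model; the training examples
# below are invented placeholders standing in for a real annotated dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny stand-in training set: (tweet text, label) pairs.
train_texts = [
    "What a performance today, absolutely brilliant!",   # positive
    "Poor first touch all game, needs to work on it.",   # critical
    "You are a disgrace, get out of our club.",           # abusive
    "Kick-off moved to 17:30 on Saturday.",               # neutral
]
train_labels = ["positive", "critical", "abusive", "neutral"]

# Bag-of-words TF-IDF features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

# Score new tweets; at scale this would run over millions of tweets,
# with flagged items passed to human reviewers.
new_tweets = ["Great goal, well deserved win!", "Useless, you embarrass the shirt."]
print(list(zip(new_tweets, model.predict(new_tweets))))
```

TF-IDF with logistic regression is a deliberately simple baseline chosen to keep the example self-contained; transformer-based classifiers are the more common choice for abuse detection today.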

What we found

In a separate poll, we also asked the public about their experiences of seeing players targeted online. More than a quarter of teens and adults who go online (27%) saw online abuse directed at a footballer last season. This rises to more than a third of fans who follow football (37%) – and is higher still among fans of the women’s game (42%).

Among those who came across abuse, more than half (51%) said they found the content extremely offensive, yet a significant proportion (30%) took no action in response. Only around one in four (26%) used flagging and reporting tools to report the abusive content to the platform, or marked it as junk.

Ofcom is holding an event today (2 August), hosted by broadcast journalist and BT Sport presenter Jules Breach, to discuss these findings.

What needs to be done

These findings shed light on a dark side to the beautiful game. Online abuse has no place in sport, nor in wider society, and tackling it requires a team effort.

Social media firms needn’t wait for new laws to make their sites and apps safer for users. When we become the regulator for online safety, tech companies will have to be really open about the steps they’re taking to protect users. We will expect them to design their services with safety in mind.

Supporters can also play a positive role in protecting the game they love. Our research shows the vast majority of online fans behave responsibly, and as the new season kicks off we’re asking them to report unacceptable, abusive posts whenever they see them.

Kevin Bakhurst, Ofcom’s Group Director for Broadcasting and Online Content 

These stark findings uncover the extent to which footballers are subjected to vile abuse across social media. Prominent players receive messages from thousands of accounts daily on some platforms, and it wouldn’t have been possible to find all the abuse without these innovative AI techniques.

While tackling online abuse is difficult, we can’t leave it unchallenged. More must be done to stop the worst forms of content to ensure that players can do their job without being subjected to abuse.

Dr Bertie Vidgen, lead author of the report and Head of Online Safety at The Alan Turing Institute 

What will online safety laws mean?

The UK is set to introduce new laws aimed at making online users safer, while preserving freedom of expression. The Online Safety Bill will introduce rules for sites and apps such as social media, search engines and messaging platforms – as well as other services that people use to share content online.

The Bill does not give Ofcom a role in handling complaints about individual pieces of content. The Government recognises – and we agree – that the sheer volume of online content would make that impractical. Rather than focusing on the symptoms of online harm, we will tackle the causes – by ensuring companies design their services with safety in mind from the start. We will examine whether companies are doing enough to protect their users from illegal content, as well as content that is harmful to children.

Notes to editors

  1. From the start of the 2021/2022 season (13 August 2021) to the winter break (24 January 2022).
  2. The Alan Turing Institute is the UK’s national institute for data science and artificial intelligence. The Institute is named in honour of Alan Turing, whose pioneering work in theoretical and applied mathematics, engineering and computing is considered to have laid the foundations for modern-day data science and artificial intelligence. The Institute’s goals are to undertake world-class research in data science and artificial intelligence; apply its research to real-world problems, driving economic impact and societal good; lead the training of a new generation of scientists; and shape the public conversation around data and algorithms. Part of The Alan Turing Institute’s Public Policy Programme, the Online Safety Team provides objective, evidence-driven insight into online safety, supporting the work of policymakers and regulators, informing civic discourse and extending academic knowledge. They are working to tackle online hate, harassment, extremism and mis/disinformation. The AI model used to identify the abusive tweets was developed as part of The Alan Turing Institute’s Online Harms Observatory, led by their Online Safety Team.
  3. Online abuse is a problem across platforms, and this research is not intended as a reflection, or commentary, on Twitter’s trust and safety practices. We chose Twitter for this study because it is a widely used platform on which many Premier League football players are active; because several players have reported being abused on Twitter before, such as during the Euro 2020 finals; and because, unlike most platforms, Twitter makes data available for academic research.
  4. Definitions of positive, neutral, critical and abusive tweets (an illustrative sketch follows this list): 
    • Abusive: The tweet threatens, insults, derogates, dehumanises, mocks or belittles a player. This can be implicit or explicit, and includes attacks against their identity. It includes the use of slurs, negative stereotypes and excessive profanity.
    • Critical: The tweet makes a substantive criticism of a player’s actions, either on the pitch or off it. This includes critiquing their skills, their attitude and their values. Criticism is typically less aggressive and emotive than abuse.
    • Positive: The tweet supports, praises or encourages the player. It includes expressing admiration for a player and their performance, and wishing them well.
    • Neutral: The tweet does not fall into the other categories. It does not express a clear stance. Neutral statements include unemotive factual statements and descriptions of events.
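As a purely illustrative aside, the sketch below shows one way these four labels could be represented and tallied in Python when working with an annotated sample of tweets. The tweet IDs and labels are invented and do not come from the study or its data.

```python
# Illustrative only: representing the four tweet categories used in the
# report and tallying a hand-labelled sample. The labels below are invented.
from collections import Counter
from enum import Enum

class TweetCategory(Enum):
    POSITIVE = "positive"   # supports, praises or encourages the player
    NEUTRAL = "neutral"     # no clear stance; unemotive factual statements
    CRITICAL = "critical"   # substantive criticism of actions on or off the pitch
    ABUSIVE = "abusive"     # threatens, insults, derogates or dehumanises

# A hypothetical annotated sample (tweet ID -> category).
sample_labels = {
    101: TweetCategory.POSITIVE,
    102: TweetCategory.ABUSIVE,
    103: TweetCategory.NEUTRAL,
    104: TweetCategory.CRITICAL,
    105: TweetCategory.POSITIVE,
}

# Share of each category in the sample.
counts = Counter(sample_labels.values())
total = sum(counts.values())
for category in TweetCategory:
    share = counts[category] / total
    print(f"{category.value}: {counts[category]} ({share:.0%})")
```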