
How Ofcom is building our evidence base around online fraud and illegal harms

In preparation for our new online safety powers, we have been developing our understanding of fraud enabled by user-generated content and of other illegal harms.

The Online Safety Bill is currently at Committee Stage in the House of Lords, where many details and proposed amendments are still being debated. But we know the UK Government proposes that the Bill include ‘priority’ criminal offences, which regulated services will have to consider as part of their overall safety duties.

To prepare for our new duties as the online safety regulator, we’ve been building our evidence base to inform our policy thinking in this area. Last week we published two reports commissioned from the Accelerated Capability Environment (ACE): User-generated content-enabled frauds and scams (PDF, 274.8 KB) and Mitigating illegal harms: a snapshot (PDF, 429.0 KB).

ACE looked at some of the types of harm we expect to be caused by priority offences:

  • online fraud and financial crimes;
  • illegal immigration and human trafficking;
  • the promotion of suicide and self-harm; and
  • the sale of illegal drugs, psychoactive substances and weapons.

As part of the research, ACE interviewed 15 tech platforms and spoke to civil society organisations and other experts to understand what is already being done to mitigate these harms, how effective this is, and what could be improved.

What the interviews discussed – in summary

The interviews covered a number of issues.

  • Most platforms say that mitigating the risk of fraud enabled by user-generated content on their service is a priority. However, once a fraudster has made initial contact with a victim on a platform, payment usually takes place off-platform.
  • Account verification differs from one platform to the next: some, such as online marketplaces, mandate two-factor authentication, while others simply ask for an email address. Some services are reluctant to introduce more robust verification measures (for example, anything beyond email verification), believing users would find them intrusive and that they would harm the experience of using the service.
  • Some industry respondents highlighted the value of combining cross-platform identification of bad actors with data about suspicious patterns of behaviour to identify fraudulent activity more accurately. Some platforms said they would welcome guidance on sharing such data (in compliance with GDPR) to support the implementation of behaviour-based mitigations.
  • Some services find it difficult to create moderation policies, such as those governing the sale of weapons and knives, that are specific enough to work across different jurisdictions.
  • The research suggests that platforms have done more work on some harm areas than others.
  • Platforms say livestreaming and other ephemeral content present moderation challenges distinct from those posed by other types of content.
  • Platforms and services said they could tackle some of these harms more effectively with more consistent knowledge-sharing between industry peers, including pooling knowledge of emerging threats and trends.

We intend to publish our first consultation document on these harms soon after the Online Safety Bill receives Royal Assent and our powers commence.

Original article link: https://www.ofcom.org.uk/news-centre/2023/building-evidence-base-around-online-fraud-and-illegal-harms
