One year on from the Online Safety Act - techUK members are introducing new features and product changes to create a safer internet

25 Oct 2024 10:09 AM

In the year since the Online Safety Act received Royal Assent, techUK members have made wide-ranging product updates and introduced new safety features to improve their services and better serve users. 

The passing of the Online Safety Act after years of debate, both inside Parliament and out, represented a significant moment for the regulation of online services.  

The Act gave broad new powers to Ofcom and created new duties for online service providers with the aim of creating a safer online experience for UK internet users. 

The Act is an incredibly complex piece of legislation, and one that is the product of significant time and effort on the part of Government, officials, and regulators. These efforts have continued at pace as implementation progresses. 

The tech industry, however, is not simply waiting for the Act’s implementation milestones before getting to work: it has already moved to innovate, developing new technologies and making changes to services to better protect users. 

Industry action 

techUK members are already taking their own steps to deliver on the ultimate goal of the legislation: to make the online world safer, more user-friendly, and more reliable. 

As well as the changes being made by platforms, techUK members are also developing a raft of new safety tech solutions. A recent report from the Department for Science, Innovation and Technology noted that the UK safety tech sector could exceed £1 billion in turnover by 2025/26. With the UK already a global leader in this space, techUK’s members are playing a significant role in the development of these new technologies. 

Across a range of areas, from fraud to misinformation and deepfakes, techUK members are operating at the forefront of online safety regulation, often grappling with some of the most difficult and sensitive issues as they develop new ways to protect users and build trust online. 

Staying safe online 

Over the past year, YouTube has collaborated with the independent experts on its Youth and Families Advisory Committee to enhance content recommendation safeguards for teen users. This includes limiting repeated recommendations of sensitive videos, such as those comparing physical features, idealising body types, or displaying intimidation.  

YouTube has also expanded its crisis resource panels to interrupt users searching for topics related to suicide, self-harm, or eating disorders. These panels prompt users to pause and slow down in moments of distress, redirecting them towards helpful resources. 

To incorporate the experience of younger users, TikTok has launched its Youth Council, a new initiative delivered in partnership with specialist online safety agency Praesidio Safeguarding and made up of young people aged 15 to 18 representing a range of communities and countries. 

The platform also operates private-by-default accounts for younger users, prohibiting under-18s from livestreaming and under-16s from sending or receiving private messages (DMs). It has previously introduced a 60-minute daily screen-time limit for young users and has committed to investing $2 billion in trust and safety in 2024. 

Earlier this year, Meta also announced Teen Accounts for young Instagram users, featuring built-in protections that limit the content they can see and restrict who can contact them. All teens using Instagram in the UK are being automatically placed into Teen Accounts, which are private by default, and teens under 16 will need a parent’s permission to make any of these settings less strict. 

New tech innovations 

To keep pace with other requirements in the Act, age verification providers such as Yoti have been partnering closely with social media platforms and other organisations to ensure user safety. Yoti has partnered with Meta to help Instagram verify the ages of its users, using facial analysis and trained AI to estimate ages whilst protecting privacy. Yoti’s facial age estimation technology has performed over 570 million checks worldwide and is being used by a range of businesses and industries around the world, including social media, gaming and age-restricted e-commerce. 

The same technology has been used by Avakin Life, a 3D life-simulation game, to protect users. Players who verify that they are over 18 using Yoti’s age assurance technology can unlock additional content, including chats with fewer restrictions and in-game spaces not accessible to unverified users. This ensures that players over the age of 18 can play with confidence alongside other adult players, while also enhancing safety and the player experience for younger audiences. 
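To illustrate how an estimation-based gate differs from document-based identity verification, the short sketch below applies a buffer above the legal threshold so that only clear-cut estimates are decided automatically, with borderline cases escalated to a stronger check. The function name, thresholds and return values are illustrative assumptions, not Yoti’s actual API or policy.

```python
# Illustrative sketch only: names and thresholds are assumptions,
# not Yoti's actual API or policy.

AGE_THRESHOLD = 18     # legal threshold for the restricted feature
ESTIMATION_BUFFER = 5  # margin to absorb the model's estimation error

def decide_access(estimated_age: float) -> str:
    """Gate access on a facial age *estimate* rather than an identity document.

    Because an estimate carries a margin of error, only users comfortably
    above (or below) the threshold are decided on the estimate alone;
    borderline cases are routed to a stronger check, such as document
    verification. No identity data needs to be retained for this decision.
    """
    if estimated_age >= AGE_THRESHOLD + ESTIMATION_BUFFER:
        return "allow"   # clearly above threshold even allowing for error
    if estimated_age < AGE_THRESHOLD - ESTIMATION_BUFFER:
        return "deny"    # clearly below threshold
    return "verify"      # borderline: escalate to a document-based check
```

The buffer reflects a common design choice in age assurance: accepting a small amount of friction for users near the threshold in exchange for a privacy-preserving check for everyone else.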

TikTok has also launched a dedicated STEM feed, featuring content from selected experts in their fields. Users under the age of 18 have the feed enabled by default, but older users can also opt in. The feed is designed to be educational, encouraging learning and the discovery of new topics. 

Tackling misleading information 

Aware of the sensitivities surrounding the UK General Election, TikTok was the first platform to launch a dedicated election centre, connecting TikTok users to trusted information from the Electoral Commission and working alongside fact-checking partner Logically Facts to provide helpful advice on media literacy.  

X has developed and expanded Community Notes, which aims to empower users to collaborate and add context to potentially misleading posts. Contributors who sign up to write and rate notes can leave them on any post, and if enough contributors from different points of view rate a note as helpful, that note is shown publicly on the post. 

There are now over 750,000 contributors in 197 countries adding helpful context to posts on X, including ads, and especially on highly engaged content such as key news events. A recent study found that, across the political spectrum, Community Notes were perceived as significantly more trustworthy than traditional, simple misinformation flags, and that they had a greater effect on improving people’s identification of misleading posts. Importantly, independent studies show that posts with notes are shared 50-61% less often and deleted 80% more often. 

X has also enabled Community Notes to be shown automatically on posts that feature AI-generated images and other out-of-context media. The approximately 7,800 media notes that have been written are now showing on over 600,000 distinct posts and have been seen over 2.5 billion times. Anyone can now request a Community Note, and with enough requests, top contributors are alerted and can propose notes. The programme is built on transparency: the Community Notes algorithm is open source and publicly available on GitHub, along with the data that feeds it, so that anyone can audit, analyse or suggest improvements. 
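Because the ranking algorithm is open source, the core “bridging” idea can be sketched compactly. The simplified matrix factorisation below is loosely modelled on the published approach: a note’s helpfulness is the part of its ratings that cannot be explained by raters’ general viewpoints. The function name, hyperparameters and the absence of the production system’s many safeguards are simplifying assumptions.

```python
import numpy as np

def bridged_helpfulness(ratings, n_users, n_notes,
                        dim=1, lr=0.05, lam=0.03, epochs=200, seed=0):
    """ratings: list of (user_id, note_id, value) with value in {0.0, 1.0}.

    Fits  rating ~ mu + b_user + b_note + u[user] . v[note].
    Viewpoint-correlated agreement is absorbed by the factor term u.v,
    so the intercept b_note reflects helpfulness that holds up across
    raters with differing viewpoints.
    """
    rng = np.random.default_rng(seed)
    mu = 0.0
    b_user = np.zeros(n_users)
    b_note = np.zeros(n_notes)
    u = rng.normal(0.0, 0.1, (n_users, dim))
    v = rng.normal(0.0, 0.1, (n_notes, dim))

    for _ in range(epochs):
        for i, j, r in ratings:
            pred = mu + b_user[i] + b_note[j] + u[i] @ v[j]
            err = r - pred
            # Stochastic gradient step with L2 regularisation, which keeps
            # note intercepts conservative unless broad agreement exists.
            mu += lr * err
            b_user[i] += lr * (err - lam * b_user[i])
            b_note[j] += lr * (err - lam * b_note[j])
            grad_u = err * v[j] - lam * u[i]
            grad_v = err * u[i] - lam * v[j]
            u[i] += lr * grad_u
            v[j] += lr * grad_v
    return b_note  # higher intercept = helpful across viewpoints
```

In the production system, a note is shown publicly only when its intercept clears a fixed threshold, alongside additional tagging and stability checks omitted from this sketch.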

Twenty-five leading technology companies also came together to sign the AI Elections Accord, a set of commitments intended to combat the deceptive use of AI in 2024 elections. These measures include proactively mitigating risks from deceptive AI election content and detecting its distribution across platforms. 

Addressing deepfakes and the impact of generative AI 

TikTok has started to automatically label AI-generated content on its platform, ensuring that the context behind a video is clear to viewers. The tool has since been expanded to cover content created on some other platforms and subsequently re-shared, and has been used by over 37 million creators. 

Google has enhanced its search functions to remove non-consensual, sexually explicit deepfakes and to demote websites hosting high volumes of removed content, while also making it easier for victims to request the removal of such content and working to reduce its prominence in search results. Updates made in the past year have reduced exposure to explicit image results for these types of queries by over 70%. 

YouTube has implemented a tool requiring creators to disclose when realistic content has been altered using AI, and X’s Synthetic and Manipulated Media Policy ensures that media that could deceive or mislead users and cause harm is labelled. 

Meta has made changes to the way it handles manipulated media, based on feedback from the independent Oversight Board and a policy review process involving public opinion surveys and expert consultations. It now adds “AI info” labels to a wider range of video, audio and image content when it detects industry-standard AI image indicators or when people disclose that they are uploading AI-generated content. Transparency and additional context are increasingly seen as a better way to address manipulated media than removal, which risks unnecessarily restricting freedom of speech; this approach keeps content on Meta’s platforms while adding labels and context. 

Combatting fraud 

To combat fraud, Google has partnered with the Global Anti-Scam Alliance and the DNS Research Federation to launch Global Signal Exchange (GSE), a new project with the ambition of becoming a “global clearinghouse for online scams”, with Google as its first Founding Member. By combining forces, the GSE aims to make it easier to share key signals of fraud, enabling faster identification and disruption of malicious activity. This includes online shopping scams, with the initial pilot of the project enabling Google to share over 100,000 URLs of suspicious or fraudulent merchants. 
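As a rough illustration of what a clearinghouse of this kind does, the sketch below deduplicates incoming signals and fans them out to member organisations. The GSE’s real schema and interfaces are not public, so every class name and field here is a hypothetical assumption.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical record format: the GSE's actual schema and API are not
# public, so all names and fields below are illustrative assumptions.

@dataclass(frozen=True)
class ScamSignal:
    url: str            # suspicious or fraudulent merchant URL
    signal_type: str    # e.g. "shopping_scam"
    reporter: str       # organisation contributing the signal
    observed_at: datetime

class SignalClearinghouse:
    """Toy clearinghouse: deduplicates signals and broadcasts them to members."""

    def __init__(self):
        self._seen_urls = set()
        self._subscribers = []

    def subscribe(self, callback):
        """Register a member organisation's handler for new signals."""
        self._subscribers.append(callback)

    def submit(self, signal: ScamSignal):
        """Accept a signal and fan it out, unless it is already known."""
        if signal.url in self._seen_urls:
            return  # already circulated; avoid re-broadcasting
        self._seen_urls.add(signal.url)
        for notify in self._subscribers:
            notify(signal)
```

The value of such an exchange lies in the fan-out: a URL spotted by one member can be blocked by every other member without each of them having to detect it independently.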

Meta has expanded a first-of-its-kind information sharing partnership with banks. The Fraud Intelligence Reciprocal Exchange (FIRE) programme allows banks to share intelligence with Meta directly to combat scams on its platforms. The early stage of this pilot has already led Meta to take action against thousands of accounts run by scammers, with approximately 20,000 accounts removed based on data shared. Meta is continuing to onboard more banks and strengthen its fraud detection capabilities, creating a safer digital environment for users in the UK and globally. 

What comes next 

The clear roadmap set out by Ofcom and the commitment from Government to the implementation of the Act, as passed, have given welcome certainty to the industry. 

This has allowed members to move forward and begin making the changes to their products and services that will help make the internet a safer place. It has also created good market conditions for providers of safety tech. As the Government and Ofcom complete the rollout of the Act, we encourage them to continue to provide as much certainty as possible to industry. 

The Online Safety Act took too long to make its way through Parliament, and we believe it is everyone’s shared objective to ensure it is fully implemented on schedule, while creating the space to allow online service providers to move early where they can.   

Once the Act is fully in force, Government, industry and the regulator can assess the effectiveness of the Act based on evidence and then work collaboratively to improve the regime over time. 

techUK and our members are committed to this process and look forward to continuing to work with our partners as the UK’s Online Safety regime is fully established.