Children Online: What Good Policy Looks Like
The Government's ‘Growing Up in the Online World’ consultation closes on 26 May. techUK sets out its positions ahead of submitting a formal response.
A legal minimum age of 16 for social media will dominate the headlines around the Government’s ‘Growing Up in the Online World’ consultation - but it is not the question that will most shape children's experience online. The questions that matter are about scope, functionality, AI, and age assurance, and the answers need to be risk-based, proportionate, and grounded in evidence. techUK's response sets out how.
Britain already has one of the most comprehensive frameworks for protecting children online anywhere in the world. The Online Safety Act 2023 imposes serious obligations on platforms - risk assessments, age-appropriate design, proactive removal of harmful content - with Ofcom empowered to enforce them with substantial fines. The Act is still being implemented and is already producing results.
None of which means the conversation should end there. Technology moves fast, harms evolve, and the Government is right to keep asking whether the framework is keeping pace. But it would be unwise to reach for unworkable solutions lacking a proper evidence base. The risk with this consultation is that it produces measures that feel decisive while shifting harm rather than reducing it. techUK is not opposed to age-appropriate access, nor to targeted new obligations where genuine gaps exist. What we are opposed to is poorly designed regulation that mistakes visible action for actual progress. This insight lays out where we stand.
Regulate by risk, not by service label
techUK’s position: scope should be determined by the features and functionalities a service offers and whether it is accessed by children - not platform labels, brand recognition, or size. We are calling on Government to adopt universal standards: all services with the relevant features and functionalities, accessed by children, held to the same expectations. That is a stronger, fairer and more future-proof basis for regulation than category-by-category definitions that will be out of date before the ink is dry.
The most consequential decision in this consultation will not be where to set the age limit, but rather which services that limit applies to.
"Social media" is not a coherent regulatory category. The features people are worried about - user-to-user interaction, content recommendations, messaging, livestreaming - appear across gaming platforms, educational tools, and messaging apps that nobody would recognise as social media. Regulatory lines drawn too narrowly around a handful of well-known platforms simply redirect children to less moderated alternatives. Drawn too broadly, they sweep up low-risk services and imposes compliance costs that protect nobody.
The criteria that actually matter are the nature and scale of user interaction, the degree of content exposure, the moderation complexity involved, and the underlying incentive structure of the service. A gaming platform that pairs children with strangers and allows unmoderated voice chat is a meaningfully different proposition from a closed educational tool, even if both technically permit "user-to-user interaction." Regulation that cannot draw that distinction will not make children safer - it will just provide more work for compliance lawyers.
Size-based thresholds are no better. Some of the most harmful online spaces are not the largest ones. A workable scope definition needs to be built around functional characteristics, not fixed lists of named services.
Strengthen the OSA - don't bypass it
techUK's position: Ofcom-led, service-specific risk assessment under the Online Safety Act is the right mechanism for tackling harmful features and design. We are calling on Government to strengthen and accelerate that approach rather than replace it with prescriptive feature bans that cannot keep pace with how technology works in practice.
The consultation asks whether specific features - livestreaming, disappearing messages, algorithmic recommendations, push notifications, infinite scroll - should be restricted or banned for children. The concern is understandable; however, the solutions need to be proportionate and workable.
Features are the right unit of regulatory analysis - but they have to be assessed in context. The same feature can be benign on one service and high-risk on another, depending on how it is configured by default, what friction exists around it, the moderation systems supporting it, and the incentive structures of the service it sits within. A livestream on a carefully moderated children's creative platform is not the same risk as one on a network with minimal safety investment. A recommendation algorithm that serves a child relevant learning content is not the same thing as one designed purely to maximise session length. Statutory bans imposed in primary legislation cannot make those distinctions; Ofcom-led risk assessment under the OSA is built to. Policy that treats them identically will disrupt the services doing the right things while leaving the incentive structures of the worst offenders largely intact.
On recommendation algorithms specifically, the policy debate too often treats them as inherently risky. They are not. Recommendation systems are how modern platforms deliver age-appropriate experiences at scale - filtering inappropriate content, demoting harmful material, surfacing safer alternatives, and enforcing age-based standards across billions of pieces of content. Blanket restrictions on algorithmic recommendations for children would not make their experiences safer; in many cases they would make them meaningfully less safe. The question is not whether algorithms should serve children, but how they are designed, what they optimise for, and what oversight applies.
Many services already configure teen experiences differently - restricting engagement mechanics, reconfiguring defaults, building supervised account models. This is not universal, and it is clearly not sufficient everywhere. But blanket feature bans would cut across that progress without replacing it with anything more effective.
The stronger argument in the consultation, and the one techUK finds more persuasive, focuses on business models rather than individual features. Where a platform's design is fundamentally oriented around maximising the time children spend online, that is a legitimate target for intervention. Ofcom-led, service-specific risk assessment is built for exactly that kind of nuanced judgment.
There is a reasonable case for harder requirements on the clearly high-risk functions - direct messaging with strangers, live image-sharing - where the harm evidence is concrete and where some platforms have simply declined to act. Where individual platforms have reduced trust and safety investment, the right response is targeted regulatory action against those specific failures, not blanket restrictions on services that have moved significantly in the right direction.
Distinguish AI companions from AI tools
techUK's position: AI regulation in a children's safety context must be service-specific and feature-specific. We are calling on Government to scope new AI obligations narrowly to companion services and to expressly exclude embedded AI features in non-companion products - the risks are different, and treating them the same will produce worse outcomes for children, not better.
The concerns raised about AI chatbots are real. Internet Matters research finds that 64 per cent of 9-17-year-olds already use them, with some saying they turn to chatbots because they have no one else to talk to. The risk of emotional dependence is particularly acute where services are designed to feel like relationships.
Extending Online Safety Act duties to chatbots not currently in scope is, in principle, reasonable. But how that scope is drawn matters enormously.
A companion chatbot designed to simulate emotional intimacy is a categorically different proposition from an AI tutoring tool embedded in a school platform. Treating them as equivalent would over-regulate services with genuine educational value - AI's potential to support personalised learning, accessibility, and creativity is substantial and should not be casually constrained - while potentially under-scrutinising the services that actually warrant attention.
The questions that should determine the scope of new obligations are whether a service encourages strong emotional attachment, whether it personalises in ways that deepen dependence, and whether it is used in a context where children are already vulnerable. Those questions produce very different answers for different services, and the framework should reflect that.
The Department for Education's product safety standards for AI tutoring tools are instructive here: no anthropomorphisation, no manipulative design, robust safeguarding protocols. That is the right kind of approach.
Proportionate age assurance - not blanket verification
techUK's position: age assurance should function as an enabler of differentiated, proportionate access calibrated to the features and content involved - not a uniform restriction applied across all services regardless of what they offer. We are calling for verification requirements to be explicitly calibrated to the sensitivity of the features and content being accessed, and for Government to prioritise a national approach to interoperability.
Age assurance is a legitimate tool within an online safety regime. On its own, it is not sufficient - and applied without proportionality, it creates problems of its own.
The technology has real limits that the consultation at least acknowledges honestly. Distinguishing a 14-year-old from a 16-year-old is significantly harder than distinguishing a child from an adult. Facial age estimation is a useful tool, but it cannot deliver the precision a hard age threshold demands. Circumvention through shared devices, borrowed accounts, and VPNs is persistent.
Early data from Australia's eSafety Commissioner shows a substantial proportion of under-16s retained access after the ban came into force, with enforcement uneven and largely platform-initiated, and outcomes varying significantly between services. The regulator itself acknowledges ongoing compliance issues.
The level of verification required should match the level of risk involved. Requiring document-level ID to access a content feed is not proportionate. Requiring it to access explicit content clearly is. A framework without that calibration will burden ordinary users - adults as much as children - without delivering meaningfully better safety outcomes for the children it is supposed to protect.
The structural problem that the consultation underweights is interoperability. Users currently verify their age separately across multiple services through multiple providers. The result is unnecessary friction, inconsistent outcomes, and repeated exposure of personal data. A coherent national standard built around data minimisation and layered verification would be substantially more effective, and substantially less intrusive, than the current patchwork.
On VPNs specifically: the evidence cited in the consultation does not support age-restricting them. The spike in VPN usage after the Online Safety Act's age assurance requirements came in was not driven by children trying to circumvent the rules. Many children use VPNs for legitimate privacy and safety reasons, which the consultation itself acknowledges. Restricting access would harm those children while doing little to stop anyone determined to circumvent the rules.
The test that matters
techUK supports what this consultation is trying to do. Children should have safe, enriching online lives, and the industry has a genuine responsibility for the environments it creates.
But every measure that comes out of this process should be held to a straightforward question: will it actually reduce harm to children, or will it move harm somewhere less visible while giving the appearance of action?
Age limits without effective enforcement, feature restrictions without contextual judgment, and age assurance requirements without proportionality all struggle to pass that test.
The Online Safety Act is a good foundation that is already working to protect children online. The right approach is to build on it with targeted, evidence-based additions where real gaps exist - not to duplicate, contradict, or undermine what is already there.
That is the case techUK will make in its formal response, and we welcome engagement from members and policymakers who want to get this right.