techUK
New ICO ‘Tech Futures’ report on agentic AI
You can read the full report on the ICO's website.
Last week, the Information Commissioner’s Office (ICO) published its Tech Futures report on agentic AI, setting out its thoughts on how the technology may develop and be used in the coming years.
To develop this report, the ICO analysed current agentic AI capabilities and expected technical developments, identified data protection risks specific to agentic systems beyond those seen in generative AI, and explored privacy-friendly innovation opportunities.
The report examines the data protection considerations organisations will need to address as they explore deploying agentic AI, highlighting both the risks and the potential opportunities. It also outlines four future scenarios (outlined below) around how agentic AI capabilities could evolve and how organisations might adopt them over the next two to five years.
What are AI agents and agentic AI?
The ICO report describes an AI agent as software or a system that can carry out processes or tasks with varying levels of sophistication and automation. Agentic AI, by contrast, is defined as systems that go beyond generative AI by combining language capabilities with tools and autonomous decision-making. These systems can complete open-ended tasks, such as booking travel, writing and testing code, or handling customer transactions, with minimal human oversight.
Agentic AI adoption opportunities for organisations and consumers
The report sets out potential opportunities, with a particular focus on ‘agentic commerce’ where personal AI agents could help customers by anticipating their shopping needs and making purchases on their behalf. The ICO explains how agents may be able to learn from customers' preferences, behaviours, and even upcoming events, all of which could make shopping much easier and create new opportunities for businesses to engage with customers more effectively.
Potential data protection challenges
The report highlights novel risks that agentic systems may pose:
- Responsibility and ownership: working out who is the data controller and who is the processor may be more challenging when multiple companies provide different parts of an agentic system.
- Data concentration: personal AI assistants in particular may accumulate significant amounts of personal information in one place.
- Scaled-up automation: these systems can automate much more complex decisions, triggering stricter rules on automated decision-making.
- Purpose creep: the open-ended nature of agentic tasks makes it tempting to define data processing purposes too broadly.
- Data minimisation challenges: autonomous systems might process more personal information than necessary to complete their tasks.
- Sensitive data risks: higher chance that systems will accidentally use or infer sensitive personal information.
- Transparency issues: more complex systems may make it harder to explain how they work and enable individuals to exercise their rights.
- New security risks: as autonomous systems become more advanced and more independent, they may open up new avenues for attackers to exploit.
Data protection opportunities
The ICO identified innovation opportunities with agentic AI that have the potential to support data protection and information rights and contribute to privacy-positive outcomes. Potential areas include:
- Data protection compliant agents
- Agentic controls
- Privacy management agents
- Information governance agents
- Ways to benchmark and evaluate agentic systems
Four future scenarios
To explore how agentic AI might develop in practice, the ICO has set out four possible future scenarios based on two factors: the capability of AI agents and the extent to which they are adopted. These scenarios are intended to illustrate how different combinations of capability and uptake could shape both the benefits of agentic AI and the data protection risks that may arise.
1. Low capability and low adoption – scarce and simple agents: in this future, agentic AI is not significantly more sophisticated than current chatbots and is not widely used.
2. Low capability and high adoption – just good enough to be everywhere: agentic AI is widespread, but its limited capabilities may lead to harms from failures, such as misinterpreted tasks or failures on edge cases.
3. High capability and low adoption – agents in waiting: the technology is highly capable but not widely adopted, with potential harms arising from agents working as intended but accessing large amounts of personal data and diminishing privacy.
4. High capability and high adoption – ubiquitous agents: in this scenario, highly capable agents are widely adopted across various aspects of life and work, presenting both opportunities and significant data protection challenges.
What comes next?
The ICO will host workshops to gather more information on agentic capabilities, adoption patterns, and how industry is managing data protection risks. It is also updating its guidance on automated decision-making and profiling, taking account of the Data (Use and Access) Act, with public consultations due to start this year.
Through the Digital Regulation Cooperation Forum (DRCF), the ICO is working with its partner regulators to understand the wider implications. It has launched a Thematic Innovation Hub on agentic AI, which innovators are invited to participate in, and is encouraging organisations to use its innovation services and, for public interest applications, the Regulatory Sandbox.
techUK's work on agentic AI
Agentic AI is a live and rapidly evolving area that techUK is exploring in depth this year. We are particularly keen to learn from organisations developing and adopting agentic AI systems, to better understand emerging use cases, challenges, and implications.
If you would like to share insights or discuss how you can get involved in our agentic AI work, please get in touch with Kir Nuthi and Usman Ikhlaq.
Original article link: https://www.techuk.org/resource/new-ico-tech-futures-report-on-agentic-ai-opportunities-and-considerations.html