A call for participation: Building the ICO’s auditing framework for Artificial Intelligence

Blog posted by: Simon McDougall, 18 March 2019.

Simon McDougall, Executive Director for Technology Policy and Innovation, invites comment from organisations on the development of an auditing framework for AI.

Applications of Artificial Intelligence (AI) are starting to permeate many aspects of our lives. I see new and innovative uses of this technology every day: in health care, recruitment, commerce... the list goes on and on.

We know the benefits that AI can bring to organisations and individuals. But there are risks too. And that’s what I want to talk about in this blog post. 

The General Data Protection Regulation (GDPR), which came into effect in May 2018, was a much-needed modernisation of data protection law.

Its considerable focus on new technologies reflects the concerns of legislators here in the UK and throughout Europe about the personal and societal effects of powerful data-processing technologies such as profiling and automated decision-making.

The GDPR strengthens individuals’ rights when it comes to the way their personal data is processed by technologies such as AI. In some circumstances, for example, they have the right to object to profiling and the right to challenge a decision made solely by a machine.

The law requires organisations to build in data protection by design, and to identify and address risks at the outset by completing data protection impact assessments. Privacy and innovation must sit side by side. One cannot come at the expense of the other.

That’s why AI is one of our top three strategic priorities. 

And that’s why we’ve added to our already expert tech department by recruiting Dr. Reuben Binns, our first Postdoctoral Research Fellow in AI. He will head a team from my Technology Policy and Innovation Directorate to develop our first auditing framework for AI.

The framework will give us a solid methodology to audit AI applications and ensure they are transparent and fair, and that the necessary measures to assess and manage the data protection risks arising from them are in place.

The framework will also inform future guidance for organisations, supporting the continued and innovative use of AI within the law. The guidance will complement existing resources, not least our award-winning Big Data and AI report.

But we don’t want to work alone. We’d like your input now, at the very start of our thinking.

Whether you’re a data scientist, an app developer or the head of a company that relies on AI to do business, and whether you’re from the private, public or third sector, we want you to join our open discussion about the genuine challenges arising from the adoption of AI. This will ensure the published framework is both conceptually sound and applicable to real-life situations.

We welcome your thoughts on the plans and approach we set out in this post. We will shortly publish another article here to outline the proposed framework structure, its key elements and focus areas.

On this new blog site you will find regular updates on specific AI data protection challenges and on how our thinking in relation to the framework is developing. And we want your feedback. You can leave us a comment or email us directly.

The feedback you give us will help shape our approach, research and priorities. We’ll use it to inform a formal consultation paper, which we expect to publish by January 2020. The final AI auditing framework and the associated guidance for firms are on track for publication by spring 2020.

We look forward to working with you!

Simon McDougall is Executive Director for Technology Policy and Innovation at the ICO where he is developing an approach to addressing new technological and online harms. He is particularly focused on artificial intelligence and data ethics.

He is also responsible for the development of a framework for auditing the use of personal data in machine learning algorithms.