ICO launches explainable AI interim report

The Information Commissioner’s Office, in collaboration with the Alan Turing Institute, has published its “Project ExplAIn” interim report.

The ICO, in collaboration with the Alan Turing Institute, has published its Project ExplAIn interim report. The findings are based on research, including citizens’ juries and industry roundtables, that gathered views on explainable AI from a range of perspectives.

The report identifies three key themes that emerged from the research: 

  • The importance of context in explaining AI decisions

The strongest message from the research is that context is key. The importance of providing explanations to individuals, and the reasons for wanting them, change dramatically depending on what the decision is about. 

People who took part in the citizens’ juries felt that explaining an AI decision to the individual affected was more important in areas such as recruitment and criminal justice than in healthcare. Jurors wanted explanations of AI decisions made in recruitment or criminal justice settings in order to challenge them, learn from them and check they had been treated fairly. In healthcare settings, however, jurors preferred to know that a decision was accurate rather than why it was made.

  • The need for education and awareness around AI

The findings showed a need to engage and inform the public about the use, benefits and risks of AI decision-making through education and awareness-raising. Although there was no clear message on who should be responsible for this, delivering a balanced message was seen as important, suggesting a need for a diverse range of voices.

  • The various challenges to providing explanations

Industry roundtable participants generally felt confident they could technically explain the decisions made by AI. However, they raised other challenges to ‘explainability’, including cost, commercial sensitivities and the potential for ‘gaming’ or abuse of systems. The lack of a standard approach to establishing internal accountability for explainable AI decision systems also emerged as a challenge.

Commenting on the report, Sue Daley, Associate Director for Technology and Innovation at techUK, said yesterday:

“As artificial intelligence (AI) becomes more prevalent throughout society, a key goal for policymakers and industry alike must be to ensure public trust and confidence in the technology. The ICO’s citizen jury study provides a unique snapshot of public opinion on this complex issue and is a key step towards increasing public engagement on digital ethics issues across the UK, which is a key action from techUK’s Digital Ethics in 2019 report.

Companies need to ensure that decisions made by their AI systems are auditable, challengeable and ultimately understandable by the public. techUK stands ready to support the ICO and the Alan Turing Institute in developing guidance to assist organisations with explaining AI decisions”.

The findings from the Project ExplAIn interim report will feed directly into the ICO’s guidance for organisations, a key commitment made in the UK Government’s 2018 AI Sector Deal. This guidance will go out for public consultation over the summer, with the final version due to be published in the autumn. The Project ExplAIn guidance will also inform the ICO’s AI auditing framework, which is currently being consulted on and is due to be published in 2020.

techUK will be hosting a joint roundtable with the ICO and the Alan Turing Institute in early September to discuss their proposed guidance on explainable AI for companies. If you would like to attend this roundtable, please contact Katherine.

 

Channel website: http://www.techuk.org/

Original article link: https://www.techuk.org/insights/news/item/15484-ico-launch-explainable-ai-interim-report
