Information Commissioner's Office
When it comes to explaining AI decisions, context matters
Alex Hubbard, Senior Policy Officer at the ICO, looks at some of the key themes identified in the ICO and The Alan Turing Institute’s interim report about explanations of AI decisions.
Explainability of AI decisions is a key area of the AI auditing framework. This guidance from Project ExplAIn will inform our assessment methodology.
If an Artificial Intelligence (AI) system makes a decision about an individual, should that person be given an explanation of how the decision was made? Should they get the same information about a decision regarding criminal justice as they would about a decision concerning healthcare?
These are just two of the issues we have been exploring with public and industry engagement groups over the last few months.
In 2018, the Government tasked the ICO and The Alan Turing Institute (The Turing) to produce practical guidance for organisations, to assist them with explaining AI decisions to the individuals affected. This work has been titled ‘Project ExplAIn’.
The ICO and The Turing conducted research, including citizens’ juries and industry roundtables, to gather views from a range of stakeholders with various, and sometimes competing, interests in the subject.
Today, we published the findings of this research in a Project ExplAIn interim report.
The report identifies three key themes that emerged from the research:
- the importance of context in explaining AI decisions;
- the need for education and awareness around AI; and
- the various challenges to providing explanations.
The strongest message from the research is that context is key. The importance of providing explanations to individuals, and the reasons for wanting them, change dramatically depending on what the decision is about.
People who took part in the citizens’ juries (jurors) felt that explaining an AI decision to the individual affected was more important in areas such as recruitment and criminal justice than in healthcare. Jurors wanted explanations of AI decisions made in recruitment or criminal justice settings so they could challenge them, learn from them, and check they had been treated fairly. In healthcare settings, however, jurors preferred to know that a decision was accurate rather than why it was made.
Jurors said they only expected an explanation of an AI decision if they would also expect a human to explain a decision they had made. They also wanted explanations of AI decisions to be similar to human explanations. But the industry roundtables questioned whether AI decisions should be held to higher standards, because human explanations could sometimes misrepresent the truth in order to benefit the explainer or to appease the individual.
The findings showed a need to engage and inform the public about the use, benefits and risks of AI decision-making, through education and awareness-raising. Although there was no clear message on who should be responsible for this, delivering a balanced message was seen as important, suggesting a need for a diverse range of voices.
Industry roundtable participants generally felt confident they could technically explain the decisions made by AI. However, they raised other challenges to ‘explainability’ including cost, commercial sensitivities (eg infringing intellectual property) and the potential for ‘gaming’ or abuse of systems.
The lack of a standard approach to establishing internal accountability for explainable AI decision systems also emerged as a challenge.
The findings set out in the Project ExplAIn interim report will feed directly into our guidance for organisations. This will go out for public consultation over the summer and will be published in full in the autumn.
The ICO has said many times that data protection is not a barrier to the use of innovative and data-driven technologies. But these opportunities cannot be taken at the expense of being transparent and open with individuals about the use of their personal data.
The guidance will help organisations to comply with data protection law but will not be limited to this. It will also promote best practice, helping organisations to foster individuals’ trust, understanding, and confidence in AI decisions.
As well as informing the ICO and The Turing’s work on Project ExplAIn, it is hoped that these findings will help others in their own thinking, research and development of explainable AI decisions.
All materials and reports generated from the citizens’ juries are freely available to access. The Project ExplAIn guidance will also inform the ICO’s AI auditing framework, which is currently being consulted on and which is due to be published in 2020.
Original article link: https://ai-auditingframework.blogspot.com/2019/06/when-it-comes-to-explaining-ai.html