WGPlus (Archive)
Never ‘blindly’ rely on something you don’t understand
‘Explainability’ of AI decisions is a key area of the AI auditing framework, and this guidance from Project ExplAIn will inform our assessment methodology.

If an Artificial Intelligence (AI) system makes a decision about an individual, should that person be given an explanation of how the decision was made? Should they get the same information about a decision regarding criminal justice as they would about a decision concerning healthcare? These are just two of the issues we have been exploring with public and industry engagement groups over the last few months.

In 2018, the Government tasked the ICO and The Alan Turing Institute (The Turing) with producing practical guidance to assist organisations in explaining AI decisions to the individuals affected. This work has been titled ‘Project ExplAIn’. The ICO and The Turing conducted research, including citizens’ juries and industry roundtables, to gather views from a range of stakeholders with various, and sometimes competing, interests in the subject. The findings of this research have now been published in a Project ExplAIn interim report, which identifies three key themes that emerged from the research.
The findings set out in the Project ExplAIn interim report will feed directly into ICO guidance for organisations. This will go out for public consultation over the summer and will be published in full in the autumn. All materials & reports generated from the citizens’ juries are freely available. The Project ExplAIn guidance will also inform the ICO’s AI auditing framework, which is currently being consulted on and which is due to be published in 2020.
Researched Links:
ICO: When it comes to explaining AI decisions, context matters
techUK: ICO launch explainable AI interim report
RUSI partners with the Centre for Data Ethics & Innovation
Human skill sets are not redundant yet! The intelligent merging of human & technical healthcare
SFO: An intelligent solution to a major problem