Interpretable machine learning

6 October 2020

Machine learning (ML), a type of artificial intelligence, is increasingly being used to support decision-making in a variety of applications, including recruitment and clinical diagnoses. While ML has many advantages, there are concerns that in some cases it may not be possible to explain completely how its outputs have been produced. This POSTnote gives an overview of ML and its role in decision-making. It examines the challenges of understanding how a complex ML system has reached its output, and some of the technical approaches to making ML easier to interpret. It also gives a brief overview of some of the proposed tools for making ML systems more accountable.

Modern machine learning (ML) systems are increasingly being used to inform decision-making in a variety of applications. However, for some types of ML, such as ‘deep learning’, it may not be possible to explain completely how a system has reached its output. A further concern is that ML systems are susceptible to introducing or perpetuating discriminatory bias. Experts have warned that a lack of clarity about how ML decisions are made can make it difficult to establish whether systems are behaving fairly and reliably, and may be a barrier to wider ML adoption.

In 2018, the Lords Committee on AI called for the development of AI systems that are “intelligible to developers, users and regulators”. It recommended that an AI system that could have a substantial impact on an individual’s life should not be used unless it can produce an explanation of its decisions. In a January 2020 review, the Committee on Standards in Public Life noted that explanations for decisions made using ML in the public sector are important for public accountability, and recommended that government guidance on public sector use of AI should be made easier to use.

The UK Government has highlighted the importance of ethical ML and the risks of a lack of transparency in ML-assisted decision-making. In 2018 it established the Centre for Data Ethics and Innovation to provide independent advice on measures needed to ensure safe, ethical and innovative uses of AI.  

Acknowledgements

POSTnotes are based on literature reviews and interviews with a range of stakeholders and are externally peer reviewed. POST would like to thank interviewees and peer reviewers for kindly giving up their time during the preparation of this briefing, including:

*denotes people and organisations who acted as external reviewers of the briefing.