POST (Parliamentary Office of Science and Technology)

Interpretable machine learning

Machine learning (ML, a type of artificial intelligence) is increasingly being used to support decision making in a variety of applications including recruitment and clinical diagnoses. While ML has many advantages, there are concerns that in some cases it may not be possible to explain completely how its outputs have been produced. This POSTnote gives an overview of ML and its role in decision-making. It examines the challenges of understanding how a complex ML system has reached its output, and some of the technical approaches to making ML easier to interpret. It also gives a brief overview of some of the proposed tools for making ML systems more accountable.


Modern machine learning (ML) systems are increasingly being used to inform decision making in a variety of applications. However, for some types of ML, such as ‘deep learning’, it may not be possible to explain completely how a system has reached its output. A further concern is that ML systems are susceptible to introducing or perpetuating discriminatory bias. Experts have warned that a lack of clarity on how ML decisions are made may make it unclear whether the systems are behaving fairly and reliably, and may be a barrier to wider ML adoption. 

In 2018, the Lords Committee on AI called for the development of AI systems that are “intelligible to developers, users and regulators”. It recommended that an AI system that could have a substantial impact on an individual’s life should not be used unless it can produce an explanation of its decisions. In a January 2020 review, the Committee on Standards in Public Life noted that explanations for decisions made using ML in the public sector are important for public accountability and recommended that government guidance on the public sector use of AI should be made easier to use. 

The UK Government has highlighted the importance of ethical ML and the risks of a lack of transparency in ML-assisted decision-making. In 2018 it established the Centre for Data Ethics and Innovation to provide independent advice on measures needed to ensure safe, ethical and innovative uses of AI.  

Key Points

  • ML is increasingly being used to inform decision making in a variety of applications. It has the potential to bring benefits such as increased labour productivity and improved services.  
  • ML relies on large datasets to train its underlying algorithms. Unrepresentative, inaccurate or incomplete training data can lead to risks such as algorithmic bias. 
  • The term ‘algorithmic bias’ is used to describe discrimination against certain groups on the basis of an ML system’s outputs. Bias can be introduced into an ML system in different ways, including through a system’s training data or through decisions made during development. There have been several recent high-profile examples of algorithmic bias (a minimal illustrative bias check is sketched after this list). 
  • Experts have raised concerns about a lack of transparency in decisions made or informed by ML systems. This is a particular issue for certain complex types of ML, such as deep learning, where it may not be possible to explain completely how a decision has been reached. 
  • Complex ML systems where it is difficult or impossible to fully understand how a decision has been reached are often referred to as ‘black box’ ML.  
  • Terminology varies, but ‘interpretability’ is typically used to describe the ability to present or explain an ML system’s decision-making process in terms that can be understood by humans. 
  • Many stakeholders have highlighted that the extent to which ML needs to be interpretable is dependent on the audience and context in which it is used. 
  • Technical approaches to interpretable ML include designing systems using types of ML that are inherently easy to understand, and using retrospective tools to probe complex ML systems and obtain a simplified overview of how they function (both approaches are illustrated in the sketches after this list). 
  • Some stakeholders have said that limiting applications to inherently interpretable ML types may limit the capability of ML technology. However, others argue that there is not always a trade-off between performance accuracy and interpretability, and that in many cases a more interpretable method can be substituted for complex ML. 
  • Tools for interpreting complex ML retrospectively are at an early stage of development and their use is not yet widespread. Some tools aim to interpret a specific ML decision, while others can be used to give a broad understanding of how an ML system behaves (the latter is illustrated in the final sketch after this list). 
  • The ICO and Alan Turing Institute have produced guidance for organisations to help them explain AI-based decisions to affected individuals. 
  • Benefits of interpretable ML include improved understanding of how a system functions and improved user trust in a system. 
  • However, interpretability also raises challenges, such as commercial sensitivities and the risk that explanations allow a system to be gamed. 
  • In addition to technical approaches to interpretable ML, many stakeholders have called for wider accountability mechanisms to ensure that ML systems are designed and deployed in an ethical and responsible way. 
  • Some wider ML accountability mechanisms include detailed documentation of an ML system’s development process, algorithmic impact assessments, algorithm audits, and the use of frameworks and standards. 
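
To make the bias risk concrete, the following minimal Python sketch compares the rate of positive outcomes an ML system produces for two groups, a simple ‘demographic parity’ check. The data, group labels and column names are hypothetical, and a gap in rates is only a crude warning sign, not proof of unfair discrimination.

```python
import pandas as pd

# Hypothetical outputs from an ML screening system: 'group' is a
# protected characteristic, 'accepted' is the system's decision.
predictions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "accepted": [1,   1,   1,   0,   1,   0,   0,   0],
})

# Demographic parity check: compare acceptance rates between groups.
rates = predictions.groupby("group")["accepted"].mean()
print(rates)  # group A: 0.75, group B: 0.25 in this toy data
```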
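
The first technical approach, inherently interpretable ML, can be illustrated with a shallow decision tree: every prediction follows an explicit chain of if/else rules that can be printed and read. The sketch below uses scikit-learn and its bundled iris dataset purely as an assumed example; it is not drawn from the briefing.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Train a shallow decision tree: each prediction follows an explicit,
# human-readable path of if/else rules.
iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(iris.data, iris.target)

# The whole decision-making process can be printed and inspected.
print(export_text(tree, feature_names=iris.feature_names))
```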
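
The second approach, retrospective (‘post-hoc’) interpretation, probes a trained system from the outside. The sketch below uses permutation importance, one simple global technique: it shuffles each input feature in turn and records how much the model’s score drops, indicating which inputs the system relies on overall. Local tools such as LIME or SHAP, which explain a single decision, are not shown. Again, scikit-learn and the iris data are illustrative assumptions.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# A random forest stands in for a complex 'black box' system.
iris = load_iris()
model = RandomForestClassifier(random_state=0).fit(iris.data, iris.target)

# Shuffle each feature in turn and measure the drop in score:
# a global, model-agnostic view of which inputs the model relies on.
result = permutation_importance(model, iris.data, iris.target,
                                n_repeats=10, random_state=0)
for name, importance in zip(iris.feature_names, result.importances_mean):
    print(f"{name}: {importance:.3f}")
```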

Acknowledgements

POSTnotes are based on literature reviews and interviews with a range of stakeholders and are externally peer reviewed. POST would like to thank interviewees and peer reviewers for kindly giving up their time during the preparation of this briefing, including:

  • Alan Dawes, Atomic Weapons Establishment*
  • Anna Bacciarelli, Open Society Foundations
  • Ben Dellot, Centre for Data Ethics and Innovation
  • Christine Henry, DataKind*
  • David Frank, Microsoft*
  • Dr Adrian Weller, Alan Turing Institute & University of Cambridge*
  • Dr Andrew Thompson, National Physical Laboratory & University of Oxford*
  • Dr Bill Mitchell, British Computer Society
  • Dr Brent Mittelstadt, Oxford Internet Institute*
  • Dr Carolyn Ashurst, University of Oxford*
  • Dr Chico Camargo, Oxford Internet Institute*
  • Dr Chris Russell, University of Surrey*
  • Dr David Leslie, Alan Turing Institute*
  • Dr Michael Veale, University College London*
  • Dr Neil Rabinowitz, DeepMind*
  • Dr Richard Pinch, Institute of Mathematics and its Applications*
  • Dr Silvia Milano, Oxford Internet Institute
  • Dr Ansgar Koene, University of Nottingham*
  • Eleanor Mill, University of Surrey*
  • Fionntán O’Donnell, Open Data Institute*
  • Helena Quinn, Competition and Markets Authority*
  • Jacob Beswick, Office for AI*
  • Jen Boon, NHSx
  • Jeni Tennison, Open Data Institute
  • Jenn Wortman Vaughan, Microsoft Research*
  • Jenny Brennan, Ada Lovelace Institute*
  • Jessica Montgomery, University of Cambridge*
  • Jim Weatherall, Royal Statistical Society & AstraZeneca*
  • John Midgley, Amazon*
  • Lee Pattison, Atomic Weapons Establishment*
  • Lisa Dyer, Partnership on AI*
  • Magdalena Lis, Centre for Data Ethics and Innovation*
  • Marion Oswald, Northumbria University*
  • Members of the POST board*
  • Michael Birtwistle, Centre for Data Ethics and Innovation
  • Michael Philips, Microsoft*
  • Olivia Varley-Winter
  • Professor James Davenport, British Computer Society & University of Bath*
  • Professor Sandra Wachter, Oxford Internet Institute*
  • Professor Sofia Olhede, University College London
  • Reuben Binns, University of Oxford*
  • Richard Ward, IBM*
  • Sam Pettit, DeepMind*
  • Sébastien Krier, Office for AI
  • Stefan Janusz, Office for AI*

*denotes people and organisations who acted as external reviewers of the briefing.

 

Channel website: https://www.parliament.uk/post

Original article link: https://post.parliament.uk/research-briefings/post-pn-0633/


The Parliamentary Office of Science and Technology (POST) is Parliament’s in-house source of scientific advice.

 
