Information Commissioner's Office
Automated Decision Making: the role of meaningful human reviews
In the first detailed element of our AI framework blog series, Reuben Binns, our Research Fellow in AI, and Valeria Gallo, Technology Policy Adviser, explore how organisations can ensure human involvement is ‘meaningful’, so that AI-assisted decisions are not mistakenly classified as solely automated.
This blog forms part of our ongoing work on developing a framework for auditing AI. We are keen to hear your views in the comments below or you can email us.
Artificial Intelligence (AI) systems often process personal data to either support or make a decision. For example, AI could be used to approve or reject a financial loan automatically, or support recruitment teams to identify interview candidates by ranking job applications.
Article 22 of the General Data Protection Regulation (GDPR) establishes very strict conditions for AI systems that make solely automated decisions, ie decisions made without human input, with legal or similarly significant effects on individuals. AI systems that only support or enhance human decision-making are not subject to these conditions. However, a decision will not fall outside the scope of Article 22 just because a human has ‘rubber-stamped’ it: human input needs to be ‘meaningful’.
The degree and quality of human review and intervention before a final decision is made about an individual is the key factor in determining whether an AI system is solely or non-solely automated.
Board members, data scientists, business owners, and oversight functions, among others, will be expected to play an active role in ensuring that AI applications are designed, built, and used as intended.
The meaningfulness of human review in non-solely automated AI applications, and the management of the associated risks, are key areas of focus for our proposed AI Auditing Framework, and are what we explore further in this blog.
What’s already been said?
- Human reviewers must be involved in checking the system’s recommendation and should not “routinely” apply the automated recommendation to an individual;
- reviewers’ involvement must be active and not just a token gesture. They should have actual “meaningful” influence on the decision, including the “authority and competence” to go against the recommendation; and
- reviewers must ‘weigh up’ and ‘interpret’ the recommendation, consider all available input data, and also take into account other additional factors.
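The criteria above describe a process, not just a policy. As a hedged illustration (the names and structure here are hypothetical, not taken from ICO guidance), a review workflow might record the system’s recommendation, the reviewer’s final decision, whether the reviewer departed from the recommendation, and the rationale for doing so, so that routine rubber-stamping is visible in audit records:

```python
from dataclasses import dataclass


@dataclass
class Decision:
    recommendation: str  # the automated system's output, e.g. "reject"
    final: str           # the decision actually made about the individual
    overridden: bool     # whether the reviewer departed from the recommendation
    rationale: str       # the reviewer's recorded reasoning


def review(recommendation: str, reviewer_decision: str, rationale: str) -> Decision:
    """Record a human review: the final decision is the reviewer's own,
    and a rationale must be recorded for every case."""
    if not rationale:
        raise ValueError("a rationale must be recorded for every review")
    return Decision(
        recommendation=recommendation,
        final=reviewer_decision,
        overridden=(reviewer_decision != recommendation),
        rationale=rationale,
    )


# A reviewer with the authority and competence to go against the system:
d = review("reject", "approve", "recent income evidence not reflected in input data")
print(d.overridden)  # True
```

Records of this kind are only a starting point; they show where review happened and diverged, not whether the reviewer genuinely weighed up the recommendation.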
Are there additional risk factors in complex systems?
The meaningfulness of human input must be considered in any automated decision-making system, however basic (eg simple decision trees). In more complex AI systems, however, we think there are two additional factors that could potentially cause a system to be considered solely automated. They are:
- Automation bias
- Lack of interpretability
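One hypothetical way to surface automation bias is to monitor how often reviewers actually depart from the system’s recommendations: an override rate near zero across many cases may indicate that recommendations are being applied routinely rather than meaningfully reviewed. A minimal sketch (the function name and data shape are assumptions for illustration):

```python
def override_rate(reviews: list[tuple[str, str]]) -> float:
    """Fraction of cases where the reviewer's final decision differed
    from the automated recommendation.

    Each review is a (recommendation, final_decision) pair.
    """
    if not reviews:
        raise ValueError("no reviews to analyse")
    overrides = sum(1 for rec, final in reviews if rec != final)
    return overrides / len(reviews)


# One override in four reviews:
rate = override_rate([
    ("reject", "reject"),
    ("reject", "approve"),
    ("approve", "approve"),
    ("reject", "reject"),
])
print(rate)  # 0.25
```

A low override rate is a signal to investigate, not proof of rubber-stamping; reviewers may legitimately agree with a well-calibrated system in most cases.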