Fully automated decision making AI systems: the right to human intervention and other safeguards

5 Aug 2019 02:31 PM

Reuben Binns, our Research Fellow in Artificial Intelligence (AI), and Valeria Gallo, Technology Policy Adviser, discuss some of the key safeguards organisations should implement when using solely automated AI systems to make decisions with significant impacts on data subjects.

This post is part of our ongoing Call for Input on developing the ICO framework for auditing AI. We encourage you to share your views by leaving a comment below or by emailing us at AIAuditingFramework@ico.org.uk.

The General Data Protection Regulation (GDPR) requires organisations to implement suitable safeguards when processing personal data to make solely automated decisions that have a legal or similarly significant impact on individuals. These safeguards include the right for data subjects:

  * to obtain human intervention;
  * to express their point of view; and
  * to contest the decision.

These safeguards cannot be token gestures. Guidance published by the European Data Protection Board (EDPB) states that human intervention involves a review of the decision, which

“must be carried out by someone who has the appropriate authority and capability to change the decision”.

The review should include a

“thorough assessment of all the relevant data, including any additional information provided by the data subject.”

In this respect, the conditions under which human intervention will qualify as meaningful are similar to those that apply to human oversight in ‘non-solely automated’ systems. However, a key difference is that in solely automated contexts, human intervention is only required on a case-by-case basis to safeguard the data subject’s rights.

Why is this a particular issue for AI systems?

The type and complexity of the system used to make solely automated decisions will affect the nature and severity of the risks to people's data protection rights, and will raise different considerations as well as different compliance and risk management challenges.

Basic systems, which automate a relatively small number of explicitly written rules (eg a set of clearly expressed ‘if-then’ rules to determine a customer’s eligibility for a product), are unlikely to be considered AI. Because of the system’s high interpretability, it should also be relatively easy for a human reviewer to identify and rectify any mistake if a decision is challenged by a data subject.
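
To illustrate why such systems are comparatively easy to review, here is a minimal, hypothetical sketch of an ‘if-then’ eligibility check (the rules, thresholds and field names are invented for illustration, not drawn from any real product): a human reviewer can trace a challenged decision directly back to the single rule that produced it.

```python
# Hypothetical rule-based eligibility check. The rules, thresholds and
# applicant fields below are invented purely for illustration.

def is_eligible(applicant: dict) -> tuple[bool, str]:
    """Return the decision and the specific rule that produced it."""
    if applicant["age"] < 18:
        return False, "Rule 1: applicant must be 18 or over"
    if applicant["annual_income"] < 15_000:
        return False, "Rule 2: annual income below minimum threshold"
    if applicant["existing_debt"] > 0.5 * applicant["annual_income"]:
        return False, "Rule 3: debt-to-income ratio too high"
    return True, "All eligibility rules satisfied"

decision, reason = is_eligible(
    {"age": 34, "annual_income": 12_000, "existing_debt": 1_000}
)
print(decision, "-", reason)  # False - Rule 2: annual income below minimum threshold
```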

However other systems, such as those based on machine learning (ML), may be more complex and present more challenges for meaningful human review. ML systems make predictions or classifications about people based on data patterns. Even when they are highly accurate, they will occasionally reach the wrong decision in an individual case. Errors may not be easy for a human reviewer to identify, understand or fix.
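
As a purely illustrative sketch of that point, the snippet below uses synthetic data and a generic scikit-learn classifier (not any particular deployed system) to show how a model can be accurate on average while still giving some individuals the wrong decision, with nothing in the model’s output itself flagging which cases are wrong.

```python
# Illustrative only: a generic classifier trained on synthetic data.
# High overall accuracy does not mean every individual decision is correct.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
predictions = model.predict(X_test)

accuracy = (predictions == y_test).mean()
wrongly_decided = (predictions != y_test).sum()
print(f"Overall accuracy: {accuracy:.1%}")                        # high on average
print(f"Individuals given the wrong decision: {wrongly_decided}")  # but not zero
```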

While not every challenge from a data subject will be valid, organisations should expect that many could be. There are two particular reasons why this may be the case in ML systems:

What should organisations do?

Many of the controls required to ensure compliance with the GDPR’s provisions on solely automated systems are very similar to those necessary to ensure the meaningfulness of human reviews in non-solely automated AI systems.

Organisations should: 

However, there are some additional requirements and considerations organisations should be aware of: 
  1. The use of solely automated systems to make decisions with legal or similarly significant effects on data subjects will always trigger the need for a Data Protection Impact Assessment (DPIA). DPIAs are a compliance requirement, but also a helpful tool for organisations to reflect carefully on the appropriateness of deploying a solely automated process. In the case of AI systems, DPIAs should give particular consideration to the level of complexity and interpretability of the system, and the organisation’s ability to adequately protect individuals and their rights. 
  2. Our ExplAIn project is currently looking at how, and to what extent, complex AI systems might affect an organisation’s ability to provide meaningful explanations to data subjects. However, complex AI systems can also impact the effectiveness of other mandatory safeguards. If a system is too complex to explain, it may also be too complex to meaningfully contest, to intervene on, to review, or to put an alternative point of view against. For instance, if an AI system uses hundreds of features and a complex, non-linear model to make a prediction, then it may be difficult for a data subject to determine which variables or correlations to object to. The safeguards around solely automated AI systems are therefore mutually supportive, and should be designed holistically and with the data subject in mind. 
  3. The information about the logic of a system and explanations of decisions should give data subjects the necessary context to decide whether, and on what grounds, they would like to request human intervention. In some cases, insufficient explanations may prompt data subjects to resort to other rights unnecessarily: requests for intervention, expressions of views, or contests are more likely if data subjects do not feel they have a sufficient understanding of how the decision was reached.  
  4. The process for data subjects to exercise their rights should be simple and user friendly. For instance, if the result of the solely automated decision is communicated through a website, the page should contain a link or clear information allowing the individual to contact a member of staff who can intervene, without any undue delays or complications.  Organisations are expected to keep a record of all decisions made by an AI system, as well as whether a data subject requested human intervention, expressed any views, contested the decision, and whether the decision was changed as a result. 
  5. Organisations should monitor and analyse this data. If decisions are regularly changed in response to data subjects exercising their rights, organisations will be expected to amend their systems accordingly. Where the system is based on ML, this might involve incorporating the corrected decisions into fresh training data, so that similar mistakes are less likely to happen in future. More substantially, organisations may identify a need to collect more or better training data to fill the gaps that led to the erroneous decisions, or to modify the model-building process, eg by changing the feature selection. A rough sketch of how such records and corrections might be handled appears after this list.
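
The sketch below is hypothetical: the record fields, the AutomatedDecisionRecord class and the helper function are invented for illustration and are not prescribed by the GDPR or by ICO guidance. It is intended only to show that decisions, requests for human intervention and any corrections can be logged in a structured way, and that overturned decisions can later be collected and folded into fresh training data.

```python
# Hypothetical record-keeping and feedback sketch; field names and the
# helper below are illustrative assumptions, not a prescribed schema.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class AutomatedDecisionRecord:
    decision_id: str
    timestamp: datetime
    model_version: str
    decision: str                              # outcome given to the data subject
    human_intervention_requested: bool = False
    views_expressed: Optional[str] = None
    decision_contested: bool = False
    decision_changed: bool = False
    corrected_decision: Optional[str] = None   # outcome after human review, if changed

def corrected_examples(records: list[AutomatedDecisionRecord]) -> list[tuple[str, str]]:
    """Collect decisions overturned on review, so the corrected outcomes can be
    added to fresh training data before the model is next retrained."""
    return [
        (r.decision_id, r.corrected_decision)
        for r in records
        if r.decision_changed and r.corrected_decision is not None
    ]
```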

In addition to being a compliance requirement, this is also an opportunity for organisations to improve the performance of their AI systems and, in turn, build data subjects’ trust in them. However, if grave or frequent mistakes are identified, organisations will need to take immediate steps to understand and rectify the underlying issues and, if necessary, suspend the use of the automated system.

Your feedback

We would like to hear your views on this topic and genuinely welcome any feedback on our current thinking. Please share your views by leaving a comment below or by emailing us at AIAuditingFramework@ico.org.uk.