In the first in-depth post of our AI framework blog series, we explore how organisations can ensure ‘meaningful’ human involvement, so that AI-assisted decisions are not mistakenly classified as solely automated.
This blog forms part of our ongoing work on developing a framework for auditing AI. We are keen to hear your views in the comments below or you can email us.
Artificial Intelligence (AI) systems often process personal data either to support or to make a decision. For example, AI could be used to approve or reject a financial loan automatically, or to help recruitment teams identify interview candidates by ranking job applications.
Article 22 of the General Data Protection Regulation (GDPR) places strict conditions on AI systems that make solely automated decisions (i.e. without human input) with legal or similarly significant effects on individuals. AI systems that only support, or enhance, human decision-making are not subject to these conditions. However, a decision will not fall outside the scope of Article 22 just because a human has ‘rubber-stamped’ it: human input needs to be ‘meaningful’.
The degree and quality of human review and intervention before a final decision is made about an individual is the key factor in determining whether an AI system is solely or non-solely automated. Board members, data scientists, business owners, and oversight functions, among others, will be expected to play an active role in ensuring that AI applications are designed, built, and used as intended. Both the ICO and the European Data Protection Board (EDPB) have already published guidance relating to these issues.
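To make the distinction concrete, here is a minimal sketch, in Python, of how an organisation might record the degree of human involvement in each decision and flag likely rubber-stamping. All field names, thresholds, and the classification logic are illustrative assumptions for this post, not criteria drawn from the ICO or EDPB guidance.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical record of the human involvement in one AI-assisted
# decision. The fields and the threshold below are illustrative only.
@dataclass
class HumanReview:
    reviewer_can_override: bool      # reviewer has authority to change the AI output
    considered_other_factors: bool   # reviewer weighed information beyond the AI score
    seconds_spent: float             # time the reviewer actually spent on the case

def is_meaningful(review: Optional[HumanReview], min_seconds: float = 30.0) -> bool:
    """Return True if the review plausibly counts as 'meaningful'.

    No review at all, a reviewer with no authority to override, or a
    rubber-stamp (no extra factors considered, negligible time spent)
    all fail, suggesting the decision may be solely automated.
    """
    if review is None:
        return False
    return (
        review.reviewer_can_override
        and review.considered_other_factors
        and review.seconds_spent >= min_seconds
    )
```

A decision whose review fails this check would then be escalated for the Article 22 safeguards, rather than silently treated as human-made. In practice the signals of meaningfulness are richer than three fields, but logging them per decision, rather than asserting them once per system, is what makes the classification auditable.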