AI risks having bias and discrimination ‘hard-wired’ in, BCS tells Committee on Standards in Public Life

10 Feb 2020 01:56 PM

A lack of diversity in the teams developing artificial intelligence (AI) can lead to in-built bias and discrimination in its decisions, says BCS, The Chartered Institute for IT, in a new report by a major government advisory body.

The report, published by the Committee on Standards in Public Life (CSPL), examines whether the existing frameworks and regulations around machine learning are sufficient to ensure high standards of conduct are upheld as technologically assisted decision-making is adopted widely across the public sector.

Dr Bill Mitchell OBE, Director of Policy at BCS, The Chartered Institute for IT, said:

“Lack of diversity in product development teams is a concern as non-diverse teams may be more likely to follow practices that inadvertently hard-wire bias into new products or services.”

Sampling errors can also produce discriminatory outcomes. For example, a machine learning tool designed to diagnose skin cancer that has been trained only on images of white skin could be less accurate when applied to black skin, the report explains. Such bias in the training data may not be the result of active human prejudice, but it can still produce a discriminatory outcome because the system is more likely to misdiagnose BAME people.
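
To make the mechanism concrete, here is a minimal sketch of such a sampling error, using synthetic data and scikit-learn; it is an illustration constructed for this article, not an example from the report. A classifier trained on records drawn almost entirely from one group turns out to be markedly less accurate for the under-represented group, even though no prejudice appears anywhere in the code.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Synthetic two-feature records; the feature/diagnosis relationship
    # differs slightly between the two groups (captured by 'shift').
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] > shift).astype(int)
    return X, y

# Sampling error: 980 training examples from group A, only 20 from group B.
Xa, ya = make_group(980, shift=0.0)
Xb, yb = make_group(20, shift=1.5)
model = LogisticRegression().fit(np.vstack([Xa, Xb]),
                                 np.concatenate([ya, yb]))

# Balanced held-out sets for each group expose the accuracy gap.
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    Xt, yt = make_group(500, shift)
    print(name, "accuracy:", round(model.score(Xt, yt), 3))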

Dr Mitchell added:

“There is a very old adage in computer science that sums up many of the concerns around AI-enabled public services: ‘Garbage in, garbage out.’ In other words, if you put poor, partial, flawed data into a computer, it will mindlessly follow its programming and output poor, partial, flawed computations. AI is a statistical-inference technology that learns by example. This means if we allow AI systems to learn from ‘garbage’ examples, then we will end up with a statistical-inference model that is really good at producing ‘garbage’ inferences.”
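
As a concrete illustration of that adage, the following sketch (again synthetic data and scikit-learn, constructed for this article rather than taken from the report) trains the same learner twice: once on clean labels and once on labels carrying a systematic flaw. The second model dutifully learns the flaw and reproduces it at inference time.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 2))
y_true = (X[:, 0] > 0).astype(int)  # the real rule the model should learn

# 'Garbage' labels: a systematic flaw in data collection has marked every
# record with X[:, 1] > 0 as positive, regardless of the truth.
y_garbage = np.where(X[:, 1] > 0, 1, y_true)

clean = LogisticRegression().fit(X[:1000], y_true[:1000])
noisy = LogisticRegression().fit(X[:1000], y_garbage[:1000])

# Scored against reality on held-out data, the model fed flawed examples
# has learned the flaw rather than the real rule.
print("trained on clean labels:  ", round(clean.score(X[1000:], y_true[1000:]), 3))
print("trained on garbage labels:", round(noisy.score(X[1000:], y_true[1000:]), 3))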

Diverse teams also make public authorities more likely to identify the potential ethical pitfalls of an AI project, the report suggests. Many contributors emphasised this point, telling the Committee that diverse teams lead to more diverse thought, which in turn helps public authorities identify any potential adverse impact of an AI system.
