CDEI publish interim report on algorithmic bias in decision-making

techUK explores the recently published Centre for Data Ethics and Innovation interim report on algorithmic bias in decision-making.

The Centre for Data Ethics and Innovation (CDEI) recently published its interim report on algorithmic bias in decision-making along with a landscape summary, conducted by the Open Innovation Team.

The CDEI’s review focuses on exploring bias in four key sectors: policing, financial services, recruitment and local government. It has taken a phased approach to each sector, starting with policing and then moving on to financial services and recruitment, with work on local government starting in autumn 2019. The interim report also includes a commitment from the CDEI to produce a briefing paper on Facial Recognition Technology (FRT) later in the autumn, which will examine the wider ethical concerns surrounding the technology and will not be limited to the use of FRT by the police.

The review seeks to answer three sets of questions:

  1. Data: Do organisations and regulators have access to the data they require to adequately identify and mitigate bias?
  2. Tools and techniques: What statistical and technical solutions are available now or will be required in future to identify and mitigate bias and which represent best practice?
  3. Governance: Who should be responsible for governing, auditing and assuring these algorithmic decision-making systems?


The interim report highlights that data itself is often the source of bias but, at the same time, is a core element of tackling the issue. One problem raised is that some organisations do not collect diversity information at all, out of nervousness that doing so might be perceived as enabling biased treatment. This in turn limits their ability to properly assess whether a system is producing biased outcomes. There is a tension between building algorithms that are blind to protected characteristics and the need to check those same algorithms for bias against those characteristics.
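To make the outcome-checking side of that tension concrete, the sketch below (not taken from the report; data, group names and the 80% threshold are illustrative assumptions) compares favourable-outcome rates across groups defined by a protected characteristic — one common, simple test for biased outcomes:

```python
def selection_rates(outcomes):
    """Favourable-outcome rate per group.

    outcomes: dict mapping group name -> list of 0/1 decisions
              (1 = favourable outcome, e.g. a loan approved).
    """
    return {group: sum(ds) / len(ds) for group, ds in outcomes.items()}


def disparate_impact_ratio(outcomes):
    """Ratio of the lowest to the highest group selection rate.

    Values below roughly 0.8 (the 'four-fifths rule' used in US
    employment practice) are often treated as a flag for further
    investigation, not as proof of bias.
    """
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())


# Illustrative decisions for two hypothetical groups
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # rate = 6/8 = 0.75
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # rate = 3/8 = 0.375
}
print(disparate_impact_ratio(decisions))  # 0.375 / 0.75 = 0.5
```

Note that running this check requires exactly the diversity data some organisations are reluctant to collect — which is the tension the report describes.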

Tools and techniques

CDEI’s early work suggests that new approaches to identifying and mitigating bias are required and that specific tools are already starting to be developed. However, there is limited understanding of the full range of tools and approaches available and what constitutes best practice. This makes it difficult for organisations that want to mitigate bias in their decision-making processes to know how to proceed and which tools and techniques they should use.


According to the review, there is currently limited guidance and a lack of consensus about how to balance significant trade-offs (for example, between different kinds of fairness), or even how to have constructive and open conversations about them. In the policing sector, the CDEI is developing a Code of Practice in collaboration with the sector to help address this issue.

The report also highlights that a certain level of transparency about the performance of algorithms will be necessary for customers and citizens to be able to trust that those algorithms are fair. Giving developers space and opportunity to test algorithms against standard datasets, or to benchmark performance against industry standards, may enable consensus to form around appropriate definitions of fairness. The CDEI suggests that new functions and actors, such as third-party auditors, may also be required to independently verify claims made by organisations about how their algorithms operate.
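The auditing idea above can be sketched in code. The example below is a hypothetical illustration (not a method described in the report): an auditor replays a model's predictions on a benchmark dataset and compares true positive rates across groups — the "equal opportunity" gap, one of the competing fairness definitions the report alludes to. All data and group names are invented for illustration:

```python
def true_positive_rate(labels, preds):
    """Fraction of actual positives that the model correctly flags."""
    flagged = [p for y, p in zip(labels, preds) if y == 1]
    return sum(flagged) / len(flagged)


def equal_opportunity_gap(per_group):
    """Largest difference in true positive rate across groups.

    per_group: dict mapping group -> (labels, predictions).
    A gap near 0 suggests the model identifies true positives
    equally well in every group; a large gap is a red flag
    warranting further investigation.
    """
    tprs = [true_positive_rate(y, p) for y, p in per_group.values()]
    return max(tprs) - min(tprs)


# Illustrative benchmark: (true labels, model predictions) per group
benchmark = {
    "group_a": ([1, 1, 1, 0, 0], [1, 1, 0, 0, 0]),  # TPR = 2/3
    "group_b": ([1, 1, 0, 0, 1], [1, 1, 0, 1, 1]),  # TPR = 3/3
}
print(equal_opportunity_gap(benchmark))  # 1.0 - 2/3 ≈ 0.333
```

Satisfying this metric and the selection-rate parity measured elsewhere simultaneously is generally impossible when base rates differ between groups, which is one reason the report stresses that trade-offs between fairness definitions need open discussion rather than a single technical fix.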

The CDEI will submit a final report with recommendations to government in March 2020. If you’d like to find out more about this programme of work, please contact Katherine.

