Guest blog: AI in healthcare - How economics can help us understand the risks
Guest blog from techUK member Frontier Economics
Federico Cilauro, Manager at Frontier Economics, a leading economics consultancy, writes about the potential of AI to revolutionise medical diagnosis.
AI diagnosis has great potential
Screening for diseases has traditionally been expensive, because of the amount of labour it involves. But AI offers a new frontier, drawing on data to make faster, more cost-effective diagnoses.
For example, in 2020, a deep learning system developed by Google DeepMind and Moorfields Eye Hospital outperformed five out of six doctors in predicting the onset of macular degeneration. And one recent study showed that smartwatch readings could identify coronavirus infections in 63% of cases, before symptoms were evident.
The risk: AI could lead to unfair healthcare access
But the use of AI in diagnosis is not without its risks. Poorly calibrated algorithms could create bias in healthcare provision, on the basis of sex, race or other characteristics.
How? AI systems depend on the data they use – and certain groups are under-represented in health data, because of previous difficulties in accessing treatment. Another problem is ‘the healthy volunteer effect’: those who volunteer to provide genomic data are typically healthier and better off than average. Different population groups also have different genetic markers for certain diseases.
If healthcare algorithms learn from this biased data, there’s a risk they’ll make biased decisions about who should be treated.
Using economics to understand the trade-offs
There is no single measure of fairness in algorithmic decision making. But an economic approach can shed light on the trade-offs involved by:
- Assessing the risks of getting the diagnosis wrong; and
- Assessing the benefits and costs of using AI for diagnosis, given that risk.
This article focusses on the risk assessment. One measure is predictive accuracy: how successfully does the algorithm predict the disease in question? We can then look at predictive parity: are the rates of accuracy the same for different population groups?
But also important are the risks of false negatives – incorrectly identifying an ill person as healthy – and false positives – identifying a healthy person as ill.
An example – error rates
Let’s say a hospital uses AI tech to diagnose diabetes in a group of patients. Twelve of the patients are white and sixteen are black.
The algorithm identifies three white patients as having diabetes, and is correct in two of those cases. That's a predictive accuracy of two in three, or 67%. It also flags six of the black patients as high risk, of whom four do have the disease. The predictive accuracy is again 67%, so the algorithm's designers say the tech is fair.
But using statistical analysis – like that used in researching fairness in recidivism prediction – we can look closer and find a difference in error rates.
In particular, there's a worrying disparity in false negative rates. Among white patients it's 33% (the system fails to detect diabetes in one of three cases), but among black patients it's 43% (three out of seven). The implication: among black patients, diabetes may go undetected for longer, with more serious long-term complications.
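The example above can be checked with a few lines of arithmetic. This is a minimal sketch in Python, using exact fractions and the confusion counts implied by the example (the function name `metrics` and the count labels are ours, for illustration):

```python
from fractions import Fraction

def metrics(tp, fp, fn, tn):
    """Positive predictive value and false negative rate from confusion counts."""
    ppv = Fraction(tp, tp + fp)  # of those flagged, how many are truly ill?
    fnr = Fraction(fn, fn + tp)  # of those truly ill, how many were missed?
    return ppv, fnr

# White group: 3 flagged, 2 correctly; 1 of the 3 diabetics missed; 8 healthy unflagged
white_ppv, white_fnr = metrics(tp=2, fp=1, fn=1, tn=8)
# Black group: 6 flagged, 4 correctly; 3 of the 7 diabetics missed; 7 healthy unflagged
black_ppv, black_fnr = metrics(tp=4, fp=2, fn=3, tn=7)

print(white_ppv, black_ppv)  # both 2/3 (67%): predictive parity holds
print(white_fnr, black_fnr)  # 1/3 (33%) vs 3/7 (43%): false negative rates differ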
If parity is impossible, risks must be incorporated
Whenever the prevalence of a disease varies between population groups, it's statistically impossible, short of a perfect predictor, to achieve both predictive parity and identical error rates. And the greater the difference in prevalence, the bigger the disparity in false negative rates.
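One way to see why is an identity from the recidivism-fairness literature linking the four quantities: FPR = p/(1-p) × (1-PPV)/PPV × (1-FNR), where p is prevalence. A hypothetical sketch (Python; the function name and the prevalence figures, taken from the example above, are ours): if we force identical predictive accuracy and identical false negative rates on two groups with different prevalence, their false positive rates are forced apart.

```python
from fractions import Fraction as F

def implied_fpr(prevalence, ppv, fnr):
    # Identity linking the rates: FPR = p/(1-p) * (1-PPV)/PPV * (1-FNR)
    return (prevalence / (1 - prevalence)) * ((1 - ppv) / ppv) * (1 - fnr)

# Force identical PPV (2/3) and identical FNR (1/3) on two groups whose
# disease prevalence differs: 25% (3 of 12) vs 43.75% (7 of 16)
fpr_low = implied_fpr(F(1, 4), F(2, 3), F(1, 3))
fpr_high = implied_fpr(F(7, 16), F(2, 3), F(1, 3))

print(fpr_low, fpr_high)  # 1/9 vs 7/27: the false positive rates cannot also be equal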
Understanding these trade-offs is therefore a vital step in the development of AI diagnosis technology. And once the trade-offs are understood, the next step must be to define and measure biases in a systematic way, and to assess the benefits and costs of using AI given the risk of bias, to produce evidence-based recommendations for the systems of the future.