The race against the ‘selfish algorithm’
Blog posted by: Dr Bina Rawal, 12 September 2019.
In a medical ethics class I was once taught that ethical dilemmas are difficult to resolve because they often involve choosing between two courses of action that are both, in some sense, right.
Which course of action one selects can depend on the weight given to the greater good (utilitarianism) versus one’s moral code or duty as an individual (deontology). ‘Primum non nocere’ (‘first, do no harm’) is another phrase often used in medical ethics; although it is not stated in precisely those words in the Hippocratic Oath, the Oath does encourage us to abstain from doing harm.
On a day-to-day basis, healthcare staff apply the four core principles of ethics that are ingrained in them: autonomy, justice, beneficence and non-maleficence. However, new technologies and big data bring added complexity to an NHS already awash with ethical dilemmas in which utilitarian and deontological principles conflict and cause stress for hard-pressed staff.
Code of conduct
What, then, does this mean for the use of artificial intelligence (AI) and data-driven technologies in delivering health and social care? The recently published code of conduct for data-driven health and care technology attempts to create a framework for this, using ethical principles developed by the Nuffield Council on Bioethics. These centre on respect for persons, human rights, participation and accounting for decisions. The Code sets out ten principles to be observed by technology providers, and closes by noting that ‘the single greatest threat to reliance on data-driven technology is the actual or possible presence of bias’.
The fact that algorithms used in AI-driven technologies can be subject to bias is not in dispute. The subject is discussed at length in Cathy O’Neil’s book ‘Weapons of Math Destruction’, where she cautions that just because an algorithm is implemented by an unemotional machine does not mean it cannot be biased or result in injustice. This got me thinking about Isaac Asimov’s Three Laws of Robotics:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Algorithms can surely be regarded as robots without arms, legs or faces. Machines using algorithms to perform defined and limited tasks often do so better than humans; this is referred to as narrow AI. In machine learning, the more data an algorithm is fed, the more accurate its predictions become, and the algorithm is continually tweaked in the process. Deep learning is a further level of sophistication in which machine learning simulates the way the brain applies pattern recognition, so that algorithms can develop their own (new) rules – a more generalised form of AI that more closely mirrors the way the human brain processes information.
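The ‘continual tweaking’ at the heart of machine learning can be sketched in a few lines. The example below is purely illustrative and not drawn from any real clinical system: a single decision threshold for flagging a hypothetical vital-sign reading is nudged each time it misclassifies an incoming example.

```python
# Minimal sketch of "tweaking an algorithm as more data arrives":
# an online update of a single decision threshold. All data here
# is invented for illustration.

def update_threshold(threshold, reading, label, lr=0.1):
    """Nudge the threshold towards fewer mistakes on this example.

    label is 1 if the reading should be flagged, else 0.
    """
    predicted = 1 if reading > threshold else 0
    # Move the boundary only when the prediction was wrong:
    # down if we missed a case, up if we raised a false alarm.
    if predicted != label:
        threshold += lr if predicted == 1 else -lr
    return threshold

# Stream of (reading, correct label) pairs -- e.g. a vital sign
# and whether a clinician actually flagged it.
stream = [(5.2, 0), (7.8, 1), (6.9, 1), (4.1, 0), (7.1, 1)]

threshold = 9.0  # deliberately poor starting point
for reading, label in stream:
    threshold = update_threshold(threshold, reading, label)

# After seeing the stream, the threshold has drifted downwards,
# towards the region that separates the two classes.
```

The point is not the arithmetic but the shape of the process: the rule the machine applies is not fixed in advance, it is a moving target that the data itself keeps adjusting.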
Principle seven of the Code stresses the need for transparency about the learning methodology an algorithm employs, and for the use of current best practice to explain algorithms to those taking actions based on their outputs. However, as these algorithms grow in sophistication, it may become necessary to apply a set of laws to govern this new incarnation of robots.
Algorithms are data-hungry, and unless we consciously apply some rules to the development and deployment of deep learning systems, we may inadvertently find ourselves dealing with ‘selfish algorithms’ whose only purpose is to expand their own functionality, regardless of the consequences.
If, for example, algorithms are used to blend and learn from data from controlled and uncontrolled sources (e.g. healthcare, genomics and social media), this could give rise to new insights into disease and its correlations with lifestyle. But how easy will it be to establish whether serious bias has crept in along the way, leading to unverifiable conclusions? Could those conclusions disadvantage some populations through invisible bias? Such an approach may be very powerful, but it could also raise issues of intrusion into privacy without the specific consent of data subjects, on healthcare and social media platforms alike.
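A toy sketch of how such invisible bias can arise from skewed sampling (all groups and numbers below are invented): a simple model fitted almost entirely on data from one population looks accurate for that group, yet is systematically wrong for an under-represented group whose baseline readings differ.

```python
# Hypothetical illustration of sampling bias. A decision threshold
# is fitted on data dominated by one population; it then
# misclassifies members of an under-represented group.

def fit_threshold(samples):
    """Place the decision boundary midway between the class means."""
    pos = [x for x, y in samples if y == 1]
    neg = [x for x, y in samples if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def error_rate(threshold, samples):
    wrong = sum(1 for x, y in samples if (1 if x > threshold else 0) != y)
    return wrong / len(samples)

# Group A (well represented): disease pushes the reading up.
group_a = [(4.0, 0), (4.5, 0), (5.0, 0), (8.0, 1), (8.5, 1), (9.0, 1)]
# Group B (under-represented): healthy baseline readings run higher,
# so a boundary tuned to group A flags healthy people as diseased.
group_b = [(7.0, 0), (7.5, 0), (10.5, 1), (11.0, 1)]

# The training set is, in effect, group A only.
threshold = fit_threshold(group_a)

print(error_rate(threshold, group_a))  # low error for the majority group
print(error_rate(threshold, group_b))  # much higher error for group B
```

Nothing in the model’s overall accuracy reveals the problem; it only becomes visible when performance is broken down by group, which is exactly what blended, uncontrolled data sources make hard to do.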
More education needed
The Topol Review, published earlier this year, stresses that extensive education and training of the clinician workforce, as well as the public, will be necessary to embrace the new way of practising medicine in the digital age. The delivery of this education for clinicians must keep pace with the introduction of these new technologies into the healthcare system.
If the current crop of newly qualified doctors whose graduation I recently attended are anything to go by, we are still falling short in this area. Meanwhile, a belt-and-braces approach may be to update the code of conduct for data-driven health and care technology regularly, building in more detailed laws or rules governing deep learning AI algorithms so as to avoid bias.
Others with more expertise in computer science are better placed to develop these laws. For now, as a patient, I would want to be reassured that AI will enhance clinical decision-making in healthcare in a way that does not result in automatic override and the attrition of clinical skills, judgement and humanity. I would also want healthcare to learn, through more effective interdisciplinary working, from the mistakes made in other sectors.
Dr Bina Rawal is a Non Executive Director of the Innovation Agency.