Why Data Governance Thinking Must Keep Pace to Secure UK’s AI Future

29 Jun 2017 06:35 PM

techUK is pleased to have contributed to this important report on data governance in the 21st century as part of the project working group

The Royal Society and British Academy have launched their report on Data Management and Use: Governance in the 21st century. techUK welcomes this important forward-thinking report and is pleased to have contributed to its development as a member of the Royal Society and British Academy project working group.

We are living in a data-driven world, with implications for us all as individuals and for society. Given the central role that data plays in our lives, policy makers and regulators have rightly focused on the importance of a strong regulatory framework for data protection that ensures individuals' data privacy and security. In May 2018 the new European General Data Protection Regulation (GDPR) will come into effect. techUK sees GDPR as a significant step forward in ensuring that data protection law is fit for purpose in the 21st century.

However, we are now entering a new era in which machine learning and artificial intelligence will be at the heart of future innovation, raising issues that go beyond the protection of personal data.

The smart application of machine learning and artificial intelligence (AI) is already creating exciting new opportunities. For example, AI language processing is being used to automate the transcription of medical notes in healthcare; AI systems are being used to detect fraud in financial services; and AI technologies are at the heart of innovation in driverless vehicles. There will be real benefits from this automation. Money saved by reducing administrative costs in healthcare can be redeployed to invest in more doctors and nurses; machines will strengthen consumer protections against fraud; and driverless vehicles will transform our cities, making them cleaner, safer, more efficient and far more enjoyable places to live.

But how do we ensure that these smart machines are doing what they are supposed to be doing? How can we verify their safety and ensure they do not malfunction or fall vulnerable to cyber attacks? How do we ensure that, if they do fail, there is enough transparency to understand and fix what went wrong? How do we ensure the decisions these machines make are auditable, challengeable and, ultimately, understandable by humans?

In the examples above, many of these questions have already been addressed and built into AI solutions in use today. AI solutions developed in highly regulated environments such as healthcare, financial services and automotive safety are already required to have a high level of human control and transparency. However, as we look to the future and consider the implications of a world where humans live and work alongside intelligent machines more widely, we must embed today's best practice more broadly and put robust mechanisms in place for effective governance of intelligent machines.

This is why the Royal Society and British Academy report is so important and timely. Now is the time for the business, academic and research communities to come together to demonstrate that we can navigate these issues effectively. Developing consistent and effective answers to these and other important questions will need to be at the heart of data governance in the 21st century.

A new data stewardship body, as proposed by the report published yesterday, bringing together leading experts from academia, business and other fields, could be a significant step forward in building the capability and capacity we will need to anticipate technological innovations and put in place effective safeguards that place the needs of humans and human values at the heart of technological innovation. In particular, the fundamental principle of promoting human flourishing set out in the report will be essential to ensuring that machine learning and AI-driven systems are developed to act in the interests of humans.

The reality is that the development of AI is no longer science fiction, and as a result there are many profound questions that need to be explored and understood. That is why now is the time to put a mechanism in place to ensure that data-driven research and technological innovation in machine learning and AI are on the right path.

As the pace of innovation in machine learning and AI continues to accelerate, the time is right for academia, the business community, policy makers and others to come together to establish an effective mechanism to anticipate the implications of these innovations and mitigate potential risks. Doing this effectively is the best way to ensure that we can also be at the forefront of developing new technologies that will deliver huge benefits for individuals and society.