What’s industry’s role in shaping ethical AI?

11 Jun 2018 01:53 PM

Read the findings of a techUK/Nuffield Foundation workshop exploring the role and responsibilities of industry in embedding ethical AI.

This was originally posted on the Ada Lovelace Institute website. 

As part of the development work of the Ada Lovelace Institute, the Nuffield Foundation is hosting a series of stakeholder roundtables, workshops and events that explore:

All of the insights gathered through these events will inform our prospectus for the Ada Lovelace Institute, alongside early-stage research we are commissioning with the Centre for the Future of Intelligence.

As part of its work scoping the Institute, the Nuffield Foundation recently convened an interdisciplinary workshop in partnership with techUK, the UK’s technology trade association, to better understand the emerging challenges that the development of AI poses for industry, as well as the role the Ada Lovelace Institute might play in tackling them. This workshop was held under the Chatham House Rule.

Participants in this workshop included management consultancies, law firms, HR and organisational consultants, and AI and tech developers and suppliers. We brought these people together in dialogue with the Institute’s own staff, as well as with researchers we are partnering with at Cambridge University’s Centre for the Future of Intelligence.

This note summarises the key themes discussed.

EMERGING SOCIAL AND ETHICAL ISSUES

To identify and scope out the emerging social and ethical issues that industry expects to grapple with, we posed the following thought experiment:

Imagine you are still working in your sector in 10 years’ time. What key emerging social and ethical issues do you think your organisation will need to engage with and respond to both externally and internally?

There was consensus at the workshop that there is a series of social and ethical challenges which must be addressed to build trust in AI and data-driven technologies. We have grouped these under four core (and interlinked) issues:

A LACK OF PUBLIC UNDERSTANDING AND THE NEED FOR INCLUSIVE SOCIETAL DIALOGUE ABOUT AI SYSTEMS

‘We need better public understanding and more routes to human agency… can we do this through creating more demand for ‘ethical’ AI?’

A lack of public understanding and education on AI was identified as a growing issue. Participants flagged the importance of this going beyond an education campaign: as AI is increasingly used in society, it would be important to have an inclusive dialogue between those directly affected by the technologies and those who develop them. As such, participants argued for both a wider social dialogue about the ethical and social implications ‘beyond the developers of the technologies’ and a more responsive and effective interface between those who use and are affected by the technologies and those who develop and provide them (government as well as industry).

A LACK OF CONSIDERATION OF HUMAN AND SOCIAL WELL-BEING BY THOSE DESIGNING AI/DATA-ENABLED SYSTEMS

‘How do you do ‘human accountability’ in this space?’

Connected to the need for a more responsive dialogue between those who use and are affected by technologies, participants felt ‘human needs’ were not fully considered when designing systems, which were primarily driven by shareholder rather than stakeholder value. Participants observed that technologies often focused on maximising profit to the exclusion of maximising social value: business models driven solely by profit could cause future social issues affecting the industry as a whole. It was felt to be critical for companies to be able to ask and answer whether their products delivered ‘social value’ (understood broadly to include building community and social capital, supporting the wellbeing of individuals and communities, and preserving the environment), and in what ways they might be detrimental to it.

Some felt that tech systems often failed to take into account, or meet, the needs of those most excluded from society (such as the poorest).

Many participants acknowledged a tension that would need to be negotiated between governance structures that support a social mission, values and purpose, and effective business models. However, respondents identified an urgent need to tackle emerging market dominance by larger AI and data providers and controllers as part of this question: some participants felt that, given their market share, the largest tech companies (‘GAFA’) inherently ‘set the standards’ for how society is considered.

Participants welcomed the idea of developing and applying an ethical code of conduct, as well as creating the conditions in which a range of business models working with AI could flourish. It was highlighted that techUK is already progressing work in this area.

AN UNEQUAL DISTRIBUTION OF THE BENEFITS AND HARMS FROM TECHNOLOGY

‘To tackle inequality, we need to find ways of distributing the benefits from technology, as well as more global governance’

Inequality emerged as a key issue for many participants, who felt that technology companies’ decisions and development of technology had broader social consequences which had to be considered. Many participants saw technologists as having a key role in understanding their agency within a larger system, and that this required them to:

THE LACK OF AN EFFECTIVE GOVERNANCE FRAMEWORK NATIONALLY AND GLOBALLY

‘How do we build new law and governance structures that can deal with such new and emerging threats and disruptions to society?’

Contributors argued that new ways to think systemically and work collaboratively as businesses, at a global level, would be of critical importance. The new General Data Protection Regulation (GDPR) was seen by some as valuable in providing the legal basis and foundations for industry to consider ethical questions more holistically.

There was thoughtful discussion about the tension between openness, which facilitates innovation in the use of technologies, and growing geopolitical tensions, with some seeing the consensus in favour of promoting co-operation across nations as increasingly at risk.

In the longer term, some participants suggested that a global governance framework would be especially helpful, but flagged the rise of populism and nationalism across the globe, and increasing global tensions, as potential barriers to putting one in place.

‘How can technical and regulatory solutions interact better (e.g. to solve algorithmic bias) and how can they better complement one another?’

There was much discussion on the need for competent and smarter regulation that struck an appropriate balance between fostering innovation and protecting human rights, a balance that is itself in service to the mission of building trustworthiness.

Some advocated for a more ‘agile’ form of governance to keep pace with innovation, while others felt governance and regulation were by definition slower and more permanent. Several participants also mentioned the need to have in place new insurance or liability frameworks that could provide recompense, redress or remedy for negative distributional impacts on people.

There was collective recognition of the need for industry to work together to anticipate emerging issues. Participants acknowledged the need for the community to think and act beyond legal compliance, with a focus on creating the cultural norms, values and corporate leadership (underpinned by effective regulation) that lend themselves to a relationship between technology and society that engenders public legitimacy and trust. It was suggested that organisations such as the Ada Lovelace Institute might be able to work with industry and government to consider ‘the bigger picture’ and look beyond more immediate pressures to provide the longer-term thinking needed to support a society enabled by data and AI.

IDEAS TO IMPROVE ETHICAL PRACTICE

‘How do you measure and enforce more ethical practice? Can we even do that?’

Participants identified a number of skills, tools and capabilities which industry might need to develop or instil to enable it to grapple with some of these ethical issues. These included:

NEXT STEPS AND CONTINUING DIALOGUE

This interdisciplinary workshop was the first of a series of seminars, workshops and roundtables we are hosting in collaboration with partners, with a view to engaging with perspectives from industry, academia, think tanks, civil society and the wider public. This will help inform the work and priorities of the Ada Lovelace Institute, ensuring we reflect diverse viewpoints within its design.

Outcomes from this workshop include:

If you’d be interested in finding out more about the work of the Institute or future events, please sign up to our mailing list online.