What do we learn from Machine Learning?
Blog posted by: Giovanni Buttarelli, 19 November 2018.
The history of Artificial Intelligence (AI) can be seen as a sequence of rising expectations and frustrating disappointments. Unlike the usual hype cycle for new technologies, AI has already experienced several cycles of “peaks of inflated expectations” and “troughs of disillusionment” in the sixty years since the term “artificial intelligence” was first coined by Stanford professor John McCarthy, considered one of the “fathers” of AI, who in 1956 designated it the “science and engineering of making intelligent machines”. When expectations for the possibilities of AI were high, and rapid progress seemed likely, popular culture often reflected the hopes and fears associated with the human fascination with “artificial beings”. One landmark representation of this cultural reflection of scientific and technological advancement is Stanley Kubrick's 1968 film “2001: A Space Odyssey”, the screenplay of which he wrote with Arthur C. Clarke. One of the main characters is the computer HAL 9000, so advanced in applying intelligent reasoning that it has developed a consciousness and suffers from what might be termed a psychological conflict and personality disorder - with fatal consequences for all but one of the spaceship crew under its control.
While the story reflected the expectations of scientists at the time of its making, we now know that by 2001 neither space technology nor computer science was advanced enough to send an AI-controlled spaceship to Jupiter. More time will be needed to get there, despite mankind’s remarkable achievements and continued research on the relevant technologies.
One extraordinary feature of HAL 9000, the fictitious computer running the spaceship, was its ability to learn. The authors of the script envisaged HAL as a general-purpose device which would acquire knowledge and capabilities by learning from its makers and other people. This contrasted with the computers of the time, which could only execute precisely designed programs - something that is still the case for most computers today.
Machine learning has been a central discipline in the field of AI for decades, and recent progress in this discipline has played a central role in the renewed interest in AI. Progress in computer hardware and software - enabling faster operations, the processing of larger amounts of data, and new storage and communications possibilities - has made it possible to apply machine learning technologies to new and bigger tasks and to advance other disciplines of AI. Natural Language Processing, Image Recognition and all kinds of operations based on data analysis are making significant progress thanks to machine learning.
New applications are so significant that they have caught the attention of the public. One of the top-ranked academic conferences on AI, Empirical Methods in Natural Language Processing (EMNLP) 2018 - where 'empirical' may be properly understood to mean data-driven - took place in Brussels a few days after the global data protection and privacy community held their annual meeting here. At EMNLP, researchers from academia and from the big technology firms reported new results in applying machine learning technology to enable computers to communicate better with humans in speech or writing - for example, in conducting dialogues on images, searches or health information. Research might also help create better tools which recognise hate speech or deceptive texts. The speed with which new research results become part of everyday products and services is astonishing, but raises concerns that the urge to be the first to launch a new service and the competition for market shares may overrule considerations about the societal impact of new AI services, or even prevent the proper assessment of this impact on society and the fundamental rights of individuals.
There are few authorities monitoring the impact of new technologies on fundamental rights so closely and intensively as data protection and privacy commissioners. At the International Conference of Data Protection and Privacy Commissioners, the 40th ICDPPC (which the EDPS had the honour to host), they continued the discussion on AI which began in Marrakesh two years ago with a reflection paper prepared by EDPS experts. In the meantime, many national data protection authorities have invested considerable efforts and provided important contributions to the discussion. To name only a few, the data protection authorities from Norway, France, the UK and Schleswig-Holstein have published research and reflections on AI, ethics and fundamental rights. We all see that some applications of AI raise immediate concerns about data protection and privacy; but it also seems generally accepted that there are far wider-reaching ethical implications, as a group of AI researchers also recently concluded. Data protection and privacy commissioners have now made a forceful intervention by adopting a declaration on ethics and data protection in artificial intelligence which spells out six principles for the future development and use of AI - fairness, accountability, transparency, privacy by design, empowerment and non-discrimination - and demands concerted international efforts to implement such governance principles. Conference members will contribute to these efforts, including through a new permanent working group on Ethics and Data Protection in Artificial Intelligence.
The ICDPPC was also chosen by an alliance of NGOs and individuals, The Public Voice, as the moment to launch its own Universal Guidelines on Artificial Intelligence (UGAI). The twelve principles laid down in these guidelines extend and complement those of the ICDPPC declaration.
We are only at the beginning of this debate. More voices will be heard: think tanks such as CIPL are coming forward with their suggestions, and so will many other organisations.
At international level, the Council of Europe has invested efforts in assessing the impact of AI, and has announced a report and guidelines to be published soon. The European Commission has appointed an expert group which will, among other tasks, give recommendations on future-related policy development and on ethical, legal and societal issues related to AI, including socio-economic challenges.
As I already pointed out in an earlier blogpost, it is our responsibility to ensure that the technologies which will determine the way we and future generations communicate, work and live together are developed in such a way that respect for fundamental rights and the rule of law is supported, not undermined. Developing such technologies in countries with the least protection for fundamental rights, controlled by authoritarian regimes, will not provide us with a sustainable and viable future infrastructure. The current debate on ethics will point to the ethical principles and values, some perhaps unconscious, to which we must pay particular attention. Policymakers and rule-makers around the globe will have to decide which laws are required to ensure that economic actors adjust their research and development strategies, as well as their business models, to bring them in line with a common understanding of what is morally sustainable in human advancement. We cannot run the risk that the pure profit motive leads to all moral standards being ignored, rewarding campaigns and practices which harm individuals, groups and wider society. The recent experiences with social media underline the need for a coordinated approach, driven by ethics and law, supported by an adapted and enforceable framework. In my own part of the world, I will continue to push the EU legislator to complete the modernisation of the EU’s data protection framework with the rapid adoption of a meaningful regulation on communications privacy.
Only by setting an example in AI and other areas of technological change can we motivate the rest of the world to follow the way of democracy and fundamental rights.