AI HLEG publishes recommendations for Trustworthy AI


The European Commission’s High-Level Expert Group on AI (AI HLEG) yesterday published its second deliverable, a series of policy and investment recommendations for Trustworthy AI. Building on the group’s first deliverable (Ethics Guidelines for Trustworthy AI), it has put forward 33 recommendations to ‘guide Trustworthy AI towards sustainability, growth and competitiveness, as well as inclusion – while empowering, benefiting and protecting human beings.’

The recommendations focus on four areas where the group felt Trustworthy AI could help achieve the most beneficial impact:

  1. humans and society
  2. the private sector
  3. the public sector
  4. Europe’s research and academia

In addition, the recommendations address the main enablers needed to facilitate those impacts, focusing on the availability of data and infrastructure, skills and education, appropriate governance and regulation, and funding and investment.

This list of recommendations is an attempt by the AI HLEG to tackle some of the most pressing areas for action. Many of the proposals seem sensible in principle, but they are arguably so broad that they risk being open to multiple interpretations.

techUK welcome the report’s focus on using public sector procurement as a tool for increasing the adoption of AI, its positive narrative on empowering people through the development and deployment of human-centric AI systems, and its encouragement of member states to increase digital literacy through training courses. The UK Government have taken a leading role in this area with their recent announcement of £5 million to drive innovation in adult online learning.

Some of the recommendations, if executed well, could support industry to deploy a consistent approach to Trustworthy AI across Europe. The proposal of an “EU-wide data repository through common annotation and standardisation” which provides consistent and standardised data formats across Europe could be useful for companies. techUK would also welcome a systemic mapping and evaluation of all existing EU laws that are particularly relevant to AI systems.

However, at points the text takes a negative, and arguably unjustified, tone towards existing technologies, such as cloud computing, that will help enable the benefits of AI. For example, the paper states that “the lack of well performing cloud infrastructure respecting European norms and values may bear risks regarding macroeconomic, economic and security policy considerations, putting datasets and IP at risk, stifling innovation and commercial development of hardware and compute infrastructure of connected IoT devices in Europe”.

In some cases, the recommendations do not take account of the legislation that is already in place. techUK believe existing law should be considered before placing unnecessary additional processes on the private sector, especially SMEs. For example, the AI HLEG suggest a mandatory obligation to conduct a Trustworthy AI assessment whenever the private sector uses AI systems with the potential to significantly impact human lives. However, there is no acknowledgement that under current legislation, private companies cannot legally deploy AI systems that would adversely impact an individual’s human rights. Similarly, the recommendation calling for children to be given a ‘clean slate’ of any stored data related to them as they move into adulthood does not acknowledge that children are already protected under current data protection law.

The report states that ‘unnecessarily prescriptive regulation should be avoided’ and that ‘[i]n contexts characterised by rapid technological change, it is often preferable to adopt a principle-based approach.’ Yet later the report calls for meaningful oversight mechanisms and new regulation to address the critical concerns listed in the Ethics Guidelines for Trustworthy AI. This section seems to run counter to the original intention of the AI HLEG Trustworthy AI guidelines as a flexible, evolving and voluntary toolkit. The European Commission should await the outcome of the pilot sessions before making premature decisions about oversight mechanisms.

In summary, many of the proposed recommendations, although reasonably sensible, remain very broad and could therefore be subject to multiple interpretations. The next European Commission will need to take a proportionate and evidence-based approach if it decides to implement any of them. techUK would welcome a more transparent, open dialogue on these issues before any new regulation is imposed.
