
Fourth Progress Report Towards Ambitions of the AI Safety Institute

On 20 May 2024, the UK’s AI Safety Institute released its fourth progress report. The following is a short outline of the key announcements, with more detailed information below.

The Institute is prioritising talent acquisition, having onboarded over 30 technical researchers and appointed Jade Leung as Chief Technology Officer.  

They have also launched an open-source AI safety evaluations platform called Inspect and published their first technical blog post, revealing vulnerabilities in AI models tested in April 2024.

The Institute has released the first International Scientific Report on the Safety of Advanced AI, involving 30 countries and chaired by Yoshua Bengio, with a final report set to be released before the France AI Summit.  

Additionally, they have opened a new office in San Francisco to enable the AISI to hire more top talent, collaborate closely with the US AI Safety Institute, and engage even more with the wider AI research community. The office is intended to help build the AISI team globally and to drive international coordination on AI safety.

Also announced is a partnership with the Canadian AISI to work closely together on AI safety, including collaborative work on systemic safety research. The aim is to share expertise to bolster existing testing and evaluation work. The partnership will enable secondments between the two countries and the joint identification of areas for research collaboration. This continues plans to develop a network of AI safety institutes to enhance testing, research, and safety standards. Confirmed by The Rt Hon Michelle Donelan MP and Canada's Science and Innovation Minister François-Philippe Champagne, the partnership will deepen existing links between the two nations.

At its establishment, the AISI set three priority areas to achieve its ambitions: developing and conducting evaluations of advanced AI models, conducting foundational AI safety research, and facilitating information exchange. More details on the key updates and commitments from the fourth progress report can be found below:

1) Develop and conduct evaluations of advanced AI models   

  • A key priority for the Institute is recruiting the right talent. The report confirms that over 30 technical researchers have now been onboarded, and recruitment continues on a rolling basis. 

  • Jade Leung appointed as Chief Technology Officer  

  • The Institute has published its first technical blog post on model evaluations. It covers an exercise conducted in April 2024 on publicly available frontier models across the Institute's focus areas: cyber, chem-bio, safeguards and autonomous systems. These baseline evaluations found that the models were vulnerable to basic ‘jailbreaks’. You can read more about how the AI Safety Institute is approaching evaluations here.  

  • Launch of Inspect, an open-source platform for running AI safety evaluations. Inspect is a software library that allows users to test specific capabilities of individual models. The AISI welcomes use and feedback, viewing open source as a mechanism for coordinating a range of stakeholders. A minimal sketch of what an Inspect task looks like is included below.  
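As an illustration of the kind of evaluation Inspect supports, the following is a minimal sketch of a task definition in the style of the Inspect documentation: a dataset of samples, a solver that prompts the model, and a scorer that judges the output. This is a toy example rather than one of the AISI's own evaluations, and exact module, function and parameter names may differ between Inspect versions.

```python
# Minimal sketch of an Inspect evaluation task (toy example, not an AISI evaluation).
# Assumes the inspect_ai package is installed and a model API key is configured.
from inspect_ai import Task, task, eval
from inspect_ai.dataset import Sample
from inspect_ai.scorer import includes
from inspect_ai.solver import generate


@task
def hello_world():
    # A task bundles a dataset, a solver (how the model is prompted)
    # and a scorer (how its answers are judged).
    return Task(
        dataset=[
            Sample(
                input="Just reply with Hello World",
                target="Hello World",
            )
        ],
        solver=[generate()],  # simply ask the model to respond
        scorer=includes(),    # check the target string appears in the output
    )


if __name__ == "__main__":
    # Run the evaluation against a chosen model (model name shown is illustrative).
    eval(hello_world(), model="openai/gpt-4o")
```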

2) Foundational AI Safety research   

  • Published the first International Scientific Report on the Safety of Advanced AI alongside 30 countries. The report, chaired by Yoshua Bengio with the Secretariat based in the UK AISI, collates the scientific evidence to date on AI risk, identifying gaps and overlaps to inform future research. This is an interim report, with the final version set to be published ahead of the France AI Summit. You can read techUK’s summary here.

3) Facilitating information exchange   

  • Opening a new AISI office in San Francisco to support the exchange of personnel and build on the agreements announced in the US-UK MoU. You can read more about that announcement here.  

  • A new partnership with the Canadian AISI, in a similar vein to the US-UK MoU; the update frames it as continued work towards interoperable approaches. The partnership is intended to create pathways for sharing expertise to support testing and evaluation work, and to enable secondment routes between the institutes. It follows the February announcement of the UK-Canada AI safety researcher exchange programme, under which AI safety researchers in the UK or Canada can receive funding for a temporary exchange in the other country.  

  • The fourth progress report notes the intention to build a network of AI Safety Institutes and equivalent government organisations which will work on testing, research and safety standards.  

Alongside this fourth progress report, Ian Hogarth has shared reflections on how this progress was made, with the AI Safety Institute now in operation for nearly a year. He describes the Institute as a startup inside government, noting the importance of speed in keeping pace with the field and of delivering products and iterating quickly. Hogarth also asks ‘what’s next’, pointing to progress in AI agents; given the potential harms of such advancements, this is a topic the AISI is focused on internally. You can read his candid reflections here. 

You can read more about the first, the second, and the third progress reports. If you would like to learn more, please email Tess.Buckley@techuk.org.   

Channel website: http://www.techuk.org/

Original article link: https://www.techuk.org/resource/fourth-progress-report-towards-ambitions-of-the-ai-safety-institute.html
