Human skill sets are not redundant yet!
As part of the ICO's AI Auditing Framework blog series, this post looks at how AI can exacerbate known security risks and make them more difficult to manage.
Personal data must always be processed in a manner that ensures appropriate levels of security against unauthorised processing, accidental loss, destruction or damage. There is no “one-size-fits-all” approach to security.
The appropriate security measures an organisation should adopt depend on the level and type of risks that arise from its specific processing activities. Using AI to process any personal data has important implications for an organisation's security risk profile, which need to be assessed and managed carefully. In this post we focus on the ways AI can adversely affect security by making known risks worse and more challenging to control.
Information security is a key component of our AI Auditing Framework, but it is also central to our work as the information rights regulator. The ICO is planning to expand its general security guidance to take into account the additional requirements set out in the GDPR. While this guidance will not be AI-specific, it will cover a range of topics relevant to organisations using AI, including software supply chain security and the increasing use of open-source software.

We are therefore particularly keen to hear your views on this topic so we can integrate them into both the framework and the guidance. We encourage you to use the comments section below, or to email us, to share your thoughts on AI-related security challenges, best practices, and any additional guidance you would like the ICO to issue.

Our key message for organisations is: review your risk management practices to ensure personal data is secure in an AI context.