WGPlus (Archive)
AI: Opening Pandora's Box?
The following is an ICO discussion regarding new security risks associated with AI, whereby the personal data originally used to train a system might subsequently be revealed by the system itself. This post is part of the ICO's ongoing Call for Input on developing the ICO framework for auditing AI. We encourage you to share your views by leaving a comment below or by emailing us at AIAuditingFramework@ico.org.uk.

In addition to exacerbating known data security risks, as we discussed in a previous blog post, AI can also introduce new and unfamiliar ones. For example, it is normally assumed that the personal data of the individuals whose data was used to train an AI system cannot be inferred simply by observing the predictions the system returns in response to new inputs. However, new types of privacy attacks on Machine Learning (ML) models suggest that this is sometimes possible. In this update we focus on two such attacks: 'model inversion' and 'membership inference'.

While the ICO's overall security guidelines already apply, as part of our AI auditing framework we are keen to hear your feedback on what would constitute reasonable approaches to threat modelling for these attacks, and on other best-in-class organisational and technical controls to address them.
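To make the risk concrete: membership inference asks whether an attacker who can only query a trained model can tell if a particular individual's record was part of its training data, while model inversion goes further and attempts to reconstruct attributes of the training records themselves. The Python sketch below is not from the ICO post; it is a minimal illustration of the simplest membership inference baseline, with all dataset sizes, model choices, and the 0.9 threshold chosen purely for demonstration. The idea is that an overfitted model tends to be more confident on records it was trained on, so simply thresholding its prediction confidence separates training members from non-members better than chance.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Toy data: half is used for training ("members"), half is held out.
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5, random_state=0)

    # Deliberately overfit: unpruned trees largely memorise their training half.
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_in, y_in)

    def true_label_confidence(model, X, y):
        # Probability the model assigns to each record's actual label,
        # which is all the attacker observes through the prediction API.
        return model.predict_proba(X)[np.arange(len(y)), y]

    # Attacker's rule: guess "training member" when confidence exceeds a threshold.
    THRESHOLD = 0.9
    member_rate = (true_label_confidence(model, X_in, y_in) > THRESHOLD).mean()
    non_member_rate = (true_label_confidence(model, X_out, y_out) > THRESHOLD).mean()

    print(f"flagged as members, actual members:     {member_rate:.2%}")
    print(f"flagged as members, actual non-members: {non_member_rate:.2%}")

On an overfitted model the first rate is typically far higher than the second; that gap is exactly the signal a membership inference attacker exploits, and it is what controls such as regularisation or differentially private training aim to narrow.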
Researched Links:
ICO: Privacy attacks on AI models |