Information Commissioner's Office
Blog: Using biometric data in a fair, transparent and accountable manner
As technology takes ever greater strides, so organisations and businesses are harnessing its capabilities to help manage their contact with customers, including using it for means of identification and authentication.
While there are undoubtedly significant benefits in using new technologies, organisations need to be aware of the potential challenges when choosing and using any systems involving biometric data.
In January 2017, HMRC adopted a voice authentication service, Voice ID, which asked callers to some of its helplines to record their voice for use as their password.
A complaint from Big Brother Watch to the ICO revealed that callers were not given further information or advised that they did not have to sign up to the service. There was no clear option for callers who did not wish to register. In short, HMRC did not have adequate consent from its customers and we have issued an enforcement notice ordering HMRC to delete any data it continues to hold without consent.
In the notice, the Information Commissioner says that HMRC appears to have given ‘little or no consideration to the data protection principles when rolling out the Voice ID service’.
She highlights the scale of the data collection – seven million voice records – and that HMRC collected it in circumstances where there was a significant imbalance of power between the organisation and its customers. It did not explain to customers how they could decline to participate in the Voice ID system. It also did not explain that customers would not suffer a detrimental impact if they declined to participate.
The case raises significant data governance and accountability issues that require monitoring. We therefore plan to follow up the enforcement notice with an audit that will assess HMRC’s compliance with good practice in the processing of personal data.
The investigation also found that a data protection impact assessment (DPIA) appropriately considering the compliance risks of processing biometric data was not in place before the system was launched.
Any organisations planning on using new and innovative technologies that involve personal data, including biometric data, need to think about these key points:
- Under the GDPR, controllers are required to complete a DPIA where their processing is ‘likely to result in a high risk to the rights and freedoms of natural persons’, such as the large-scale use of biometric data. A DPIA is a process which should also help responsible controllers incorporate ‘data protection by design and by default’ principles into their projects. Data protection by design and default is a key concept at the heart of GDPR compliance.
- When you’ve done your DPIA, make sure you act upon the risks identified and can demonstrate that you have taken them into account. Use it to inform your work.
- Accountability is one of the data protection principles of the GDPR - it makes you responsible for complying with the GDPR and says that you must be able to demonstrate your compliance by putting appropriate technical and organisational measures in place.
- If you are planning to rely on consent as a legal basis, remember that biometric data is classed as special category data under the GDPR and any consent obtained must be explicit. The benefits of the technology cannot override the need to meet this legal obligation.
This is the first enforcement action taken in relation to biometric data since the advent of GDPR when, for the first time, biometric data was specifically identified as special category data that requires greater protection.
Our guidance on informed consent provides advice for organisations planning to use these kinds of systems and we are currently developing our guidance on biometric data.
With the adoption of new systems comes the responsibility to make sure that data protection obligations are fulfilled and customers’ privacy rights addressed alongside any organisational benefit. The public must be able to trust that their privacy is at the forefront of the decisions made about their personal data.
Steve Wood is Deputy Commissioner for Policy and responsible for the ICO’s policy position on the proper application of information rights law and good practice, through lines to take, guidance, internal training, advice and specific projects.