The Office of the Privacy Commissioner has requested submissions and feedback on the privacy regulation of biometrics in Aotearoa New Zealand. This follows a position paper released in 2021, which I also reviewed and gave feedback on. Biometrics technology is evolving rapidly, and while our principles-based privacy approach has generally served us well in the face of new technologies, there are specific concerns relating to biometrics that warrant further consideration and attention. A few of my suggestions are a bit bold and wacky, mostly because I believe it is strategically helpful to advocate for stronger change, given that any eventual proposals are likely to be watered down. Below is my written submission, although I recommend reading the Consultation Paper first for context, particularly for para 16 below:
2 October 2022 [I asked for a very small extension]
1. Thank you for the opportunity to provide feedback through this consultation process. I am a Research Fellow with Koi Tū: The Centre for Informed Futures at The University of Auckland, based in Wellington. My research area is digital technologies and their impacts on society, particularly public sector use and privacy. I am a member of the Privacy Foundation and a Fellow of InternetNZ. The views in this submission are my own and may not reflect those of my employers or the organisations of which I am a member.
2. As a general comment, I am strongly in favour of stronger and clearer protections for biometric information. In my opinion, we need legislation that includes stronger penalties for the inappropriate collection, storage, or use of biometric information, as well as clear limits on unacceptable use cases. A Code of Practice, while helpful, is only a partial step towards providing the necessary protection for individuals as the risks associated with biometric information continue to grow.
3. Additionally, I am generally opposed to establishing a separate Biometrics Commissioner (or Surveillance Camera Commissioner) as has been seen in some comparable jurisdictions. The Office of the Privacy Commissioner should already be able to fulfil the responsibilities and duties of such a Commissioner, but needs to be properly resourced and given more significant enforcement powers.
4. While there has been an increasing level of concern around Facial Recognition Technology (FRT) recently, in some ways this specific focus distracts from the broader issues surrounding biometrics. The use of other biometric characteristics (e.g. fingerprints, voiceprints, activity data) can be just as concerning as the use of FRT in some applications, or more so. I would encourage OPC not to lose sight of other forms of biometrics beyond FRT as it continues this review.
On the objectives of the review
5. It is positive that the objectives do not establish a false dichotomy between regulation and “encouraging innovation”, as has happened in other government consultations around data and digital issues. It is important to frame this discussion in the context of what the people of Aotearoa New Zealand will accept; agencies will lose their right to innovate (aka their social licence to operate) if they fail to sufficiently mitigate the risks and end up causing harm.
6. To that end, I would encourage the review to also include active “outbound” engagement with the people of Aotearoa New Zealand, rather than predominantly relying on individuals and agencies to file submissions to OPC. For example, OPC could conduct surveys and focus groups as a “pulse check” to understand broader perceptions of biometric information and what people are comfortable with, as a precursor to developing principles and approaches towards regulation. Relying mostly on “inbound” submissions is likely to exclude many communities that do not have the time, resource, or capacity to engage in these types of processes.
7. It is particularly important to uphold Te Tiriti and develop a stronger understanding of Māori perspectives on biometrics. This is important not only in the context of developing further regulation, but also because OPC should play a role in helping agencies across Aotearoa NZ understand the principles that influence the appropriateness of using biometric information in a local context. For example, agencies need to understand that tā moko and moko kauae are not merely decorative, but reflect an individual’s whakapapa and personal history; if agencies choose to use facial recognition systems on Māori individuals, they may be capturing not just images of individual faces but also designs that reflect an entire whānau. Amplifying Māori perspectives on biometric information would be a significant step towards helping agencies genuinely understand why they should or shouldn’t use biometric information in particular ways.
8. This is also important in the context of promoting or adopting particular standards or principles, as many of these were developed in overseas contexts and will not sufficiently reflect the landscape of Aotearoa New Zealand. While we can draw inspiration from the work of others overseas, directly adopting their standards is likely to lead to unintended harms in the local context if our uniqueness is not reflected.
On the case for more regulatory action and risk assessment
9. With the growing use of biometric technologies, the risk of harm also increases. Two ways in which biometric information is distinguishable from other types of personal information are that a) the information tends to relate to innate characteristics of a person in a way that feels invasive for another person to have access to, and b) those characteristics are not consciously determined and cannot be easily changed. Biometric information should be considered sensitive (as OPC correctly treats it), in large part because the harms an individual may suffer from having their biometric information collected or used in a way they oppose are greater in magnitude than for many other types of personal information.
10. At the same time, there is a growing sense that there are insufficient penalties or consequences for agencies after data breaches occur. While these agencies may receive some negative media attention, there is generally very little care for the individuals whose data has been lost, and the organisations themselves get away with simply pledging to do better, with no accountability for follow-through. Compliance Notices are a useful tool for OPC to improve things going forward (as demonstrated in the Reserve Bank of New Zealand case), but the growing rate of data breaches (https://securitybrief.co.nz/story/the-biggest-cyber-attacks-of-2021-in-new-zealand) demonstrates insufficient proactivity, and a Compliance Notice may not be sufficient to remediate harm. While there have not yet been any known biometrics-related data breaches in NZ, as the use of biometric technologies grows such a breach is inevitable if agencies maintain their existing attitudes towards data security and privacy. The Suprema/BioStar 2 data breach in the UK is notable for the biometric data that was left exposed.
11. Three directions in which regulatory measures could be focused are: a) centralised evaluation of biometric systems, including PIAs and providing certification (similar to the Privacy Trust Mark) or requiring audits; b) greater guidance and support for agencies wishing to use biometric information; and c) stronger penalties for the inappropriate collection, storage, or use of biometric information.
12. One tool that could be helpful for all of these suggestions is the use of a risk-based approach towards regulatory thresholds. The most relevant and well-known example of this is the European Union’s draft regulatory framework on AI, which specifies unacceptable, high, limited, and minimal risk applications of AI as a foundation for proposing different thresholds for regulation.
13. Defining which applications fall into which risk category could be done by OPC under delegated/subordinate legislation, to help keep the framework up to date as technology evolves. Defining our own lists would allow the framework to be appropriate for the Aotearoa New Zealand context – for example, facial recognition used on Māori may carry a higher risk locally than it would in another jurisdiction (due to bias risks and cultural considerations around tā moko and moko kauae).
14. A risk-based approach acknowledges that the applications and potential harms of using biometric information sit on a spectrum, and that applying one-size-fits-all regulation to all uses of biometric information is dangerous. In the review of Facial Recognition Technology that I conducted with Dr Nessa Lynch for NZ Police in 2021, we drew upon her previous 2020 Law Foundation report on FRT in New Zealand (written alongside Liz Campbell, Joe Purshouse, and Marcin Betkier) and developed a risk framework with unacceptable, high, medium, and low risk categories, with attributes and example applications in the policing context (see section 8.1 and the last page of https://www.police.govt.nz/sites/default/files/publications/facial-recognition-technology-considerations-for-use-policing.pdf). While we were not tasked with defining policy responses based on these thresholds, this approach has allowed NZ Police to clearly state which applications of FRT are unacceptable and are not being explored, while remaining able to explore lower-risk FRT applications. Part 5 of that report also details current and potential future uses of FRT by NZ Police.
15. As an example of how such a framework could be applied, regulation could make it clear that use of biometric information in the unacceptable risk category is illegal, require annual audits by a certifying agency (which could be OPC or other accredited agencies) for use in the high-risk category, and keep the existing Privacy Act 2020 principles and protections in place for limited or minimal risk applications. A risk-based approach keeps the focus and attention on the applications with the greatest risk of harm and negative impact, while avoiding overly burdensome restrictions or compliance costs on less risky and more acceptable applications.
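To illustrate the shape of such a framework, here is a minimal sketch in Python. It is purely illustrative: the tier names come from the EU draft framework and the regulatory responses from para 15 above, but all identifiers and the exact mapping are hypothetical, not a proposal.

```python
from enum import Enum

# Risk tiers as in the EU's draft AI framework (para 12); names illustrative.
class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical tier-to-obligation mapping, per the example in para 15.
# In practice the tier definitions would sit in delegated/subordinate
# legislation maintained by OPC (para 13) so they can evolve over time.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "Prohibited outright",
    RiskTier.HIGH: "Annual audit by OPC or an accredited certifying agency",
    RiskTier.LIMITED: "Existing Privacy Act 2020 principles apply",
    RiskTier.MINIMAL: "Existing Privacy Act 2020 principles apply",
}

def obligation_for(tier: RiskTier) -> str:
    """Look up the regulatory response for an application's assessed tier."""
    return OBLIGATIONS[tier]

# e.g. an application assessed as unacceptable risk:
print(obligation_for(RiskTier.UNACCEPTABLE))  # Prohibited outright
```

The key design point is that applications, not technologies, are assigned tiers, so OPC could move a given scenario between tiers without rewriting the framework itself.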
16. While the factors mentioned in the consultation paper are generally appropriate for considering risk in biometric systems, I would suggest adding considerations for the following (a rough sketch of how these factors might be captured follows the list):
a. awareness and transparency (i.e. do people know that their biometric information is being collected and understand how it will be used; this is separate from “genuine choice”, as some agencies hide the use of biometrics)
b. whether or not alternatives are offered (which is a part of “genuine choice”)
c. storage of biometric information (including policies around governance, audit logs, and access, as well as the likelihood of inappropriate access and use by staff members or third parties)
d. the level of automation (i.e. is there a human in the loop, and will there be human-led oversight or an appeals process)
e. combining biometric information with other sources (e.g. also using health information or pulling data from the Integrated Data Infrastructure)
f. overseas/cross-border transfers (i.e. will the biometric information be subject to differing standards and regulatory environments)
g. influencing power balances (i.e. how does collecting and using biometric information shift power between the individual and the agency, does it enable benefit for the individual or is it disempowering)
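As flagged above, the following sketch shows how these factors could be captured in a structured assessment. The field names and the deliberately naive counting heuristic are hypothetical; a real assessment would weigh the factors qualitatively rather than simply counting them.

```python
from dataclasses import dataclass

# Hypothetical structure for the additional risk factors (a)-(g) above.
@dataclass
class BiometricRiskFactors:
    collection_disclosed: bool         # (a) awareness and transparency
    alternative_offered: bool          # (b) part of "genuine choice"
    storage_governed: bool             # (c) governance, audit logs, access controls
    human_in_the_loop: bool            # (d) automation level / appeals process
    combined_with_other_sources: bool  # (e) e.g. health data, IDI linkage
    transferred_overseas: bool         # (f) differing standards and regulation
    disempowers_individual: bool       # (g) shifts power away from the individual

def risk_flags(f: BiometricRiskFactors) -> int:
    """Count the factors pointing towards higher risk (illustrative only)."""
    risky = [
        not f.collection_disclosed,
        not f.alternative_offered,
        not f.storage_governed,
        not f.human_in_the_loop,
        f.combined_with_other_sources,
        f.transferred_overseas,
        f.disempowers_individual,
    ]
    return sum(risky)

# e.g. an undisclosed FRT system with no alternative offered, overseas
# transfers, and a disempowering effect on the individual:
example = BiometricRiskFactors(False, False, True, True, False, True, True)
print(risk_flags(example))  # 4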
On bias, discrimination, and collective harms
17. The accuracy and bias of biometric technologies (and of other digital technologies) have become increasingly prevalent topics of discussion over the last decade, and are concerns now commonly raised in opposition to biometric technologies. However, it is important to consider whether bias and discrimination are inherent characteristics of biometric technologies that can never be remediated, or whether the technology will eventually become accurate enough that these concerns fall away.
18. As an example, discussion of bias and discrimination in FRT systems has primarily attributed the problem to the use of biased datasets when training the AI models that distinguish between human faces. While some have argued that FRT is inherently biased and can never be as accurate for some ethnicities, genders, or ages as it is for others, more recent studies have shown ethnic and gender bias diminishing in commercial FRT systems. We have also seen that commercial products trained on different datasets (e.g. products developed in the US vs in China) demonstrate different (and contradictory) biases, indicating that these issues may be resolvable with larger datasets and stronger training protocols.
19. The reason this is important to consider is that while bias and discrimination can cause harm in the context of biometric information, we should also consider the harms that can be caused when systems are working accurately and are shown to be free of bias at the technology level. We must also consider biases that occur at the people or system level, for example in how biometric technologies might be used against people of particular demographics by the system owners, rather than because the technology itself is flawed.
20. In that context, it is important to consider regulation of biometric technologies beyond the technology itself. For example, a regulation requiring that an FRT system be at least 99% accurate across all ethnicities would not prevent that system from being used exclusively on minority ethnicities. Discriminatory use of these technologies goes beyond privacy regulation because it is not simply about an individual’s personal information, but about how the technology is used.
21. The protections currently in the Privacy Act 2020 generally have an individualistic framing in terms of protecting personal information. However, when it comes to biometric information, we should also consider how it may be used for or against groups of people (or “collectives”) in ways that may be harmful. Take the example of a supermarket using FRT to help enforce trespass notices: if there are biases in how those trespass notices are issued, individuals of particular ethnicities may be more likely to be approached by security guards. In such circumstances, it is unlikely that privacy legislation related to biometrics will be able to provide much relief. While scenarios like this may be covered by broader human rights legislation (e.g. the Human Rights Act), with recourse through the Human Rights Review Tribunal and other courts, collective harm needs to be considered in any biometrics regulation.
On regulatory expectations and actions
22. It is of significant concern to me that the biometrics position paper frames OPC’s regulatory expectations as though the existing principles and regulatory tools in the Privacy Act are sufficient. While the principles-based approach of the Privacy Act has served us well and is applicable to a wide variety of applications and technologies, the penalties and their enforcement are not sufficient to prevent harm occurring through the use of biometric information.
23. The existing biometrics position paper effectively builds on the information privacy principles by suggesting questions that should be asked during the development of biometric systems, but there is no consequence for answering these questions poorly or ignoring negative results. Even the “expectation” that agencies will undertake a PIA for any biometrics project carries no regulatory weight. As with much of our privacy legislation, the position paper assumes that agencies are both competent and good actors, and neither is a safe assumption.
24. While it is helpful for OPC to state its expectations, to take inspiration from jurisdictions that provide more detailed guidance (e.g. the UK, EU, Australia, and Canada), and to explore promoting particular standards or principles, in my opinion all of this is insufficient unless it is supported by well-resourced enforcement and penalties, and by the subsequent establishment of precedents that encourage compliance and disincentivise poor actors. Agencies should have to prove that they are meeting a higher standard of care when it comes to biometric information.
25. While we must keep proportionality in mind, it is clear to me that the penalties for causing privacy-related harm are an insufficient disincentive. The current $10,000 maximum penalty in the Privacy Act 2020 is well below the levels seen in comparable legislation, especially in situations where there is collective harm. The European Union’s GDPR provides for fines of up to 20 million euros or four percent of annual global turnover, whichever is higher. Australia’s Privacy Act currently has a maximum penalty of AUD 2.22 million, although the Online Privacy Bill proposes increasing penalties to the greater of AUD 10 million, three times the benefit obtained through the misuse of personal information, and 10% of the company’s annual domestic turnover. Given the sensitive nature of biometric information, it stands to reason that penalties for poor behaviour by agencies could be higher than for other types of personal information. It is also important that those responsible for privacy breaches be required to compensate and support the victims.
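To make the Australian “greater of” formula concrete, here is a small worked sketch (the function name and figures are illustrative only, not drawn from the Bill itself):

```python
# Greater of AUD 10m, 3x the benefit obtained, or 10% of annual domestic
# turnover, as per the Online Privacy Bill proposal described above.
def max_penalty(benefit_obtained: float, annual_domestic_turnover: float) -> float:
    return max(10_000_000, 3 * benefit_obtained, 0.10 * annual_domestic_turnover)

# e.g. a breach yielding a $5m benefit by a company turning over $400m:
print(max_penalty(5_000_000, 400_000_000))  # 40000000.0, i.e. 10% of turnover
```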
26. While my preference is for legislative change in the area of biometrics, I appreciate that this is not within OPC’s control, and that in the meantime a Code of Practice would be helpful to proactively mitigate harms and establish new norms. If a Code of Practice is to be developed, there should be some acknowledgement that while some rules would be common across all scenarios involving biometric information, the Code cannot be one-size-fits-all. However, instead of specifying different rules by technology or type of biometric information, a risk-based approach could allow rules to be applied at different levels of risk. This would allow applications across different technologies to be grouped together, and offer more flexibility, as OPC could move applications or scenarios between the risk categories if needed. It would also allow for a stronger appreciation of the different types of risks involved in private sector versus public sector use of biometric information.
27. I would encourage OPC to consider applying proposed regulatory action to every agency that handles the biometric information of any New Zealander, including overseas agencies that may or may not be conducting business in New Zealand. Given the immutable nature of most biometric information, we need to ensure that the information is protected for all New Zealanders, regardless of whether they are physically in New Zealand at the time. If a New Zealander’s biometric information is compromised while they are overseas and they then return to New Zealand, they may still suffer harms locally. Further, the costs of the harm may be externalised (in an economic sense) to New Zealand, rather than borne by the agency perpetrating the harms. We would be failing New Zealanders if an agency could collect and misuse their biometric information while they were overseas when the same activity would be illegal if they were in New Zealand.
28. The Privacy Act 2020 already has limited extraterritorial effect, in that agencies carrying out activities in New Zealand are covered, and non-resident individuals physically in New Zealand are covered. This suggestion would further expand that effect to include New Zealanders who are overseas, ensuring their privacy rights are protected with respect to biometric information. While issues of extraterritorial jurisdiction are tricky, there is some evidence that this is a successful mechanism for lifting minimum standards in privacy more broadly, such as through the EU’s General Data Protection Regulation (GDPR) and, to a lesser extent, Illinois’ Biometric Information Privacy Act (BIPA) and the California Consumer Privacy Act (CCPA).
29. I would also encourage OPC to write to the relevant Ministers (and Chief Executives), not just encouraging them to develop biometrics legislation, but also to increase their level of understanding and to help them identify risks in the departments they are responsible for. It is worrying that some of the recent public sector biometrics controversies in NZ developed without the awareness of the responsible Minister (e.g. the Police trial of Clearview AI, the CAA trial of facial recognition for passenger counts, insufficient Māori engagement on One Time Identity proposals), and these issues need to be taken more seriously by top decision-makers. I note that Minister David Clark indicated in late 2020 that facial recognition regulations would be reviewed (https://www.rnz.co.nz/news/national/432152/facial-recognition-regulations-will-be-reviewed-minister), but to my knowledge no outcome has been published, and any review needs to go beyond the context of digital identity.
30. In a broader context, OPC could also establish a repository of publicly available, agency-submitted Privacy Impact Assessments so that a) it is easier for individuals to find relevant PIAs without navigating complicated websites, b) OPC can get a sense of who is using personal (and biometric) information and whether they are complying with the Privacy Act, and c) current practice is demonstrated for agencies to draw inspiration from (without OPC providing legal advice or suggesting that these reflect best practice). This could then lay the foundation for future regulation requiring agencies to submit their PIAs to OPC for inclusion in this repository (not necessarily for certification or endorsement, and with the necessary exceptions to public dissemination, such as commercial sensitivity or national security). This could be limited to PIAs relating to sensitive information (including biometrics) if necessary.
Thank you for taking the time to consider this submission. I would be willing to participate in further discussions or meetings if I can be of assistance.