Monday 7 November 2022

Reflections on Deep Tech in Aotearoa New Zealand

This article originally appeared on the Matū website.

I was asked to speak at the Angel Association of New Zealand (AANZ) Summit in Queenstown at the end of October 2022, alongside Hayley Horan from Microsoft and Carl Jones from WNT Ventures. The session was about leveraging Aotearoa New Zealand’s SaaS and Deep Tech competitive advantages to build globally significant companies. I mostly covered the deep tech side of the equation, and we thought it would be useful to share some of my talk notes and insights from the conference.

What are we good at?

There are some niche specialities in New Zealand deep tech where we are genuinely just very good at the science. An exhaustive list is a fool’s errand (and I apologise if we have missed your favourite area or start-up), but here is a starting list of some of our strengths, with commercial examples to back them up:

When we reflect on these areas, there are two common attributes: first is that we have genuine world experts who are demonstrating leadership beyond the lab, and second is that there is a common hub or network or community that fosters broader growth. When they come together we can genuinely say that we have a competitive advantage against others worldwide.

We also have some areas where New Zealand is a globally representative beachhead market, which makes our home soil a better place to experiment, such as in agritech (Halter, BioLumic, Engender, CropX, Cropsy), conservation and bioprotection (Ampersand, Wilderlab), and earthquake resilience (Tectonus, Seismic Shift). These are slightly different to the previous list because there is a lot more competition overseas, but doing it out of New Zealand offers unique opportunities and insight.

What are we not so good at?

While we have our strengths, there are some areas where we simply cannot compete with the giants on a global scale. Largely this is down to a lack of human and financial capital where the race is won with brute force and speed, rather than finesse. For example, large AI systems that need to hoover up huge amounts of data, where training a model might cost hundreds of thousands of dollars in computing time, are very difficult to build from New Zealand. The top research institutions around the world are struggling to compete with the likes of Google and Meta and Baidu, alongside the start-ups that raise hundreds of millions of dollars to make those technologies work at scale. This is not to say that we can’t apply these technologies in our niches (such as AI in agritech or AI in medical devices), but that it is very hard for us to compete in fundamental AI. Perhaps there are better opportunities for international collaboration to pair our expertise with the technical and capital resources from overseas.

There are other big scary science areas where we will find it difficult to compete – billions are being poured into quantum computing, we don’t have much of a biohacking community here, we are not competing on genetic engineering due to our regulatory and cultural environment, and we have relatively little investment in our oceans despite being surrounded by water. We should never say never though – even a few years ago, anyone suggesting that we could be doing nuclear fusion in New Zealand would have been called a fool, but now we have a start-up kicking off that journey!

What are the broader trends affecting research commercialisation right now?

The real experts are in Technology Transfer Offices (TTOs) across the country, so we can only offer observations from our position as investors in the ecosystem. Counter to many people’s expectations, through COVID we actually saw a significant increase in commercialisation from universities; some institutions that averaged one start-up a year suddenly had four or five.

However, in 2022 the TTOs have generally seen a drop in invention disclosures, meaning fewer projects being put up for consideration. This is probably a symptom of our border closures preventing international postgraduate students from getting into NZ and working in our labs. International students are a crucial part of the academic workforce; in many research groups they are the ones actually doing the operationalisation and implementation work, and without them a lot of science has had to be conducted in different ways or has simply stalled. With international students now returning, we hope to see that trend reverse.

The New Zealand ecosystem also has very high levels of competition in research grant funding – the major government grants have success rates in the order of 10-15%. It can be argued that this helps weed out poor quality, but it also means many researchers are wasting a lot of time applying for grants, and then having to find other ways to support their work. I’m not saying we should approve every research grant, but 10-15% success rates are also probably not healthy.

However, we are seeing more academics take an interest in commercialisation and start-ups. It may not be very obvious because a lot of it happens quietly – generally researchers want to take their time and learn what they can in the “safe” environment of the university before they jump out of the gates and face commercial drivers. It means that, as investors, the time to start supporting a company is not the day we deposit investment funds – it might be months or years before a company is even formed. Human capital has been a challenge for a long time in the early-stage ecosystem, and we have to do something about it and grow capability rather than relying too much on importing talent.

Are we trend setters or trend followers in New Zealand?

We generally see risk aversion in the local angel investor space when it comes to deep tech, which can be overcome if something has the “cool” factor – but what’s cool is what’s on trend. It takes a lot of bravery to be at the front of the trend, and many investors worry about getting the timing wrong and going too early. We also see investors looking for quick wins, which is fine if you only care about financial returns, but those wins tend not to deliver meaningful impact – we need to look to build long-term defensible advantages for New Zealand.

But as long as we are dependent on international pools of capital for our start-ups, we will probably largely continue to be trend followers. Local investors still want to see that a similar path has been cut before, and that there is a roadmap to future capital. When we think about capital strategy for our start-ups, with recent government interventions like NZGCP Elevate we have the capital to fund New Zealand companies for longer than we used to, but in general there is still a point where we look overseas for large funding rounds or exits. Maybe when we get to the next stage of maturity in the capital ecosystem, when we are genuinely providing end-to-end capital, then we will have the confidence to support companies long-term and see some of those bolder trend-setting calls.

Thanks to my colleagues Will McKay, Samuel Sutton, and Kiri Lenagh-Glue for helping review and edit this article.

Sunday 2 October 2022

Submission on Privacy Regulation of Biometrics in Aotearoa New Zealand

The Office of the Privacy Commissioner has requested submissions and feedback on privacy regulation of biometrics in Aotearoa New Zealand. This follows a position paper released in 2021, which I also reviewed and gave feedback on. Biometrics technology is evolving rapidly and while our principles-based privacy approach has generally served us well in the face of new technologies, there are specific concerns relating to biometrics that warrant further consideration and attention. A few of my suggestions are a bit bold and wacky, mostly because I believe it is strategically helpful to advocate for stronger change given that it is likely any eventual proposals will be watered down. Below is my written submission, although I recommend reading the Consultation Paper first for context, particularly for para 16 below:

2 October 2022 [I asked for a very small extension]

1. Thank you for the opportunity to provide feedback through this consultation process. I am a Research Fellow with Koi Tū: The Centre for Informed Futures at The University of Auckland, based in Wellington. My research area is in digital technologies and their impacts on society, particularly in terms of public sector use and privacy. I am a member of the Privacy Foundation and a Fellow of InternetNZ. The views in this submission are my own and may not reflect those of my employers or the organisations that I am a member of.

2. As a general comment, I am strongly in favour of stronger and clearer protections for biometric information. In my opinion, we need legislation to be developed that includes stronger penalties for inappropriate collection, storage, or use of biometric information, as well as clear limits on unacceptable use cases. A Code of Practice, while helpful, is only a partial step towards providing the necessary protection for individuals as the risks associated with biometric information continue to grow.

3. Additionally, I am generally opposed to establishing a separate Biometrics Commissioner (or Surveillance Camera Commissioner) as has been seen in some comparable jurisdictions. The Office of the Privacy Commissioner should already be able to fulfil the responsibilities and duties of such a Commissioner, but needs to be properly resourced and given more significant enforcement powers.

4. While there has been an increasing level of concern around Facial Recognition Technology (FRT) recently, in some ways this specific focus distracts from the broader issues surrounding biometrics. The use of other biometric characteristics (e.g. fingerprints, voiceprints, activity data) can be just as concerning as, or more concerning than, the use of FRT in some applications. I would encourage OPC not to lose sight of other forms of biometrics beyond FRT as they continue this review.

On the objectives of the review

5. It is positive that the objectives do not establish a false dichotomy between regulation and “encouraging innovation” as has been established in other government consultations around data and digital issues. It is important to frame this discussion in the context of what the people of Aotearoa New Zealand will accept; agencies will lose their right to innovate (aka their social licence to operate) if they fail to sufficiently mitigate the risks and end up causing harm.

6. To that end, I would encourage that the review also include active “outbound” engagement with the people of Aotearoa New Zealand, rather than predominantly relying on individuals and agencies to file submissions to OPC. For example, OPC could conduct surveys and focus groups as a “pulse check” to understand broader perceptions of biometric information and what people are comfortable with, as a precursor to developing principles and approaches towards regulation. Relying mostly on “inbound” submissions is likely to exclude many communities who do not have the time, resource, or capacity to engage in these types of processes.

7. It is particularly important to uphold Te Tiriti and develop a stronger understanding of Māori perspectives on biometrics. This is not only important in the context of developing further regulation, but also because OPC should play a role in helping agencies across Aotearoa NZ understand the principles that influence the appropriateness of using biometric information in a local context. For example, agencies need to understand that tā moko and moko kauae are not merely decorative, but also reflect an individual’s whakapapa and personal history, and that therefore if they choose to use facial recognition systems on Māori individuals then they may not just be capturing images of individual faces but also designs that reflect an entire whānau. Amplifying Māori perspectives on biometric information would be a significant step towards helping agencies actually understand why they should or shouldn’t use biometric information in particular ways.

8. This is also important in the context of promoting or adopting particular standards or principles, as many of these exist in overseas contexts and will not sufficiently reflect the landscape of Aotearoa New Zealand. While we can draw inspiration from the work of others overseas, directly using their standards is likely to lead to unintended harms in the local context if our uniqueness is not reflected.

On the case for more regulatory action and risk assessment

9. With the growing use of biometric technologies, the risk of harm also increases. Two ways that biometric information is distinguishable from other types of personal information are that a) the information tends to relate to the innate characteristics of a person in a way that feels invasive for another person to have access to, and b) those characteristics are not consciously determined and cannot be easily changed. Biometric information should be considered sensitive (as OPC correctly does), in large part because the harms that may be felt by an individual from having their biometric information collected or used in a way that they oppose are greater in magnitude than for many other types of personal information.

10. At the same time, there is a growing sense that there are insufficient penalties or consequences for agencies after data breaches have occurred. While these agencies may receive some negative media attention, there is generally very little care for the individuals whose data has been lost, and the organisations themselves get away with simply pledging to do better with no accountability on follow-through. Compliance Notices are a useful tool for OPC to make things better going forwards (as has been demonstrated in the Reserve Bank of New Zealand case), but the growing rate of data breaches (https://securitybrief.co.nz/story/the-biggest-cyber-attacks-of-2021-in-new-zealand) demonstrates insufficient proactivity and a Compliance Notice may not be sufficient to remediate harm. While there have not been any known biometrics-related data breaches yet in NZ, as the use of biometric technologies grows this is inevitable if we maintain existing agency attitudes towards data security and privacy. The Suprema/Biostar 2 data breach in the UK is notable for the biometric data that was left exposed.

11. Three directions where we could focus regulatory measures include: a) centralised evaluation of biometric systems including PIAs and providing certification (similar to the Privacy Trust Mark) or requiring audits, b) greater guidance and support for agencies wishing to use biometric information, and c) stronger penalties for the inappropriate collection, storage, or use of biometric information.

12. One tool that could be helpful for all of these suggestions is the use of a risk-based approach towards regulatory thresholds. The most relevant and well-known example of this is the European Union’s draft regulatory framework on AI, which specifies unacceptable, high, limited, and minimal risk applications of AI as a foundation for proposing different thresholds for regulation. 

13. Defining which applications fall into which risk category would be done by OPC under delegated/subordinate legislation to help keep the framework up-to-date as technology evolves. Defining our own lists would allow the framework to be appropriate for the Aotearoa New Zealand context – for example facial recognition that will be used on Māori may carry a higher risk in a local context than it would in another jurisdiction (due to bias risks and cultural considerations around tā moko and moko kauae).

14. A risk-based approach acknowledges that the applications and potential harms of using biometric information sit on a spectrum, and to apply one-size-fits-all regulation to all use of biometric information is dangerous. In the review that I conducted with Dr Nessa Lynch on Facial Recognition Technology for NZ Police in 2021, we drew upon work from her previous 2020 Law Foundation report on FRT in New Zealand (alongside Liz Campbell, Joe Purshouse, and Marcin Betkier), and developed a risk framework with unacceptable, high, medium, and low risk categories, with attributes and example applications in the policing context (see section 8.1 and the last page of https://www.police.govt.nz/sites/default/files/publications/facial-recognition-technology-considerations-for-use-policing.pdf). While we were not tasked with defining policy responses based on these thresholds, this approach has allowed NZ Police to state clearly which applications of FRT are unacceptable and are not being explored, while still being able to explore lower-risk FRT applications. Part 5 of that report also details current and potential future uses of FRT by NZ Police.

15. As an example of how such a framework could be applied, regulation could make it clear that use of biometric information in the unacceptable risk category is illegal, with use in the high-risk category requiring annual audits by a certifying agency (which could be OPC or other accredited agencies), and keeping the existing Privacy Act 2020 principles and protections in place for limited or minimal risk applications. The use of a risk-based approach helps keep the focus and attention on the applications with the greatest risk of harm and negative impact, while avoiding overly burdensome restrictions or compliance burdens on less risky and more acceptable applications.

16. While the factors mentioned in the consultation paper are generally appropriate for considering risk in biometric systems, I would suggest adding considerations for:

a. awareness and transparency (i.e. do people know that their biometric information is collected and understand how it will be used, which is separate to “genuine choice” as some agencies hide the use of biometrics)

b. whether or not alternatives are offered (which is a part of “genuine choice”)

c. storage of biometric information (including policies around governance, audit logs, and access, as well as the likelihood of inappropriate access and use by staff members or third parties)

d. the level of automation (i.e. is there a human in-the-loop, will there be a human-led oversight or appeals process)

e. combining biometric information with other sources (e.g. also using health information or pulling data from the Integrated Data Infrastructure)

f. overseas/cross-border transfers (i.e. will the biometric information be subject to differing standards and regulatory environments)

g. influencing power balances (i.e. how does collecting and using biometric information shift power between the individual and the agency, does it enable benefit for the individual or is it disempowering)

On bias, discrimination, and collective harms

17. Accuracy and bias of biometric technologies (and other digital technologies) have become increasingly prevalent topics of discussion over the last decade, and are concerns now commonly expressed in opposition to biometric technologies. However, it is important to consider whether bias and discrimination are inherent characteristics of biometric technologies that can never be remediated, or whether the technology will eventually become accurate enough that these concerns fall away.

18. As an example, the discussion around bias and discrimination in FRT systems has been primarily attributed to the use of biased datasets when training the AI models that distinguish between human faces. While some have argued that FRT is inherently biased and can never be as accurate for some ethnicities, genders, or ages as it is for others, more recent studies have shown that ethnic and gender bias is disappearing in commercial FRT systems. We have seen that commercial products trained on different datasets (e.g. products developed in the US vs in China) demonstrate different (and contradictory) biases, indicating that these issues may be resolvable with larger datasets and stronger training protocols.

19. The reason that this is important to consider is that while bias and discrimination can cause harm in the context of biometric information, we should also consider the harms that can be caused when the systems are working accurately and shown to be free of bias at a technology level. We must also consider biases that occur at the people or system level, for example in how biometric technologies might be used against people of particular demographics by the system owners, rather than because the technology itself is flawed.

20. In that context, it is important to consider regulation of biometric technologies beyond the technology itself. For example, regulation that a FRT system must be at least 99% accurate across all ethnicities would not prevent that system from being used exclusively on minority ethnicities. Discriminatory use of these technologies goes beyond privacy regulation because it is not simply about an individual’s personal information, but about how the technology is used.

21. The protections currently in the Privacy Act 2020 generally have an individualistic framing in terms of protecting personal information. However, when it comes to the use of biometric information, we should also consider how it may be used for or against groups of people (or “collectives”) in ways that may be harmful. Take the example of a supermarket using FRT to help enforce trespass notices: if there are biases in how those trespass notices are issued, this may mean individuals of particular ethnicities are more likely to be approached by security guards. In such circumstances, it is unlikely that privacy legislation related to biometrics will be able to provide much relief. While scenarios like this may be covered by broader Human Rights legislation (e.g. the Human Rights Act) with recourse through the Human Rights Review Tribunal and other courts, collective harm needs to be considered in any biometrics regulation.

On regulatory expectations and actions

22. It is of significant concern to me that the biometrics position paper frames OPC’s regulatory expectations as though the existing principles and regulatory tools in the Privacy Act are sufficient. While the principles-based approach of the Privacy Act has served us well and is applicable towards a wide variety of applications and technologies, the penalties and their enforcement are not sufficient to prevent harm from occurring with the use of biometric information.

23. The existing biometrics position paper effectively builds on the information privacy principles by suggesting questions that should be asked during the development of biometric systems, but there is no consequence for answering these questions poorly or ignoring negative results. Even the “expectation” that agencies will undertake a PIA for any biometrics project does not carry any regulatory weight behind it. As with much of our privacy legislation, the position paper assumes that agencies are both competent and good actors, and these are not safe assumptions.

24. While it is helpful for OPC to be stating their expectations, to take inspiration from other jurisdictions in providing more detailed guidance (e.g. UK, EU, Australia, and Canada), and to explore promoting particular standards or principles, in my opinion this is all insufficient if not supported by sufficiently resourced enforcement and penalties, and subsequent establishment of precedents that encourage compliance and disincentivise poor actors. Agencies should have to prove that they are meeting a higher standard of care when it comes to biometric information.

25. While we must keep proportionality in mind, it is clear to me that the penalties for causing privacy-related harm are insufficiently disincentivising. The current $10,000 penalty in the Privacy Act 2020 is well below the levels seen in comparable legislation, especially in situations where there is collective harm. The European Union’s GDPR has fines of up to either 20 million euros or four percent of annual global turnover, whichever is higher. Australia’s Privacy Act currently has a maximum penalty of AUD 2.22 million, although the Online Privacy Bill proposes increasing penalties to the greater of AUD 10 million, three times the benefit obtained through misuse of personal information, or 10% of the company’s annual domestic turnover. Given the sensitive nature of biometric information, it stands to reason that penalties for poor behaviour by agencies could be higher than for other types of personal information. It is also important that those responsible for privacy breaches should be required to compensate and support the victims.

26. While my preference is for legislative change in the area of biometrics, I appreciate that this is not within OPC’s control and that in the meantime a Code of Practice would be helpful to proactively mitigate harms and establish new norms. If a Code of Practice is to be developed, there should be some acknowledgement that while there would be some common rules across all scenarios using biometric information, the Code cannot be one-size-fits-all. However, instead of specifying different rules by the technologies or types of biometric information, a risk-based approach could allow for those rules to be applied at different levels of risk. This would allow for applications across different technologies to be grouped together, and offer more flexibility as OPC can move applications or scenarios between the risk categories if needed. This can also allow for stronger appreciation of the different types of risks faced between private sector and public sector use of biometric information.

27. I would encourage OPC to consider applying proposed regulatory action to every agency that handles the biometric information of any New Zealander, including overseas agencies that may or may not be conducting business in New Zealand. Given the immutable nature of most biometric information, we need to ensure that the information is protected for all New Zealanders regardless of whether or not they are physically in New Zealand at the time. If a New Zealander’s biometric information is compromised while they are overseas, and then they return to New Zealand, they may still suffer harms locally. Further, the costs of the harm may be externalised (in an economic sense) in New Zealand, rather than by the agency perpetrating the harms. We would be failing New Zealanders if an agency could collect and misuse their biometric information while they were overseas when the same activity would be illegal if they were in New Zealand. 

28. The Privacy Act 2020 already has limited extraterritorial effect, in that agencies that carry out activities in New Zealand are covered, and non-resident individuals physically in New Zealand are covered. This suggestion would further expand that effect to include New Zealanders who are overseas, to ensure their privacy rights are protected with respect to biometric information. While issues of extraterritorial jurisdiction are tricky to consider, there is some evidence that this is a successful mechanism for lifting minimum standards in privacy more broadly, such as through the EU’s General Data Protection Regulation (GDPR) and to a lesser extent through Illinois’ Biometric Information Privacy Act (BIPA) and the California Consumer Privacy Act (CCPA).

29. I would also encourage OPC to write to the relevant Ministers (and Chief Executives), not just encouraging them to develop biometrics legislation, but to also increase their level of understanding and to help them identify risks in the departments they are responsible for. It is worrying that some of the recent public sector biometrics controversies in NZ have developed without the awareness of the responsible Minister (e.g. Police trial of Clearview AI, CAA trial of facial recognition for passenger counts, insufficient Māori engagement on One Time Identity proposals), and these issues need to be taken more seriously by the top decisionmakers. I note that Minister David Clark previously indicated in late 2020 that facial recognition regulations would be reviewed (https://www.rnz.co.nz/news/national/432152/facial-recognition-regulations-will-be-reviewed-minister), but to my knowledge no outcome has been published and any review needs to go beyond the context of digital identity.

30. In a broader context, OPC could also establish a repository of publicly-available agency-submitted Privacy Impact Assessments so that a) it is easier for individuals to find relevant PIAs rather than navigating complicated websites, b) OPC can get a sense of who is using personal (and biometric) information and whether or not they are complying with the Privacy Act, and c) current practice is demonstrated for agencies to draw inspiration from (without OPC providing legal advice or suggesting that these reflect best practice). This could then lay the foundations for future regulation that requires agencies to submit their PIAs to OPC for inclusion in this repository (not necessarily for certification or endorsement, and with the necessary exceptions on public dissemination such as commercial sensitivity or national security). This could be limited to PIAs relating to sensitive information (including biometrics) if necessary.

Thank you for taking the time to consider this submission. I would be willing to participate in further discussions or meetings if I can be of assistance.

Friday 23 September 2022

Submission on possible changes to notification rules under the Privacy Act 2020

The Ministry of Justice is requesting submissions on possible changes to the Privacy Act 2020, specifically around introducing requirements for individuals to be notified if their personal information is being disclosed (or transferred) between two agencies / entities. The Information Privacy Principles of the Privacy Act generally require that agencies collect personal information directly from the individual, and that separately agencies should tell the individual what information they are collecting and how they plan to use it, but there is currently a bit of a loophole where information collected from a third party doesn't require notification. Given that similar requirements have been introduced in the European Union (and several other jurisdictions trying to match EU standards), it stands to reason that such protections should be considered here in Aotearoa New Zealand too. Below is my written submission on the topic:

18 September 2022 

1. Thank you for the opportunity to provide feedback on these possible changes. I am a Research Fellow with Koi Tū: The Centre for Informed Futures at The University of Auckland, based in Wellington. My research area is in digital technologies and their impacts on society, particularly in terms of public sector use and privacy. The views in this submission are my own and may not reflect those of my employers.

Key Factors

2. Upholding the principle that individuals should have control over their personal information – where it is, who has it, and how it is used – leads logically to the conclusion that when personal information is transferred between agencies, the individual should be notified so that they can make informed and appropriate choices about their personal information. Therefore, I generally support the intent of the possible changes.

3. This is particularly important as the type of personal information being commonly collected becomes increasingly invasive (for example, analysis or insights derived about a person that speak to intangible aspects like personality rather than purely tangible characteristics like street addresses or phone numbers) and increasingly immutable (for example, biometrics that are unique and cannot be changed). In these cases, the negative impacts of personal information misuse or privacy breaches are greater than in the past, and there may be more reason for individuals to oppose their information or specific types of information ending up in someone else’s ownership or control without their knowledge.

4. The level of privacy harm that has accrued as a result of a lack of requirement for notification of indirect collection thus far is very difficult to quantify, because for the most part we simply do not know how much personal information has been indirectly collected. What we do know is that many modern business models (e.g. large tech companies generating personalised advertising) rely on transferring personal information between agencies for monetary value. Similarly, government agencies are increasingly transferring information about individuals in order to make better decisions and provide better services to individuals (e.g. through the IDI, or through digital identity systems), although there are more protections in place in the public sector. Notifying individuals each time their information is being transferred (with an opportunity to opt-out) may reduce the scale of those transfers, which is not necessarily a bad thing.

5. That our broader society has seemingly accepted business models and processes that rely on the transfer of personal information without notification of individuals is not a sufficient reason to oppose the need for such notification – the harm is still present and therefore it is appropriate to explore mechanisms to mitigate against that harm. An approach that introduces new compliance costs to mitigate that harm should be evaluated by balancing those costs against the harm that is mitigated, rather than accepting an argument that any compliance costs are unacceptable.

6. Our traditional conception of privacy focuses on the individual level, which can make it challenging to assess the level of harm from indirect collection of personal information. It is difficult to show that a tech company selling someone’s personal information to an advertising company produces enough monetary or emotional harm to justify taking regulatory or enforcement action. However, in developing this policy, the government should consider the level of harm at a collective level – transferring the personal information of many people between agencies can allow those agencies to make decisions that cause broader harm. A strong example is the Cambridge Analytica scandal – the individual users, whose information was shared with a political consulting firm when they thought it was only being used for academic purposes, did not suffer much harm at the individual level, but the way that the data was then used to influence over 200 elections in 68 countries is significantly more harmful to those democratic societies.

7. A notification approach is effectively an opt-out approach, where a notified individual has to then take action to stop the transfer of personal information or to request that information be deleted, rather than agencies needing to actively seek permission from the individual to opt-in and agree to the information transfer. This is already a compromise against best practice privacy principles but a necessary one for practical reasons, particularly where significant amounts of information are being transferred. This approach should not be compromised further in any proposed changes.

8. On notification fatigue, while this is a fair concern and a real user experience challenge, it is a weak argument to reject the possible changes. Firstly, in jurisdictions that already have notifications for indirect collection of personal information, I have not been able to find any reports or literature on notification fatigue being an issue based on empirical data, only theoretical arguments. Secondly, that the notifications will be coming from different agencies helps mitigate against notification fatigue, which is much more common when the notifications are from the same source and similar in style and content. With sufficient variation between agencies, this may help reduce the potential risk of notification fatigue. An argument against the possible changes on the grounds of notification fatigue should be based on an analysis of the quantity and frequency of notifications that individuals are likely to receive if such changes were implemented.

9. Maintaining adequacy, particularly with the European Union, is a critical competitive advantage for New Zealand. Adequacy is the status of being deemed to have an adequate level of data protection relative to another jurisdiction’s regulations and expectations, and allows data to flow between the jurisdictions more easily. Particularly where notification of indirect collection of personal information has already been implemented under the EU’s GDPR, and we can see that it is effective and working, it is important for New Zealand to keep up with international best practice. This should also be considered in the discussion around compliance costs, as the cost to New Zealand of not meeting international best practice may be greater than the cost of compliance to agencies.

10. I believe that, overall, it would likely be beneficial to give individuals stronger agency over their personal information through the notification of indirect collection.

Additional Considerations

11. Practically, we should consider the scenario where agencies may transfer personal information to each other without either agency having contact details for the individual. The legislation should consider this situation and whether or not an exception is required. Taking “reasonable steps” may be sufficient in the legislation to allow for scenarios where it is simply not possible to notify the individual. 

12. However, we should not overly rely on a “reasonable steps” standard in other situations. Over-reliance on a “reasonable steps” standard makes it difficult for both businesses and individuals to know whether the standard has been met. It creates a period of uncertainty where we will have to wait for relevant cases to be brought to the Office of the Privacy Commissioner or the courts before precedent for “reasonable steps” can be established. Such an approach should be used sparingly and for relatively specific parts of the legislation.

13. Policymakers should also consider the scenario where an agency collects information about a person from public sources. Just because personal information is publicly available does not mean that information is no longer personal, and should not mean that the individual has relinquished their rights to privacy – for example, an agency may collect phone numbers from phone books, or harvest information about people from social media networks. Where the personal information will fall under the ownership of a new agency that the individual may not have known about, then they should still be notified about that (acknowledging the exceptions in IPP2/IPP3). For example, the personal information may be combined with other sources already held by the agency, or the personal information may be collected in a public space (e.g. a photo of a person’s face) which becomes a biometric identifier for the individual – the individual should have a right to know how the personal information will be used.

14. The framing of the consultation places emphasis on the collecting agency, which is understandable given the structure of the Privacy Act and the earlier Information Privacy Principles. In terms of the obligations on the disclosing agency, as currently described in IPP11, policymakers should consider whether or not to add an obligation that the disclosing agency must be satisfied that the collecting agency has sufficient processes and controls to be able to uphold the Privacy Act. This could be similar to the provisions of IPP12, in that agencies cannot make disclosures overseas unless the agency believes on reasonable grounds that one of the exceptions applies. This would reduce the likelihood of information being indirectly collected by poor actors if they cannot demonstrate to the disclosing agency that they are responsible stewards of personal information.

15. It would be important to ensure that, in their interactions with individuals, disclosing agencies cannot contract out of notification requirements ahead of time, and that providing blanket statements would be insufficient. Essentially, agencies should not be able to just put in the Terms and Conditions that the agency may disclose the information to other unspecified agencies without notifying the individual. Firstly, the general approach of satisfying IPP3 through a Privacy Policy or Terms and Conditions on an agency website is weak for ensuring that individuals actually understand what is happening to their personal information. Secondly, individuals’ perceptions of the value of their personal information, and the risks that may be associated with sharing it, change over time and individuals should be given the opportunity to exert control over their personal information at the time that it is being disclosed.

Preferred form of proposed changes

16. My preferred mechanism for enacting these changes would be through amending IPP 11, such that a disclosing agency has to notify the individual concerned that their information has been disclosed to a third party. It would be preferable to strengthen the amendment such that, where possible, notification is provided before the information is disclosed with a minimum notice period, so that the individual has the opportunity to exercise a right to opt-out or request that information not be disclosed.

17. Other mechanisms that place the obligation on the collecting agency run the risk of the collecting agency being a poor actor and notification not being given, and it can be very difficult to ensure that information is deleted once they already have it. If there are no obligations on the disclosing agency, and the collecting agency is either unaware of their obligations or a poor actor, then we may remain with the status quo where no one other than those two agencies know that the information transfer has taken place. Furthermore, the disclosing agency may receive some form of monetary value in exchange for disclosing the information (e.g. selling information to an advertising agency), and therefore they may be more incentivised to ensure that they are meeting their regulatory requirements in order to not compromise their ongoing business model. It may also be easier for a disclosing agency to build the infrastructure to serve notifications if they are providing data to multiple agencies, rather than each of those collecting agencies having to build their own systems.

18. If it is decided that the proposed changes are made through IPP3 or otherwise place the onus on the collecting agency to provide notification, then it may still be helpful to specify in IPP11 that, for particular types of sensitive personal information (e.g. biometrics), the disclosing agency has an obligation to also notify the individual of indirect collection. The development of a sensitivity classification may be useful for other sections of the Privacy Act too, and is discussed further in paras 22-23.

19. Separately, it would be beneficial to add to IPP2 that where personal information is collected from public sources, the collecting agency must make reasonable efforts to notify the individual that their personal information has been collected and what that information may be used for.

20. Additionally, there may need to be some consideration for how any possible changes to the Privacy Act 2020 may interact with s11, particularly where agencies argue that a discloser-collector relationship falls under this section. The threshold for use needs to be carefully considered in this context.

Applicability to individuals overseas vs domestically

21. While it is understandable that policymakers may want to limit the impact of changes by only requiring notification of indirect collection of information for individuals overseas, that would stop individuals in New Zealand from benefitting from the stronger protections. If we accept that notification of indirect collection is a good thing, then ethically it should be made available to all individuals under the jurisdiction of the Act. Harm can still accrue from the indirect collection of information within our domestic borders (perhaps most significantly when transferred between government agencies), and so these protections should apply to agencies operating exclusively domestically too.

22. If some form of reduction in scope is considered necessary to mitigate the risks of introducing the possible changes, then it may be better to base that on the sensitivity of the personal information through a risk-based approach rather than on jurisdiction. For example, the UK Information Commissioner’s Office maintains a list of “Examples of processing likely to result in high risk” based on both the type of information and the applications. A similar approach was taken in the European Union’s development of AI regulation, which separated use cases into unacceptable risk, high-risk, and limited or minimal risk categories. 

23. As the harms are more serious where the personal information being disclosed is more sensitive (e.g. biometrics), it would be appropriate to still require notification/action in these circumstances. This approach could even allow for banning unconsented and unnotified transfer of personal information for particular very high-risk applications (e.g. real-time biometric identification systems or social scoring), requiring agencies to collect the information from individuals directly. While this approach may require more maintenance than a purely principle-based approach (and therefore should be maintained by the Office of the Privacy Commissioner rather than through legislation), it would also offer more flexibility to allow lower risk indirect collection to occur without notification.

Thank you for considering this submission. I would be happy to engage in further dialogue about these issues in the future if that would be helpful to officials.

Tuesday 20 September 2022

More Zeros and Ones - The Algorithm Charter

After editing Shouting Zeros and Ones in 2020, I passed the mantle of editorship to the Pendergrast sisters, Anna and Kelly. Their edition, More Zeros and Ones: Digital Technology, Maintenance and Equity in Aotearoa New Zealand, is being added to bookstores around the country this month thanks to the support of the good folks at Bridget Williams Books. Since I wasn't editing, I got to contribute an actual Chapter relating to my research, so I wrote about the first year or so of the Algorithm Charter - He Tūtohi Hātepe mō Aotearoa. Here's a quick summary and taster of what I wrote about:

We've heard a lot about algorithms in recent years, whether they're calculating your insurance premiums, running our traffic lights, or deciding what content to show you on social media platforms. While algorithms have a broader meaning, we often think of pieces of software running on some computer somewhere, making decisions that affect our lives. It turns out the government has many scenarios where they would like to improve our lives, and algorithms can help them make better decisions, respond to changing situations faster, and allocate resources more efficiently and fairly.

However, there is also the potential for the government to misuse algorithms or use them poorly, whether intentionally or accidentally. The types of decisions that government makes are consequential – they can affect a lot of people very quickly, and the impacts can be significant for individuals. Trust in government is crucial for a strong society, yet mistakes in using algorithms and other new technologies can undermine that trust. This is why the government has also introduced an Algorithm Charter, which sets underlying principles and commitments that government agencies should consider when developing algorithms.

The Algorithm Charter, launched in July 2020, asks government agencies to assess six key areas: transparency, partnership and consistency with The Treaty of Waitangi, engaging with impacted communities, understanding limitations and bias, human rights and ethics, and retaining human oversight. It provides a very high-level framework to help policymakers evaluate whether or not they are doing the right things when it comes to using algorithms, and mitigating the risks.

However, the Charter is not without its pitfalls. Government agencies commit voluntarily to the Charter, and there are no enforcement mechanisms or centralised reporting to ensure that agencies are living up to the Charter. There is significant variation between agencies regarding how and when they apply the Charter. The Charter doesn't really engage with Māori, including in terms of Māori data sovereignty. And agencies are now finding edge cases where the Charter and its associated risk matrix don't really apply – for example, where the Charter asks agencies to evaluate negative impact as "unintended harms for New Zealanders", which leaves out harms to people overseas or the environment or other systems.

These shortcomings are, of course, unintentional, and sometimes we have to operationalise something to find out why it might not work as fully as intended. StatsNZ and the Government Chief Data Steward (GCDS) have recently released their one-year review of the Charter, which reflects on the experiences of government agencies and subject matter experts. It’s clear that the Charter has had a positive overall impact in its first two years, but there is still so much more potential. Greater coordination and sharing of best practice between agencies would significantly lift the bar, with better templates and accountability. Maintaining a high standard and repeatedly demonstrating good behaviour helps to build trust between the government and its citizens – we still have a long way to go.

I discuss these topics and issues in much more detail in the book! I've also read most of the other chapters and they are really great too, including a couple of really interesting case studies of how we are making technology work for us in an Aotearoa New Zealand context. You can order a physical copy of the book or an ebook at https://www.bwb.co.nz/books/more-zeros-and-ones/