Thursday 17 October 2019

Submission on the Government Draft Algorithm Charter

This is a submission responding to a request for feedback from StatsNZ, available here: https://data.govt.nz/use-data/analyse-data/government-algorithm-transparency-and-accountability (and the Charter itself is here: https://data.govt.nz/assets/Uploads/Draft-Algorithm-Charter-for-consultation.pdf).

Thank you for the opportunity to provide feedback on the New Zealand Government Algorithm Charter. Firstly, it is important to say that the development of this draft represents a positive step forward, and I hope that all government agencies will take it seriously and incorporate it into their operational practice.

I have a few suggestions that could help strengthen the Charter:

1. While punitive measures may not be necessary, some form of enforcement or monitoring should be implemented alongside the Charter to ensure that the principles are being upheld. An annual algorithm scorecard, checklist, or similar tool could help indicate which agencies are successfully or unsuccessfully upholding the Charter.

2. I am encouraged by the third bullet point, which requires communities, particularly minority and marginalised communities, to be consulted about the use of algorithms where it affects them. However, it is a little worrying that “as appropriate” has been included without an explanation of what appropriateness means, leaving the phrase open to different interpretations. I assume that in this case it refers to consulting with the appropriate groups of people subject to a particular algorithm, rather than public sector employees deciding whether it is appropriate to consult at all. In my opinion, it would be better to remove “as appropriate” to avoid the potential for that misunderstanding.

3. It would also be helpful to require active consultation – not just desk research or one-way submissions, but processes that require government agencies to go out into communities and talk to people, in person, about their perspectives on these algorithms. I appreciate that this may be costly, but it is an important step towards establishing social licence and helping people understand the choices and impacts.

4. As part of the fifth bullet point about publishing how data are collected and stored, it would be helpful to also include a commitment to have systems/procedures in place that allow people to see, correct, and remove their data from any system (if it is not held anonymously). These are principles in the Privacy Act already, but they need reinforcement to ensure that the appropriate functionality is built in and can be activated when needed.
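
To make this concrete, the sketch below is a minimal, hypothetical illustration in Python (not any agency’s actual system) of the kind of see/correct/remove functionality that would need to be built in; a real implementation would also need authentication, auditing, and durable storage.

    # A minimal sketch of see/correct/remove functionality, assuming a
    # hypothetical in-memory store keyed by person; all names are illustrative.
    class PersonalDataStore:
        def __init__(self):
            self._records = {}  # person_id -> {field: value}

        def view(self, person_id):
            """Let a person see the data held about them."""
            return dict(self._records.get(person_id, {}))

        def correct(self, person_id, field, value):
            """Let a person correct a field in their record."""
            self._records.setdefault(person_id, {})[field] = value

        def remove(self, person_id):
            """Let a person remove their data from the system."""
            self._records.pop(person_id, None)

    store = PersonalDataStore()
    store.correct("nz-123", "address", "1 Example St")
    print(store.view("nz-123"))   # {'address': '1 Example St'}
    store.remove("nz-123")
    print(store.view("nz-123"))   # {}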

5. The eighth bullet point indicates that the implementation and operation of algorithms will be monitored for unintended consequences such as bias. Ongoing monitoring is critical and this is a positive commitment. However, agencies should develop an understanding of potential unintended consequences before algorithms are deployed as well. In particular, it is important to understand potential errors, how often they may occur, and the consequences of these errors. Similar to a Privacy Impact Assessment, an Algorithm Impact Assessment would more broadly check for possible negative impacts. Appropriate checks and balances should be implemented to ensure that negative consequences do not happen silently/invisibly, and that there is sufficient ability for humans to intervene if an algorithm has made an error. Just like humans, algorithms are extremely rarely 100% accurate, and so the potential for error needs to be properly understood before implementation.
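
As an illustration of what understanding errors before deployment could involve, here is a minimal sketch that measures false positive and false negative rates on a held-out test set; the data and the binary decision framing are hypothetical, not drawn from any actual government algorithm.

    # A minimal sketch of a pre-deployment error check, assuming a
    # hypothetical binary decision algorithm whose predictions and true
    # outcomes are available for a test set.
    from collections import Counter

    def error_rates(predictions, outcomes):
        """Return the rates of false positives, false negatives, and
        correct decisions in paired lists of booleans."""
        counts = Counter()
        for predicted, actual in zip(predictions, outcomes):
            if predicted and not actual:
                counts["false_positive"] += 1
            elif actual and not predicted:
                counts["false_negative"] += 1
            else:
                counts["correct"] += 1
        total = len(predictions)
        return {kind: n / total for kind, n in counts.items()}

    # Hypothetical test data: True = flagged by the algorithm / true outcome.
    predictions = [True, False, True, True, False, False, True, False]
    outcomes    = [True, False, False, True, False, True, True, False]
    print(error_rates(predictions, outcomes))
    # {'correct': 0.75, 'false_positive': 0.125, 'false_negative': 0.125}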

6. Algorithms increasingly rely on appropriate models being developed, which in turn rely on sufficient data that is representative of the people who will be subject to the algorithm. In my opinion, it would be useful to explicitly acknowledge that for some people or groups of people, there may simply be no data available to represent them, which leads to models that do not accurately reflect the real world (one form of bias) and therefore to algorithms making errors. Government agencies need to understand which people are represented in their models, so that appropriate decisions about the use of those algorithms can be made. For example, it may be appropriate for some people to not be subject to an algorithm at all, simply because the underlying model does not represent them and the algorithm will not work for them, with a manual process required instead. Recent migrants are an example of people who may be negatively impacted by the use of algorithms that rely on models that do not represent them.
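
One simple way to act on this point is to check how well each group is represented in the training data before deployment. The sketch below is a minimal illustration with hypothetical records and an assumed minimum-sample threshold; a real system would need a more careful statistical treatment.

    # A minimal sketch of a representation check, assuming hypothetical
    # training records that each carry a demographic attribute of interest.
    MIN_EXAMPLES = 100  # assumed threshold below which a group is flagged

    def underrepresented_groups(records, group_key, minimum=MIN_EXAMPLES):
        """Return groups with too few training examples to model reliably."""
        counts = {}
        for record in records:
            group = record[group_key]
            counts[group] = counts.get(group, 0) + 1
        return [group for group, n in counts.items() if n < minimum]

    # Hypothetical training data summary.
    records = (
        [{"residency": "citizen"}] * 5000
        + [{"residency": "permanent resident"}] * 800
        + [{"residency": "recent migrant"}] * 40
    )
    print(underrepresented_groups(records, "residency"))
    # ['recent migrant'] -> consider a manual process for this group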

7. Many new algorithms use artificial intelligence and machine learning methodologies, with increasingly complex systems that are hard for any human to understand. In my opinion, it would be helpful to include a bullet point that encourages government agencies to actively support algorithmic transparency and explainability, including through the use of data visualisation. This is different to offering technical information about the algorithms and the data, and would encourage agencies to develop plain English explanations or interactive experiences that help people understand how the algorithms operate.
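
As a rough illustration, and assuming a simple model whose weighted factors can be inspected (which is not true of all machine learning systems), a plain English explanation could be generated from a model’s most influential inputs:

    # A minimal sketch of generating a plain English explanation, assuming
    # a hypothetical decision model whose factor weights are inspectable.
    def explain(weights, top_n=2):
        """Describe the most influential factors in plain English."""
        ranked = sorted(weights.items(), key=lambda kv: abs(kv[1]), reverse=True)
        parts = []
        for factor, weight in ranked[:top_n]:
            direction = "increased" if weight > 0 else "decreased"
            parts.append(f"your {factor} {direction} the score")
        return "In this decision, " + " and ".join(parts) + "."

    # Hypothetical factor weights from a fitted model.
    weights = {"income": 0.7, "age": -0.1, "years at address": 0.3}
    print(explain(weights))
    # In this decision, your income increased the score and your years at
    # address increased the score.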

8. In my opinion, nothing in this Charter stifles innovation, and agencies should be discouraged from treating innovation and transparency as opposite ends of a trade-off. The Charter encourages government agencies to follow what is already best practice. The transparency it encourages not only protects people’s rights and increases confidence, but can also help improve the quality of the algorithms and models, as well as build social licence with the people who may be subject to these algorithms. Government algorithms can have wide-reaching and long-lasting impacts, and so it is only appropriate to have principles that ensure high-quality decisions are being made.

Thank you again for the opportunity to make a submission, and I view it as positive that the government is seeking broader input on this important topic.

Ngā mihi nui,
Dr. Andrew Chen
Research Fellow, Centre for Science in Policy, Diplomacy, and Society (SciPoDS)
University of Auckland
