Tuesday 23 October 2018

Submission to RCEP Negotiators on Algorithmic Bias and Discrimination

In October 2018, I was asked to give a short submission to the Regional Comprehensive Economic Partnership (RCEP) negotiators on algorithmic bias and discrimination (during their Round 24 meeting in Auckland). RCEP is a trade agreement between the ASEAN countries and Australia, China, India, Japan, Korea, and New Zealand. Of particular interest to me were the provisions around source code that were likely to be copied from the CPTPP.

Thank you for having me today to participate in this discussion. I am a Computer Systems Engineer at the University of Auckland, using and developing artificial intelligence and machine learning algorithms for image processing and computer vision. In other words, I write software code. I’d like to speak today about algorithmic bias and discrimination, and why access to source code matters. This is important for the e-commerce chapter, but also has implications for intellectual property.

We live in a world where software is not perfect. The motto and attitude of many companies is to "move fast and break things". In software development, encountering errors and bugs is the norm, and updates and patches are expected to correct them after products have been released. We don't trust civil engineers to build bridges or buildings this way, yet we increasingly rely on software for so many parts of our lives. Algorithms can decide who is eligible for a loan, who gets prioritised for health services, or even which children might be removed from their homes by social workers. We need to be able to find errors and to correct them, especially when the real-world stakes are high.

With the rise of artificial intelligence, we have also seen an increase in a particular type of error: algorithmic bias and discrimination. There have been a number of well-publicised cases in recent years. Computer vision algorithms for facial detection and recognition have historically had higher error rates for people with darker skin tones. An algorithm for assessing the risk of re-offending for convicted criminals in the US was found to be biased against African Americans, leading to harsher sentences. Earlier this year, Amazon decided to deactivate a system that screened potential job candidates when they realised that it was biased against female applicants. These effects are not intentional, but sometimes we just get things wrong.

The MIT Technology Review wrote last year that bias in artificial intelligence is a bigger danger to society than automated weapons systems or killer robots. There is a growing awareness that algorithmic bias is a problem, and its impacts are large because of how pervasive software is. Software spreads very quickly, and negative effects can lie dormant for a long time before they are discovered.

Without going into too much technical detail, there are two critical sources of algorithmic bias (a minimal sketch follows this list):
- Poor data that either does not reflect the real world, or permanently encodes existing biases
- Poor algorithm choices, or systems that artificially constrain choices, for example by selecting the wrong features or the wrong output classes, or by optimising towards specific goals while ignoring others
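
To make this slightly more concrete, here is a minimal, hypothetical sketch in Python. The data is synthetic and the variable names are invented; it does not describe any real system. It shows how the two problems combine: historical decisions that were harsher on one group become the training labels, and group membership leaks in as a feature, so the trained model reproduces the bias.

    # Minimal, hypothetical sketch: synthetic data and invented names only,
    # not a description of any real system. Shows how biased historical labels
    # plus a group proxy feature produce a model with unequal error rates.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000
    group = rng.integers(0, 2, size=n)          # 0 = majority, 1 = minority
    skill = rng.normal(0, 1, size=n)            # the attribute we actually care about
    truly_qualified = (skill > 0).astype(int)   # ground truth we would like to predict

    # Historical decisions applied a harsher threshold to group 1; those
    # decisions become the training labels, so the bias is baked into the data.
    historical_decision = ((skill - 0.8 * group) > 0).astype(int)

    # Feature choice matters too: group membership is included as a feature,
    # so the model can simply learn to penalise group 1.
    X = np.column_stack([skill + rng.normal(0, 0.5, size=n), group])
    model = LogisticRegression().fit(X, historical_decision)
    pred = model.predict(X)

    for g in (0, 1):
        qualified = (group == g) & (truly_qualified == 1)
        false_negative_rate = np.mean(pred[qualified] == 0)  # qualified people wrongly rejected
        print(f"group {g}: false-negative rate = {false_negative_rate:.1%}")

Running something like this shows a markedly higher false-negative rate for the group that was disadvantaged in the historical data, even though nothing in the code "intends" to discriminate.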

In both cases, there is often no way for an end user to confirm that something is wrong. We say that these systems are opaque, because we cannot see into how these algorithms work. Most research into discovering biased algorithms requires population-level data in order to reverse engineer the system, often after the system has already been deployed and harm has accrued. It is the role of governments to protect their populations from dangers such as these. Many governments currently do not know how to deal with this, and the black-box nature of many advanced algorithms makes it difficult.
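
As an illustration of what that reverse engineering can look like, here is a minimal sketch of an outcomes-based audit, assuming the auditor only has observed decisions grouped by demographic. The group names, outcomes, and the four-fifths threshold are illustrative assumptions, not a reference to any particular study.

    # Minimal sketch of an outcomes-based audit: with only population-level
    # decisions (and no access to source code), compare selection rates across
    # groups. The data and threshold below are purely illustrative.
    from collections import Counter

    observed = [
        ("group_a", "approved"), ("group_a", "approved"), ("group_a", "declined"),
        ("group_b", "approved"), ("group_b", "declined"), ("group_b", "declined"),
        # ...in practice, thousands of outcomes collected after deployment
    ]

    totals, approvals = Counter(), Counter()
    for group, outcome in observed:
        totals[group] += 1
        if outcome == "approved":
            approvals[group] += 1

    rates = {g: approvals[g] / totals[g] for g in totals}
    impact_ratio = min(rates.values()) / max(rates.values())
    print(rates)
    # A common heuristic (the "four-fifths rule") flags ratios below 0.8.
    print(f"disparate impact ratio: {impact_ratio:.2f}")

The point is that an audit like this can only be run once enough people have already been affected; direct access to source code and training data would let regulators catch these problems earlier.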

It is therefore of concern that trade agreements may stifle this work by putting in place restrictions against inspecting source code. Such restrictions take a powerful tool away from regulators, and leave engineers and developers free to make mistakes with real-world consequences.

As an example of how trade agreements have affected this, Article 14.17 of the CPTPP specifies that "no party shall require the transfer of, or access to, source code of software." I can understand why companies want this, to help protect their intellectual property rights. But we may have to decide which rights are more important: a company's property rights, or the public's right not to be subject to mistakes, errors, biases, or discrimination that can have unforeseen and long-lasting impacts? In other words, the public's right to safety.

Paragraph 2 clarifies that this restriction applies only to "mass-market software", and that software used for critical infrastructure is exempted. Presumably this is an acknowledgement that software can have errors, and that in critical situations regulators must have the ability to inspect source code to protect people. This raises the question: what counts as critical infrastructure, and what about everything else that still has a strong impact on people's lives?

Algorithms don't just affect aeroplanes or nuclear power plants. We're talking about scheduling algorithms that control shipping operations and decide which goods go to which places at what times; social media algorithms that influence our democratic processes; resource allocation algorithms that decide who gets a life-saving organ transplant. Why are we locking the door on our ability to reduce real harm? Where software is being imported across territorial boundaries, regulators need the opportunity to check for algorithmic bias and discrimination in order to protect their populations. Please do not simply copy these articles from the CPTPP; more recent trade agreements such as NAFTA and EUFTA have already recognised that this was a mistake, and have tried to correct it with more exceptions. A high-quality and modern trade agreement must allow us to manage and deal with the risks and harms of algorithmic bias. Thank you very much for your time.

Protecting Privacy Rights with Alternative/Progressive Trade Agreements

As part of the Alternative and Progressive Trade Hui in 2018, I was asked to speak for a few minutes about privacy rights in the digital age, and how they can be influenced by international trade agreements.

Q: Privacy and controlling the use of our personal information for commercial purposes is increasingly at risk in the digital world. It is changing very rapidly, with the Facebooks and Googles of the world increasingly difficult to regulate and control. Their interests are also reflected in the trade and investment agreements, particularly in e-commerce chapters. How do you think this industry will develop over the next decade or so, and how would you see international agreements best structured in the face of this constant change in order to ensure people’s privacy and control of their lives is protected?

A: A lot of the threats to privacy in the coming years are enabled by advances in AI, which allow us to process a lot more data more quickly, while also doing so in a way that is opaque to humans. Our rights were not designed with these types of automated capabilities in mind.

Trade is not just about physical goods! Data and information have value and are now commoditised; privacy is what helps us keep that value to ourselves and maintain ownership. There has been an erosion of rights as commercial entities have got in on the act: we can't think of surveillance as being an exclusively state activity, and we need to understand how corporations are trading in and using our data.

Privacy seems to be one area where exposing the downsides of large-scale data collection and the trade of that data can generate a lot of attention - e.g. the NSA, Cambridge Analytica and Facebook, etc. But after each breach, we focus on individual responsibility and individual actions: delete Facebook, don't use social media, and so on. By and large, there are very few nation-state actions or responses.

Governments simply do not know what is out there, and lawmakers are unaware of both the risks and the opportunities to use technology to protect privacy. This is one area where states have been largely reactionary. The current Privacy Bill has been characterised as fit for 2013; it will be outdated upon arrival. There is a reliance on lobbyists in this space, funded by the types of companies that say "move fast and break things". Privacy rights are sometimes viewed as antithetical to capitalism, because they get in the way of doing business. More companies are wary of this now, but without regulation there may not be sufficient incentive for companies to make privacy a priority and actually protect people's data. This influences our trade agreements, for example through demands that source code be kept secret in order to protect intellectual property. Strong privacy legislation can be seen as a trade barrier, and so it gets traded away in exchange for economic benefit.

At the same time, Europe is exporting their privacy standards with the GDPR - privacy is one area with contagious legislation where states often copy each other. In some ways this is good, if it means that everyone is adopting good protections. The GDPR led to a massive scramble of companies rushing to get themselves compliant - not because it was impossible before, but because they didn't need to before. So a progressive trade agreement could lift the standards in this area for everyone - it requires leadership from a state, such as we've seen in the EU. So while our privacy and our data can be at risk through trade agreements, there can also be opportunities for those trade agreements to strengthen privacy protections - it depends on how much we can convince governments to prioritise it. New Zealand can be a leader in this space and say that it’s important to us.

Trade agreements can set performance standards for how trade is conducted, for example around cross-border data transfers, which are covered by the TPP, RCEP, EUFTA, and others. It may seem odd to think about data transfers as international trade, but there is an exchange of value there. There are some existing standards, but we could go much further: introduce stronger property rights around data, particularly around how multinationals obtain, use, and trade our data, and make sure that we can own our own data and protect it. The GDPR is an example of how this can be achieved. New Zealand can make privacy a priority, and it should really be a priority for all trading nations.

[But in 30 years time none of this might matter anyway, as we head towards more complex AIs that cannot be understood or inspected by a human, which may process and trade our data in ways that we cannot foresee. How we deal with that as AI becomes more pervasive and harder to control is a different but also critical discussion.]