Tuesday, 10 August 2021

Remarks at the Dialogue on Autonomous Weapons and Human Control

Mary Wareham of Human Rights Watch and the Campaign to Stop Killer Robots organised an event with the Ministry of Foreign Affairs and Trade at Parliament to discuss New Zealand's role in the disarmament of autonomous weapons. I was somewhat reassured by Minister Phil Twyford's ongoing calls for legally binding rules at an international level, but also agreed with Edwina Hughes' calls for domestic legislation as an easy first step before having to deal with the foibles of multilateralism. I gave a short speech with my Koi Tū hat on.


Tēnā koutou katoa, ngā mihi nui mō te kōrero. I’d like to thank Mary Wareham for the invitation to speak to you all today on this important topic. Mary has been a global leader in this space for many years, and it’s great to have her home in NZ. I’ve been into technology since I was at primary school, when I entered a website design competition at the age of nine. When I was finishing high school, I remember telling the school librarian that I was going to study engineering at university and build robots. The librarian said “That’s great! But please don’t end up building robots for the military.” 

I remember not really knowing what to say in response to that. At that point in my life, the notion of robots being used in armed warfare was largely limited to video games and science fiction movies. Killer robots were cool, because they were giant mechs with heavy armour plating that used big lasers and could fly and had dispassionate personalities that could drop pithy one-liners. I had no idea that killer robots would in fact be quadcopter drones dive-bombing retreating soldiers in Libya.

I didn’t end up building robots for the military. Instead, I spent years training as a computer engineer developing AI systems, and now I’m just an academic who writes and talks about the societal impacts of digital technologies. There isn’t a lot of research about the ethics of killer robots being done in New Zealand. Discussions about autonomous weapons can feel distant from us at the bottom of the world. But we may be more involved than we think.

The nature of modern technology development means that no one develops a whole thing by themselves anymore. Everything that we do has been built upon the pieces made by developers and engineers before us. In economics, Adam Smith called it the division of labour; in engineering we call it abstraction, reuse, and developing with off-the-shelf components. It means that we can save a lot of time and effort, avoiding mistakes by not reinventing the wheel. But it also means that if we make something, it can be very difficult to control where it might end up. We just don’t know how someone might use something that we built. Which means that we don’t know if a piece of software written to control drones in New Zealand might end up in a device used to kill a person halfway around the world. Even if we pledge not to participate in or support the development of lethal autonomous weapons, we cannot guarantee that our code or our electronics or our hardware won’t have a hand in ending a life.

I can’t speak on behalf of the entire tech community, but I know that many of my colleagues are increasingly worried about the social and ethical implications of the tools and systems that we develop. In a technology space where the ethics are often murky, where we often have debates about free speech and safe harbour for platforms and the discriminatory impacts of automation and so on, there really should be no question about autonomous weapons. When it comes to the decision to end a person's life, we cannot allow some lines of code to take that humanity away from us.

I note that the Minister has spoken about the future impact of autonomous weapons, but autonomous weapons have been technically feasible for a long time. Weapon systems can be programmed to select targets and apply force without human intervention, and have existed for years even if they haven’t necessarily been operationalised. It is only our ethics thus far that have stopped the broader deployment of these weapons, and that is not a sufficient barrier to make us feel comfortable that these weapons won’t be used. We are already playing catch-up.

A legally binding instrument would go a long way. We’ve already collectively decided as a species that there are some weapons that are just too horrible to use. I think we all know what the problems are with killer robots; we can move on from showing concern and relitigating definitions and get onto debating the solutions. It boggles my mind that we spent so long debating the definition of killer robots. One commonly used example is that land mines could also be considered autonomous weapons, because there is no human decision to detonate, to which I’d say: we should ban land mines too!

And New Zealand has a role to play as a leader on this issue. We have an impact on the global digital technology space, whether that’s through the Christchurch Call or through advancing indigenous data sovereignty. We are in a position to take a strong stance on autonomous weapons, to draw the support of those sitting on the fence, to help protect all humans everywhere in the world from this impassive evil. A national strategy that leads to international engagement on autonomous weapons is critical, because we have both an opportunity and an imperative to ensure that killer robots stay in the fiction part of science fiction. I’m sure that would make my school librarian feel more comfortable at least. Ngā mihi nui mō te whakarongo pīkari, thank you. 
