Thursday, 30 August 2018

Privacy and Camera-based Surveillance

This talk was prepared as part of Raising the Bar 2018, a series of talks organised by the University of Auckland to get research out into the public in different settings and contexts. A recording of the talk is also available here!

Introduction
Kia ora koutou, anei taku mihi ki a koutou kua tae mai nei. Thank you very much everyone for coming along tonight. Welcome to Raising the Bar! A big thank you to the University of Auckland for putting this all together. My name is Andrew, and I’m a PhD candidate specialising in Computer Systems Engineering, working in the area of practical and ethical video analytics systems. Video analytics is a relatively new term, so you might not have heard of it before, but it’s really all in the name. Video analytics is essentially where we take a video, and we analyse it. In reality, it’s often just a nicer-sounding term for camera-based surveillance, because when we analyse video, we’re almost always looking for particular objects or things, and in many cases those things are people.

The system that we’re developing at the University of Auckland is one where we can track people in real-time across multiple cameras, so that we can have these large camera networks and see how people move and use physical spaces. We need to use artificial intelligence and machine learning, embedded systems, big data, hardware/software co-design, the internet of things, and a bunch of other buzzwordy technologies together to achieve this end goal. My degree is fundamentally an engineering degree, so the primary focus is on the application and development of the system itself, but as I continued to work away at this video analytics system, I became more and more concerned about how these systems might actually be used. Something in the back of my mind felt a bit bad about helping to create these next-generation surveillance systems, because I knew that as with most technologies, these systems can be used for good or for bad, depending on who owns and controls the system.

And so, from Edward Snowden and the NSA, to Cambridge Analytica and Facebook, information about us seems constantly at risk. Technological advancements have meant that surveillance capabilities have accelerated beyond both our understanding of privacy and the regulations that are supposed to protect it, and it’s an area fraught with complexity, differences in context, and many subjective opinions, which makes it really, really hard to figure out what the right answer is.

So tonight, I’m going to try and break things up into a few sections, and depending on how tipsy everyone is, we might try some audience interaction. We’re going to start off with an introduction to the problem space, and what has changed recently that means we might have to talk about privacy and surveillance in new ways. Then we’ll discuss privacy generally and why it matters. We’ll meander through some of the technologies that enable surveillance in new ways. Then I’d like to share some results from recent University of Auckland research on public perceptions of privacy and surveillance cameras, and the factors that we think affect how people feel about these systems. Lastly, I’ll touch on how we might be able to use technology to help protect our privacy, and what might be needed to get that technology in place. Sound good?

Problem Context
Right now, you’re probably most familiar with camera surveillance systems in law enforcement and public safety contexts. Airport immigration environments, CCTV cameras in London, and facial recognition systems in China are just a few examples of where cameras have been deployed on a massive scale, automated with the help of artificial intelligence. That Chinese example is particularly interesting, because they plan to have full coverage of the entire country with facial recognition-based tracking by 2020, including surveillance in homes through smart TVs and smartphones. I’m not sure if they’ll get there based on the current state-of-the-art technology, but that’s just quibbling about the deadline – if it’s not 2020, it might be 2025. Still scary.

But as the costs of deploying large-scale camera networks continue to fall, and the abilities of artificial intelligence and computer vision continue to rise, we’re going to see more commercial entities utilise these types of systems to gain insights into how customers use and interact with physical space. You can call it business intelligence. For example, let’s say that we have a supermarket. There are a bunch of decisions about how you set up that supermarket, like how you structure the aisles and where you put the products, that are known to have strong impacts on consumer purchasing behaviour. Up until recently, most of those insights have come from stationing human market researchers with a clipboard and a pen, observing shoppers and taking notes manually. It’s a boring job, you can only get humans to observe people some of the time, and if the shoppers know that they’re being observed then they often end up changing their behaviour. Now imagine that we can set up a camera network that observes the shoppers all the time. It can count the number of people in the shop at any time, determine which aisles are most popular, and even tell you which paths customers are taking. There are commercially available systems in place right now that can detect if a checkout queue is getting too long, and send alerts to the manager that they need to open another checkout counter. Then you can collect statistics over time and start to answer higher-level questions like: which products should I put closest to the entrances and exits, how often do we need to restock certain aisles, how many staff do we need to schedule in on a weekly basis? And if you really wanted to, the technology is there to allow you to answer questions like: what items did loyal customer number 362 pick up today but not buy, so we can send them an e-mail with a special offer so they’ll buy it next time? Is this person who has just entered the supermarket at risk of shoplifting based on their criminal history? Do customers who look a certain way buy more stuff, and so should we get a shop assistant to go upsell to them? And there is the real potential for secondary uses of data as well – even if you are told that the surveillance camera system is there to collect shopper statistics, what if the supermarket then sells that data to the food manufacturers, or sells that data to a health insurance company, or lets the police have access to those camera feeds?
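
For the technically curious, here’s a minimal sketch of that queue-length alert, assuming OpenCV’s off-the-shelf HOG person detector – the camera feed URL, queue zone coordinates, and threshold are hypothetical placeholders, and a real commercial system would use a far more sophisticated detector, but the shape of the logic is the same:

```python
# Minimal sketch: alert when too many people are standing in a checkout queue.
# All names and values here are illustrative assumptions, not a real product.
import cv2

QUEUE_ZONE = (100, 200, 400, 480)  # hypothetical x1, y1, x2, y2 of the queue area
MAX_QUEUE = 5                      # alert when more than this many people queue

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
capture = cv2.VideoCapture("rtsp://camera.example/checkout")  # hypothetical feed

while True:
    ok, frame = capture.read()
    if not ok:
        break
    # Detect people; each detection is a bounding box (x, y, w, h)
    rects, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    x1, y1, x2, y2 = QUEUE_ZONE
    # Count detections whose centre point falls inside the queue zone
    in_queue = sum(1 for (x, y, w, h) in rects
                   if x1 <= x + w / 2 <= x2 and y1 <= y + h / 2 <= y2)
    if in_queue > MAX_QUEUE:
        print(f"ALERT: {in_queue} people queuing, open another checkout")
```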

I probably should have warned you at the beginning that this might be a bit of a scary talk. Unfortunately it just comes with the territory, that in order for me to talk about this stuff, I have to scare you all a little bit with examples of how this technology can be used. We often like to pretend that technology is value-neutral, in that technology itself is not inherently good or bad, but that’s not really true, because sometimes we can definitely foresee how that technology might be used. There is no shortage of science fiction out there featuring mass surveillance of the population, whether it’s Orwell’s 1984 or Minority Report. As technology developers, I believe that we have an obligation not to just ignore those dystopian futures and “do the thing and let the lawyers worry about the consequences later”. Where we can clearly foresee bad things happening, we should be doing something about it. I’ll come back to this later in the talk.

Back to the supermarket. What is it about this scenario that makes us feel so uneasy? There can be relatively benign uses of surveillance camera technology, such as letting managers know when the queues are getting too long, but there can also be much more controlling, more invasive uses. As I hinted at earlier, one of the big factors here is that the owner of this camera surveillance system is a commercial owner, rather than the state. In a traditional sense, whether it’s the police or the national intelligence agency, if they have a camera surveillance system, you’d hope that they’re using it for the public good, to keep people safe. You may have problems with that assumption, and that’s okay. But when it comes to corporations, their incentives are clearly different, and in some senses worse. They aren’t using this camera network for your safety – they’re using it to find ways to make more money. The benefit of having the surveillance network goes to the corporation, rather than to the general public who are being observed, whose privacy is being infringed upon. We hold corporations and the state to account in different ways, and the power relationship is different. Personally, I believe that this significantly changes the discussion about privacy and how we as a populace accept surveillance cameras. But, part of the problem is that we’re all used to surveillance cameras now – even if you don’t like them, you probably still walk down Queen St where there are CCTV cameras. You can’t really avoid them if you want to participate meaningfully in society – if you need to buy groceries, you’re going to do it whether there’s a camera there or not. In a sense, the use of surveillance cameras for security and safety has desensitised us to the use of cameras for less publicly beneficial purposes, which is why we need to be vigilant.

Why Privacy?
Okay, but before we get too much further down this line of thinking, we should take a step back and answer the most fundamental question. Why do we care about privacy? Why does it matter? [Audience answers]

Those are all good ideas, but we should think about it even more fundamentally than that. In the broadest sense, privacy is about keeping unknown information unknown. Another way to think about this is to ask what a breach of privacy might look like. Again in the broadest sense, a breach of privacy is where some unknown information about someone becomes known.

Now you might feel that this definition is hopelessly broad, and it is. There are many bits of information that we have no choice but to give away – if I stand here and you look at me, then your brain has automatically extracted a bunch of information about my ethnicity, hair colour, height, and so on that it maybe did not know before. There is a lot of information that we have to give away in order to function in society, such as our names, where we live, our phone numbers, etc.

And this is totally fine when we accept that privacy is not absolute, and it’s not binary. You don’t have all privacy or no privacy all of the time. Privacy can depend on what information is at risk, the specific use case in which our privacy is being protected or infringed, and other cultural or contextual factors like the type of government we have or the interface with which information is being collected. There are some situations that we could define as privacy breaches but that we are actually fine with, that we think are probably okay. Let’s try to make this more concrete with some examples. If the government put surveillance cameras in your home, you would probably feel uncomfortable with that and call that a breach of privacy. But if there is a natural disaster, and the government uses drones with cameras to survey property damage in your area, then you might be more okay with that. How okay you are might change depending on whether your government is more or less democratic, transparent, or trustworthy. Another example: a CCTV camera outside a McDonalds for public safety purposes will probably see you as you walk inside, and you might not care about it at all if you’re not a criminal. But that might change if you’re supposed to be on a diet, and your friend works at the company that monitors the surveillance cameras. I found out a few weeks ago that the CCTV cameras in central Wellington are actually monitored by a team of volunteers, not uniformed police officers, so the people behind the cameras probably operate at a different standard to what you might expect. Your feelings might change if data is being extracted from the video feed and then sold to health insurance companies who might raise your premiums if you go to McDonalds too often. It’s physically the same camera, but how it’s being used, who is in charge, and your own personal circumstances can have an impact on what privacy means.

This is all before we talk about the right to privacy. All of what we just discussed was just defining privacy, but that is separate to whether or not we actually have a right to privacy. So why is it important that everyone have a reasonable expectation of privacy? There are a lot of different arguments for why something as nebulous as privacy should be protected. It’s much easier to make a case for well-defined things, like a right to life or a right to access basic needs like water and air. But the right to privacy is sort of like the right to free speech – it’s really hard to define and there are a lot of exceptions. I think for me, my summary of many arguments is that the need for privacy is a response to imperfect trust. We know that there are bad people around, and we can’t perfectly trust everyone all the time to always act in our collective interest. There are many interpretations of what is morally and ethically right to do at any point in time. And information is power; information gives people control over others. So we need to keep some information to ourselves to prevent it from being abused or used against us, ultimately so that we can maintain some sense of feeling secure. And I think that feeling of security and being able to trust people in limited ways is inherent in allowing our society to function. If you go to a coffee shop and buy a cup of coffee, you inherently trust that the barista is going to keep up their end of the bargain and give you a cup of coffee and not orange juice or soup or poison. If you couldn’t trust them, you’d have to make your own coffee all the time, and that might be an added cost to you. But we can only trust each other so much. While you’re okay with trusting the barista to make you coffee, you probably wouldn’t just give them all your medical and financial records, because you don’t necessarily trust them to handle those in the context of your customer-barista relationship. You need to keep some things private from others in order to maintain the appropriate social boundaries that define your relationship, with an appropriate level of trust. Maybe it’d be nice if we could all be open books and give away all of our information and be public about everything, but we just know that we can’t do that. Scarily, the day after I drafted this, I saw some news that a pregnant lady in Canada was accidentally served cleaning fluid instead of a latte because the wrong tubes had been plugged into the coffee machine, so even trusting your barista to make coffee right might be going too far.

This notion of trust and confidence is captured in our privacy legislation. The new Privacy Bill, which is currently at Select Committee, has the explicit intention of “promoting people’s confidence that their personal information is secure and will be treated properly.” Note that merely keeping the information secure and treating it properly would be insufficient – it’s people’s confidence that is targeted by this Bill.

New Surveillance
But on the topic of legislation, one of the big problems with legislation is that it simply doesn’t keep up with the pace of technology. Here’s an example – NEC is a Japanese company that has been contracted to provide some person tracking services on Cuba Street in Wellington as part of the council’s smart city initiative. Most of us probably missed this story up here in Auckland, although there have been discussions within Auckland Council about doing the same thing up Queen St. The idea is that they want to know how many people are moving up and down a busy pedestrian route, at what times, and at what speeds, to inform pedestrian traffic management officers and so that the urban planners can have better information to work with when redesigning that space. Good intentions, good use of the technology. NEC proposed to do this in multiple ways, including the use of microphones and cameras. But it turns out that recording audio is illegal, because there are laws that prohibit the interception of private conversations, originally intended as a defence against espionage and police overreach, before video recording was cheap and ubiquitous. This is really old law in the Crimes Act that has been around for decades, and so NEC had to disable the microphones. You can’t make an audio recording of a conversation, but it seems to be legal for you to make a video recording of two people having a chat, and it’s okay for you to know that the conversation took place, which is in a sense metadata, which could be enough to infer all sorts of things, like that John Key supports John Banks enough to have a cup of tea with him. You could then watch the footage and try to figure out what they were saying by reading their lips or similar. Maybe you could even use an algorithm to do the lip reading. So back on Cuba St, the cameras are still running, collecting counts of people as they move throughout the space. There are privacy principles around a reasonable expectation of privacy, but even though we managed to make audio recordings of conversations illegal, video is, in a general sense, legal. The Office of the Privacy Commissioner has kept an eye on it for a long time, but the Privacy Act and Crimes Act have very different enforcement mechanisms. This is a demonstration of how the legislation might fall behind the development of technology, how the government has not protected the populace from a potential threat, and so our expectations and rights have eroded away. Oh and by the way, it turned out that the council wasn’t just interested in person counts and tracks – news articles reported that they also wanted to identify beggars and rough sleepers, and use the data to improve their efforts to get rid of homeless people on Cuba St. NEC also publicly said that they wanted to sell the data to tourism companies and retailers. So maybe not so well-intentioned after all... but apparently pretty legal. [Note: This system has recently been shut down and is no longer running in its original form]

And it’s not just that there’s a gap between technology and legislation, but that the technology is accelerating away. Think about what we might consider the status quo at a shop like Farmers. Most of the time if you see a surveillance camera in a shop, one of two things is happening behind the scenes. Either the footage is just being recorded and stored, and no one looks at it unless something bad happens, or there is a human security officer trying to watch ten camera feeds at once. With computer vision and big data architectures, a third option has become accessible to camera network owners – getting computers to automatically process the footage and then just generate statistics or alerts for human supervisors. The technology is at the point where we can go fast enough to process the footage in real-time, and this all enables surveillance networks to be implemented on much larger scales. Rather than needing one human to struggle to watch ten camera feeds at once, you can get a hundred computers to watch a hundred cameras in real-time.
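
To give a rough sense of how that scales, here’s a sketch where each camera feed gets its own worker process that reports back only statistics, never footage – the feed URLs, sampling window, and worker count are all hypothetical assumptions for the example:

```python
# Minimal sketch: many cameras watched in parallel, humans see only the numbers.
from multiprocessing import Pool

import cv2

CAMERA_FEEDS = [f"rtsp://store.example/cam{i}" for i in range(100)]  # hypothetical

def count_people(feed_url):
    """Sample one feed and return only a summary statistic."""
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
    capture = cv2.VideoCapture(feed_url)
    counts = []
    for _ in range(300):  # roughly ten seconds of a 30fps feed
        ok, frame = capture.read()
        if not ok:
            break
        rects, _ = hog.detectMultiScale(frame, winStride=(8, 8))
        counts.append(len(rects))  # keep the count, discard the frame
    capture.release()
    return {"feed": feed_url, "max_people": max(counts, default=0)}

if __name__ == "__main__":
    with Pool(processes=8) as pool:  # scale workers to the available hardware
        for report in pool.imap_unordered(count_people, CAMERA_FEEDS):
            print(report)  # the human supervisor sees statistics, not video
```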

The other thing we can do is combine data from multiple sources. You may have read about people painting their faces with weird shapes to try and fool face detection systems, or people advocating for wearing masks in public. Well, our research at the University of Auckland doesn’t use facial recognition, it recognises people based on the appearance of their clothing. Other research has shown that gait or walk recognition works, because people walk in slightly different ways. When that fails, surveillers can track your phone, sometimes through the cell network, but also by tracking the MAC address reported by its Wi-Fi or Bluetooth radios. All of this can be done now, and in some cases, is already commercially available. If any one of these systems fails, we can fuse together enough data from the other sources to still get a pretty good understanding of where people are.
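
As a toy illustration of how that fusion might work – the modality names and weights here are made up for the example, not a description of our actual system – each modality reports a confidence that a detection matches a known person, and any modality that fails simply drops out:

```python
# Toy sketch: combine match confidences from several surveillance modalities
# so that defeating any single one doesn't break tracking. Purely illustrative.
WEIGHTS = {"clothing": 0.5, "gait": 0.3, "device_mac": 0.2}  # assumed weights

def fused_match_score(scores):
    """Weighted average over whichever modalities produced a confidence."""
    available = {m: s for m, s in scores.items() if s is not None}
    if not available:
        return 0.0
    total_weight = sum(WEIGHTS[m] for m in available)
    return sum(WEIGHTS[m] * s for m, s in available.items()) / total_weight

# The face is masked and gait gives nothing, but clothing appearance and the
# phone's MAC address still agree, so the person is confidently re-identified:
print(fused_match_score({"clothing": 0.9, "gait": None, "device_mac": 0.8}))
```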

The natural response is to try and think of ways to defeat these systems as an adversary – change your clothing regularly, put your phone in flight mode when you’re not using it, take a class from the Ministry of Silly Walks. You could try to legislate against specific technologies too. But there will always be a way for technology to be developed further, to defeat those methods, and you just end up in an escalating war against technology, which probably doesn’t end well for the humans. We can stamp out one type of surveillance, and there will still be many others that can be used and exploited by unscrupulous system owners. The technology will evolve beyond the narrow definitions offered in the law. Instead, we need to ask ourselves some more principle-based questions – who actually wants these systems to exist, who is paying for the development and installation of these surveillance systems? And then we can ask a deeper question – why do they want these systems, and how do we as consumers or the electorate accept or reject these systems?

Public Perceptions
So when I told my supervisors that I wanted to do some research on privacy, their first response was “you’re doing an engineering degree though, so where are you gonna get the numbers?” So, to understand why people accept or reject surveillance camera networks, I ran a survey earlier this year on what drives public perceptions of privacy. With a survey, now we have numbers, so I can justify putting it in my thesis!

What we wanted to know was: what makes people feel more or less comfortable about the presence of surveillance cameras and how they’re used? We know that not all surveillance cameras are necessarily bad; you can have good intentions mixed with good purposes and good system owners, and maybe things will be okay; but it’s the people who are observed who should get to make a judgement of what good means. Privacy is not absolute, and the context makes a big difference, but what is it about that context that changes people’s perceptions? In contrast to previous research, our survey was designed to be a bit more subtle – rather than asking a series of questions like “do you like surveillance cameras if they are being used for public safety”, we used scenarios – short stories that provided a bit of detail about the context in which the surveillance cameras are being used.

Let’s do one as an example. The question that we asked the respondents was “how comfortable does this scenario make you feel?” Here’s the scenario: “The local traffic authority wants to be able to track cars and trucks on major city streets and highways in order to learn about traffic patterns. They propose to do this by placing surveillance cameras on top of every traffic light and at certain points on highways, and running an automated algorithm that can count the vehicles automatically. The footage would not be recorded, as the algorithm just produces a report with the number of vehicles on each road at certain times.” Hold up a hand: on a scale of 1 to 5, 1 being not comfortable at all, and 5 being very comfortable, how does that scenario make you feel? If there are any gaps in the story, you should fill them in yourself with your own personal context. [Generally okay, mostly comfortable? Why do you feel comfortable about it? Why do you feel uncomfortable about it?]

You might get a sense of why even though you might be pro or anti surveillance cameras generally, you can still have different feelings towards those cameras in different contexts, and that the context has different implications for different people.

Alright, so what did our research find? I don’t have much time to go into details, so I’ll skip the statistics and just get to the end results. The first headline result was that demographics don’t matter. There have historically been arguments that demographics play a strong role; for example, some research has shown that women tend to prefer surveillance cameras in a public safety context, because the cameras make them feel safer out in public, that they won’t get attacked. But this relationship simply didn’t appear in our data. Whether it was by age, binary gender, level of education, ethnicity, country of origin, country of current residence, or occupation, there were no statistical correlations with liking surveillance cameras more or less. Even though demographic groupings have long been held to influence or predict ideology and beliefs, in this case it really didn’t seem to matter. The conclusion may seem obvious – that what you believe is more than just your demographic characteristics.

Instead, we found that the context in which the surveillance camera is used was much, much more important. Even those who self-reported as hating surveillance cameras could find some merit in using cameras after a natural disaster to maintain public safety, while those who seemed to be totally apathetic to cameras were still wary of a pervasive national-level person tracking system controlled by an intelligence agency. We distilled this down to the five most significant factors, which gives us a sense of what causes people’s perceptions to change.

The first is access – who has access to the video feed or footage, including any secondary data that has been derived from the cameras. For example, people’s perceptions might be changed if only three trusted government officials are allowed to view the footage, versus any one of ten thousand employees of a large corporation that can then onsell collected statistics to other companies.

The second is human influence – is there a person-in-the-loop, is there someone watching the footage, or is it entirely processed by computers? Generally in a public safety context, people felt better if a human is watching or the footage is recorded, but in a commercial video analytics context, people felt better if a computer processes the footage and no human ever sees it.

The third is anonymity – are the observed people in the footage personally identifiable or anonymous? Might there be personally targeted actions as a result? Generally, respondents felt uncomfortable if they knew that being watched by the surveillance camera would lead directly to actions that affected them personally, like getting customised specials from the supermarket.

The fourth is data use – how will the data be used? Is the purpose in the public good, providing benefit to the observed? Are there secret secondary uses of the data? The scenario that made people the most uncomfortable wasn’t actually the one that involved an intelligence agency tracking every person in the country, which was a surprising result – it was actually the scenario where the supermarket tracked consumers and tried to sell them more stuff.

The last factor, and possibly the most important one, is trust – do we trust the owner of the surveillance camera network? Do we believe that they are competent? And this applies whether the owner is a government or a corporation; if there is a trust deficit where people simply do not believe what the owner is telling them, or do not believe that the owner has good intentions, then they will feel uncomfortable.

Privacy-affirming Architectures
Okay, so a lot of the talk so far has probably been a bit scary, and we should try to address the big question of “well, what are we going to do about all this?” The first step for us was to understand what makes surveillance camera networks more okay, more comfortable, less scary. And as the prevalence of corporately-owned camera networks continues to rise, it’s really important that we consider how we can systematically put the right protections in place.

And so we have two pathways to achieving this. The first is to regulate. Governments can pass laws that protect our privacy, by requiring system owners to play by rules such as banning unconsented secondary uses of data, requiring footage to be deleted within a set timeframe if unused, requiring opt-in rather than opt-out approaches to consent, requiring transparency or reliability tests for algorithmic processing of footage, and so on. In New Zealand, we’re lucky that we have principles-based privacy legislation that is very flexible and covers a lot of cases, but there are further rights that could be extended to the populace. Then the other tricky part is actually enforcing these laws, regularly auditing these surveillance systems to ensure that they do what they say they do, and that they are compliant, and punishing those that turn out to be infringing upon the privacy rights of individuals. The GDPR in the European Union is starting down that direction, but we’re still a while away here in New Zealand. The Office of the Privacy Commissioner just doesn’t have the tools it needs to really enforce our privacy legislation right now.

But governments are slow, and they simply cannot respond to the pace of technological development that creates these threats and dangers. Legislators often aren’t expert enough in these areas, and rely on outside information that is amplified by money, which means that the information that they get is more likely to be in the interests of malicious system owners than in the interests of the general population. And to make things worse, international trade agreements seem to be tying the hands of our legislators, forcing them to weaken privacy protections at the behest of corporate lobbyists in exchange for other economic benefits. For example, the EU-Japan economic partnership agreement has conflicts with the GDPR, and they’ve given themselves three years to sort it out – but in a battle between privacy rights and the economy, which one do you think is going to win?

The other approach is to protect privacy by design. Technology developers like myself should, or must, build privacy into their products, such that privacy becomes harder to infringe upon. So one of the features of the system that I’ve designed at the university is what we call the privacy-affirming architecture. In this system, we use smart cameras, where some processing can be done at the point of image capture, such that the footage does not actually need to be stored or transmitted. This means that in a commercial context where you just want the high-level statistics about how your supermarket is being used, the footage would never be seen by a human, it would all be automatically processed and you just get the anonymised statistics out at the other end. A system like this forces the system owners to respect the privacy of individuals, because even if they wanted to be voyeuristic and spy on their customers, they can’t. It takes away one tool from malicious system operators who could otherwise abuse that source of information. It doesn’t solve all of the privacy problems, but it’s a step towards protecting the privacy rights of individuals by default.
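
Here’s a minimal sketch of that privacy-affirming data flow, reusing the same off-the-shelf detector as the earlier sketches, with a hypothetical feed and reporting window – the key property is that each frame is discarded immediately after processing, and only anonymised aggregates ever leave the device:

```python
# Minimal sketch: on-camera processing where footage never leaves the device.
# The feed URL and one-minute reporting window are illustrative assumptions.
import time

import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
capture = cv2.VideoCapture("rtsp://camera.example/entrance")  # hypothetical

counts, window_end = [], time.time() + 60
while True:
    ok, frame = capture.read()
    if not ok:
        break
    rects, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    counts.append(len(rects))  # keep only the anonymous count
    del frame                  # the footage itself is never stored or sent
    if time.time() >= window_end:
        # Only this aggregate statistic would be transmitted off the camera
        print({"avg_people_per_frame": sum(counts) / max(len(counts), 1)})
        counts, window_end = [], time.time() + 60
```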

But the big counterargument against protecting privacy using technology is that rights protection is somewhat incompatible with capitalism: there are real costs associated with developing privacy-affirming or privacy-conscious camera systems, and system owners are not incentivised to pay for the development of these types of systems. They would rather order a system that doesn’t have the extra privacy-protecting stuff. Maybe you actually want to infringe upon people’s rights in order to improve your analytics and drive profits up. Maybe you actually want to infringe upon people’s rights to control your population better. No amount of rights-protecting technology is useful if the people responsible for implementing and owning these systems choose not to respect those rights and simply don’t buy that better technology.

So, there is kind of a third pathway, which is about education. A more educated populace – one that knows more about the way that these surveillance cameras are used, more about the threats and dangers of these systems, more about the potential for abuse by system owners, and more about how we could make things better with legislation or technology or otherwise – can exert power in other ways, whether that’s participating in the democratic process or using market forces to tell corporations how we feel. In the same way that governments and corporations can control people, they also depend on people, whose opinions and feelings eventually have to be respected.

And that’s why I want to do talks like these – the content can be a bit scary at times, but I’ve seen the academic papers that describe how large-scale surveillance systems can be practically achieved in the real world soon. Not in fifty years’ time or in twenty years’ time, but genuinely soon. I mean, I’ve contributed in some way towards creating one, even if my ethics have gotten in the way of me making a lot of money off it. So I want to get the word out that this technology is coming, and if we are too complacent about it and let surveillance happen to us, before we know it we might be living in one of those science non-fiction dystopias. So the next time that you see a surveillance camera, look into it and ask yourself – do I know who owns this camera? Do I know how this footage is being processed? Do I know what they’re going to do with the data? Did I meaningfully consent to being observed? Do I trust the owner to actually do what’s best for me? Am I getting something out of this camera being here, or is the benefit entirely for someone else? And most important of all, how does this all make me feel?

There is a quote that is often attributed to Thomas Jefferson, even though it turns out he never actually said it, but it makes a good point so I’ll close with it here anyway: “An informed citizenry is a vital requisite for our survival as a free people”. I hope that if we can all become more informed, then we can fight against poor uses of surveillance technology, and technology in general, and keep our freedoms. Thank you for coming along to talks like this, for continuing to learn, and for keeping your minds open to new voices. Ngā mihi nui ki a koutou katoa, thank you very much for taking the time this evening to listen to me.