Friday 17 May 2019

Thoughts on the Harmful Digital Communications Act

With increasing pressure on social media companies, it's worth looking at the Harmful Digital Communications Act in NZ again. The HDCA came into force in 2015, aiming to "deter, prevent, and mitigate harm" caused to individuals by digital communications and to provide victims with a means of redress. The Act introduced both civil penalties and criminal offences, giving victims different pathways to recourse depending on the type of harm experienced. Netsafe was appointed to operate the civil regime and is tasked with receiving and resolving complaints in the first instance (analogous to the Office of the Privacy Commissioner under the Privacy Act). Netsafe also assists with the removal of content from the internet where possible, working with content hosts in New Zealand. Police are responsible for the criminal regime, which covers the most serious cases.

One of the main aims of the legislation was to produce social change, making online spaces safer for New Zealanders to participate in. There was particular focus on cyber-bullying and the impacts of online harm on young people, especially as these contribute to our growing mental health crisis. The procedures were also designed to be accessible for victims, both in terms of speed and cost. While there were concerns at the time over the chilling and suppressive effect the legislation could have on freedom of expression, many MPs said that the pressing harms being perpetrated online far outweighed those concerns; arguably, the Act has not had any tangible effect on freedom of expression in the subsequent years. The legislation has also become clearer over time as case law has built up, providing some clarity around the tests and thresholds for harm.

While the legislation is relatively young, this may be an opportunity to highlight the challenges faced by the Act going into the future, and to make adjustments or corrections to minimise harm sooner.

a) In the three subsequent years, 18, 85, and 107 people respectively were charged with offences under the HDCA, and 12, 42, and 53 were convicted. The majority of cases have related to revenge pornography, while incidents of hate speech, racial harassment, and incitement to commit suicide have largely gone unpursued. (Interestingly, 65-75% of people charged with HDCA offences have pleaded guilty. Unsurprisingly, 90% of those charged have been men.)

While the principle of proportionality is important, the lack of consequences for harmful digital communications at the lower end of the scale means that the Act has little deterrent effect, and arguably has not shifted societal attitudes or behaviours in this area. The Act requires that digital communications be directed at an individual to be deemed harmful, but is there scope to amend the Act to cover other cases where groups of people or sectors of society are suffering harm? Arguably, more harm overall is being perpetrated in cases where many people are affected at once.

b) The need to demonstrate harm has proven to be a difficult barrier, with a number of cases dismissed simply because the prosecution could not show that there was sufficient harm, especially when harm is defined ambiguously and subject to interpretation by judges. What further guidance needs to be given about establishing harm, and what recourse can there be for legitimate victims who do suffer harm but may not meet the statutory threshold? Some lawyers have commented that what was initially unclear has become clearer over time, but at what social and personal cost did this clarification develop?

c) One of the aims is to provide "quick and efficient" redress, but how fast is the Netsafe/District Court process in reality? What incongruities lie between the fast, digital nature of the harm and the slow, physical nature of the recourse process, and could technology be better used to accelerate these processes?

d) Enforcement has struggled against the prevalence of anonymous perpetrators, leaving victims without recourse. The District Court can order parties to release the identity of the source of an anonymous communication, but how often, and how effectively, is this power used? Sometimes identification is technically impossible (e.g. when several people share an internet connection and the individual cannot be confirmed). Is this something technology can help with?

e) Amongst these issues, it may also be worth re-investigating the notion of safe harbour – should content hosts be protected when they fail to moderate harmful digital communications? Currently, as long as they respond to complaints within 48 hours, they can get away with the argument of "we are just the messenger and are not responsible for the content". Can we enforce safe harbour requirements on platforms operated by companies overseas? Weak enforceability (or in some cases, social media companies belligerently ignoring the law and saying it doesn't apply to them) challenges any notion of big multinational companies taking the law seriously. Do we need to be braver and impose stronger penalties?

So we come back to the purpose of the Act. Hon Amy Adams (Minister of Justice at the time) said "this bill will help stop cyber-bullies and reduce the devastating impacts their actions can have." Has it done so, sufficiently, over the course of 3+ years? The HDCA is currently under review by the Ministry of Justice, so hopefully some of these issues are already being looked at. But the review doesn't seem to have a public consultation element, so there isn't much visibility for the rest of us into what's happening.
