1. It can be a time-consuming process for the Chief Censor to issue a decision on whether content is objectionable, which could delay and/or contribute to uncertainty about the removal of harmful content. The terrorist's manifesto was a lengthy, complex document, and the Chief Censor had to consider delaying his classification in order to meet the requirement to publish a written decision within five days of making a classification. It is proposed that, in clearly defined circumstances, the Chief Censor could make an interim classification decision without triggering the five-day limit, and then make a full written decision within twenty working days.
2. Legislation does not sufficiently support government agencies to direct and enforce the speedy removal of violent extremist content online by online content hosts. The Government had no clear legal backing to tell online content hosts that failing to remove copies of the terror attack livestream was illegal, and companies that complied with requests to remove the content did so under unclear legal requirements. It is proposed that the Department of Internal Affairs (DIA) could issue take-down notices for violent extremist content online.
3. Outdated legislation does not sufficiently cover contemporary content like livestreaming (broadcasting events over the internet in real time). The relevant legislation was last amended in 2005, and the current legal definition of "publication" does not include livestreaming; it was unclear whether the actual livestreaming of the attack was a criminal offence. It is proposed that livestreaming be included in the definition of a "publication" so that it can be classified as objectionable by the Chief Censor.
4. It is not clear to online content hosts what responsibilities they have under New Zealand law if they host violent extremist content on their platforms. It was unclear whether the Films, Videos, and Publications Classification Act (FVPCA) applied to overseas-based online platforms operating in the New Zealand market. It is proposed that penalties should apply to online content hosts for non-compliance with a take-down notice.
5. Two interacting pieces of legislation (the Harmful Digital Communications Act (HDCA) and the FVPCA) create confusion and potentially a loophole for companies hosting harmful content. Subsequent legal analysis identified that online content hosts could simply have notified posters of the terrorist's video, waited two days before taking it down, and been exempted from criminal liability under the FVPCA. It is proposed that the FVPCA be amended to clarify that the safe harbour provisions offered in the HDCA do not apply to objectionable content.
6. The government has no mechanism to filter sites that repeatedly fail to comply with requests to remove objectionable content. In the wake of the terror attacks, some ISPs raised concerns about the issue and continue to request greater support to identify what content should be blocked. Additionally, certain websites refused to comply with requests to take down the video of the attack. As these sites are based overseas, there was little ability to compel them to comply with New Zealand law and remove the video. It is proposed that DIA could consider establishing a web filter operating at the ISP level for violent extremist content. It is important to note that this is a proposal to explore the option further, rather than a proposal to implement a web filter immediately.
---
Submission
1. Thank you for the opportunity to provide a written submission on this important issue. I am a Research Fellow with the Centre for Science in Policy, Diplomacy, and Society at The University of Auckland, conducting research on digital transformation and its impacts on society, including digital ethics and public policy relating to digital technologies.
2. It is positive that the DIA is looking to act quickly and provide clarity to stakeholders in this area, and I appreciate that broad consultation has been conducted. I attended a workshop in Wellington on Sunday and appreciated the wide audience with diverse backgrounds, although I noted a lack of Māori representation and insufficient representation from some ethnic and religious minorities. The work that has been done so far should be seen as a stepping stone towards more consultation. DIA should be encouraged to broaden the conversation, and also to take the opportunity to educate more people on the role of the Censor and to build social licence around some of the longer-term changes to be proposed later on.
3. My subsequent comments focus on the six proposals that were presented as part of the workshops. These were framed as short-term interim fixes while a larger and longer review takes place. My comments are therefore limited to this context, and do not attempt to deal with the bigger underlying/foundational issues that will need to be better understood and consulted upon over the next couple of years.
4. The first proposal refers to giving the Chief Censor the power to make interim classification decisions. This seems like a good idea generally, allowing the Chief Censor to act quickly to reduce harm. A timeframe of 20 working days (effectively one month) is appropriate to give the Chief Censor enough time to craft a carefully worded decision that can be used as a precedent and in court cases. However, in the fast-moving media landscape of the digital age, the absence of official messaging creates a void that could be filled by pundits, conspiracy theorists, lobbyists, and others who may try to twist any classification to their own purposes. The danger is that if the Chief Censor does not justify in the short term why something has been classified as objectionable, it creates an environment that encourages misinformation and disinformation to flow.
5. I suggest that the legislation require an “interim classification decision” to be accompanied by a “summary decision” that gives a short description of the harms that the Chief Censor is trying to mitigate and how the content may lead to those harms. It should be made clear that this summary decision is not final, does not create a legal precedent, and is not fit for use as evidence in a court case, and that a full written decision is still required. This would also reduce the need to provide “clearly defined circumstances”, which can never fully cover all possible cases where such a power may need to be used, and would empower the Chief Censor to make these decisions more quickly to provide clarity and certainty for Government authorities.
6. The second proposal empowers DIA to issue take-down notices to online content hosts for content that has been deemed objectionable, and the fourth proposal adds penalties for online content hosts that do not comply with take-down notices. These proposals make sense, ensuring that DIA can notify online content hosts that something has been classified as objectionable, and take enforcement action to ensure that hosts comply. However, there are two broader concerns that should also be considered.
7. “Online content host” is a term used in the Harmful Digital Communications Act (HDCA). Firstly, any use of the term in the Films, Videos, and Publications Classification Act (FVPCA) needs to have a consistent definition to avoid confusion between the two pieces of legislation. Secondly, there appears to be some confusion in the technical/business community about who qualifies as an online content host – the definitions that I have seen are relatively broad, but it would be useful to take this opportunity to issue more guidance to organisations about whether or not they are subject to this legislation. We have some complex businesses in New Zealand that play multiple roles in digital technology. For example, it can be difficult to separate out the part of a business that is a “host” from the part that is responsible for “transmission”, or transferring data. It can also be difficult to distinguish “hosts” that may also be “publishers”. Providing more clarity (even if the definition has to be broadened further) could help strengthen both pieces of legislation.
8. The second concern is around derivative material. When a piece of content is declared to be objectionable, it may be easy for those with malicious intent to subtly change the content such that it could be argued to be materially different from the objectionable content and therefore legal. For example, a video with a completely replaced audio track could be argued to be sufficiently different from the original video that it would need to be considered a new piece of content. Another example could be a video of a real-life incident recreated as a cartoon animation, which is effectively a different medium. Still images from a video should also be included in the same classification decision. The Censor should not have to issue decisions on each derivative piece of content, or they could quickly become overwhelmed. The legislation could be amended to cover derivative content that would contravene the spirit of a decision from the Censor, subject to appeals. However, this is an issue that should be considered very carefully, as there are also freedom of speech considerations, particularly around satire and media reporting. This is something that may need to be considered in a longer-term review of the media and censorship landscape.
9. The third proposal refers to adding “livestream” to the definition of “publication” in the FVPCA. Putting aside that a “livestream” could arguably already be covered under part (d) of the definition of publication, specifically covering livestreaming makes sense given the current context and concerns following the Christchurch massacre. However, it raises a broader question of why publication is so narrowly defined in s 2 of the Act, and this could be an opportunity to provide a broader definition based on first principles. The danger is that legislation moves more slowly than technology, and we may find ourselves playing regulatory whack-a-mole, constantly waiting for bad things to happen before amending the legislation to cover new technology. A better approach would be to have an underlying definition that covers publication more broadly, for example “any recorded or repeatable communication” (which may not be the right definition and is only a suggestion). This may again be something that needs to be considered as part of a longer-term review.
10. The fifth proposal refers to clarifying that safe harbour under the HDCA does not apply to objectionable content. This largely makes sense, as the current legal position creates an unintended loophole. However, it also presents an opportunity to reconsider the role of safe harbour provisions in a New Zealand context more carefully. We adopted safe harbour from a US context, where it has allowed online content hosts to operate without needing to consider or take responsibility for the impacts of the content that they host. This is significantly different from the expectations that we place on other forms of media, where organisations like the Broadcasting Standards Authority, Media Council, and Advertising Standards Authority can punish publishers (i.e. owners of the distribution medium) for the content that they distribute, with no notion of safe harbour. It is important to acknowledge that some of this regulatory power does not come from government and is instead voluntary self-regulation within the relevant industry. However, there may still be a role for a similar organisation in the online space, beyond what is currently covered by the HDCA and Netsafe. If self-regulation is not forthcoming, then it may be up to government either to help encourage that self-regulation to occur, or to take the larger step of introducing its own regulatory body.
11. The sixth proposal refers to DIA considering the establishment of a web filter at the ISP level for violent extremist content. This is frequently compared to the existing voluntary filter for child exploitation material. However, the notion of a hard web filter is controversial and scares a lot of people. This proposal is on a completely different level from the other five in terms of impact and scale. Fears of scope or mission creep would be well justified – the idea that violent extremist content could be added to a child exploitation filter is in itself scope creep. There are significant technical concerns, and questions about who has control over the filter and how content is added to it remain open. My concern is that this proposal is essentially too much too fast, even though it is only a suggestion that DIA “consider establishing” the filter. The risk is that this proposal gives some people fuel to criticise the entire package of interim changes, and that it may generate distrust of DIA and the government amongst the technical community and civil society. My suggestion is that no legislative changes be made in association with this proposal, and that it be made very clear that this is a policy proposal for DIA to consider, develop, and consult more widely on, with a clear timeline that gives people at least a few years to react and respond rationally.
12. It is clear that these policy proposals sit in the context of responding to harms that accrued in the wake of the Christchurch terror attacks. My concern is that the language being used to describe the problems conveys a meaning that may be lost when translated into the language used in the policy and legislation itself. This may lead to unintended and unforeseen consequences outside of the context of terrorist or violent extremist content. For example, content relating to homosexuality that was historically declared objectionable, and may no longer be considered objectionable by today’s standards, could be covered by a regime that requires take-down notices to be issued to online content hosts and complied with. Similarly, we may find that there is violent content that should be taken down but does not meet a particularly narrow definition of promoting terrorism. The proposals refer to “violent extremist content”, but the definition given for this relies on the definition of “objectionable”, which is broader and covers more than just violent extremist content. Therefore, my suggestion is that, as a thought exercise, all of these recommendations be considered in the context of “objectionable” content, to see if policymakers still feel comfortable about these proposals with that broader lens applied. If not, then this may point to issues with the underlying proposals that are masked by the strongly charged language around terrorism and extremism.
Thank you again for the opportunity to make a submission at this early stage, and for taking much-needed action in this space while also consulting broadly.
---