Wednesday 6 November 2019

Submission on Countering Violent Extremist Content Online

The DIA is collecting submissions on their "interim" proposals for Countering Violent Extremist Content Online in the wake of the Christchurch terror attack. I attended a workshop in Wellington, and then decided to make a written submission as well. Before I get into my submission, here are the six policy proposals (I have taken the key points from each rather than giving the full text, which is unfortunately not available online):

1. It can be a time-consuming process for the Chief Censor to issue a decision on whether content is objectionable, which could delay and/or contribute to uncertainty about the removal of harmful content. The terrorist's manifesto was a lengthy, complex document and the Chief Censor had to consider delaying his classification in order to meet a requirement of publishing a written decision within five days of making a classification. It is proposed that the Chief Censor could make interim classification decisions, so that in clearly defined circumstances, the Chief Censor could make an interim decision without triggering the five-day limit and then make a full written decision within twenty working days.

2. Legislation does not sufficiently support government agencies to direct and enforce speedy removal of violent extremist content online by online content hosts. Government had no clear legal backing to tell online content hosts that failing to remove copies of the terror attack livestream was illegal. Companies that complied with this request operated under unclear legal requirements. It is proposed that the Department of Internal Affairs could issue take-down notices for violent extremist content online.

3. Outdated legislation does not sufficiently cover contemporary content like livestreaming (broadcasting events over the internet in real time). The relevant legislation was last amended in 2005, and the current legal definition of "publication" does not include livestreaming. It was unclear whether the actual livestreaming of the attack was a criminal offence. It is proposed that livestreaming should be included in the definition of a "publication", and therefore could be classified as objectionable by the Chief Censor.

4. It is not clear for online content hosts what responsibilities they have under New Zealand law if they host violent extremist content on their platforms. It was unclear whether the FVPC Act applied to overseas-based online platforms operating in a New Zealand market. It is proposed that penalties should be applied to online content hosts for non-compliance with a take-down notice.

5. Two interacting pieces of legislation (the Harmful Digital Communications Act (HDCA) and the Films, Videos, and Publications Classification Act (FVPCA)) create confusion and potentially a loophole for companies hosting harmful content. Subsequent legal analysis identified that online content hosts could have simply notified posters of the terrorist's video, waited two days to take it down, and been exempted from criminal liability under the FVPCA. It is proposed that the FVPCA should be amended to clarify that safe harbour provisions offered in the HDCA do not apply for objectionable content.

6. The government has no mechanism to filter sites that repeatedly do not comply with requests to remove objectionable content. In the wake of the terror attacks, some ISPs raised concerns on the issue and continue to request greater support to identify what content should be blocked. Additionally, certain websites refused to comply with requests to take down the video of the attack. As they are based overseas, we had little ability to force these sites to comply with NZ law and remove the video. It is proposed that DIA could consider establishing a web filter operating at the ISP level for violent extremist content. It is important to note that this is a policy proposal to consider this further, rather than a proposal to implement a web filter tomorrow.

---

Submission
1. Thank you for the opportunity to provide a written submission on this important issue. I am a Research Fellow with the Centre for Science in Policy, Diplomacy, and Society at The University of Auckland, conducting research on digital transformation and its impacts on society, including digital ethics and public policy relating to digital technologies.

2. It is positive that the DIA is looking to act quickly and provide clarity to the stakeholders in this area, and I appreciate that broad consultation has been conducted. I attended a workshop in Wellington on Sunday, and appreciated the wide audience with diverse backgrounds, although I noted a lack of Māori representation, and insufficient representation from some ethnic and religious minorities. The work that has been done so far should be seen as a stepping stone towards more consultation. DIA should be encouraged to broaden the conversation, and also take the opportunity to educate more people on the role of the Censor and build social license around some of the longer-term changes to be proposed later on.

3. My subsequent comments focus on the six proposals that were presented as part of the workshops. These were presented as being short-term interim fixes, while a larger and longer review takes place. My comments are therefore limited to this context, and do not attempt to deal with the bigger underlying/foundational issues that will need to be better understood and consulted upon over the next couple of years.

4. The first proposal refers to giving the Chief Censor the power to make interim classification decisions. This generally seems like a good idea, allowing the Chief Censor to act quickly to reduce harm. A timeframe of 20 working days (effectively one month) would be appropriate for giving the Chief Censor enough time to carefully word and craft a decision that can be used as a precedent and in court cases. However, in the fast-moving media landscape of the digital age, the absence of official messaging creates a void that could be filled by pundits, conspiracy theorists, lobbyists, and others who may try to twist any classification to their own purposes. The danger is that if the Chief Censor does not justify in the short term why something has been classified as objectionable, an environment is created that encourages misinformation and disinformation to flow.

5. I suggest that the legislation require an “interim classification decision” be accompanied by a “summary decision” that gives a short description of the harms that the Chief Censor is trying to mitigate and how the content may lead to those harms. It should be made clear that this summary decision is not final, does not create a legal precedent, and is not fit for use as evidence in a court case, and that a full written decision is still needed. This would also reduce the need to provide “clearly defined circumstances”, which can never fully cover all possible cases where something like this may need to be deployed, and would empower the Chief Censor to make these decisions more quickly to provide clarity and certainty for Government authorities.

6. The second proposal empowers DIA to make take-down orders to online content hosts for content that has been deemed objectionable, and the fourth proposal adds penalties to online content hosts if they do not comply with take-down notices. These proposals make sense, ensuring that DIA can notify online content hosts that something has been classified as objectionable, as well as take enforcement action to ensure that hosts comply. However, there are two broader concerns that should be considered as well.

7. “Online content host” is a term used in the Harmful Digital Communications Act (HDCA). Firstly, any use of the term in the Films, Videos, and Publications Classification Act (FVPCA) needs to have a consistent definition to avoid any confusion between the two pieces of legislation. Secondly, there appears to be some confusion in the technical/business community about who qualifies as an online content host – the definitions that I have seen are relatively broad, but it would be useful to take this opportunity to issue more guidance to organisations about whether they are subject to this legislation or not. We have some complex businesses in New Zealand that play multiple roles in digital technology. For example, it can be difficult to separate out the part of the business that is a “host” from the part of the business that is responsible for “transmission”, or transferring data. It can also be difficult to distinguish “hosts” who may also be “publishers”. Providing more clarity (even if the definition has to be broadened further) could help strengthen both pieces of legislation.

8. The second concern is around derivative material. When a piece of content is declared to be objectionable, it may be easy for those with malicious intent to subtly change the content such that it could be argued that it is materially different to the objectionable content and therefore legal. For example, a video with a completely replaced audio track could be argued to be sufficiently different to the original video and would need to be considered a new piece of content. Another example could be if a video of a real-life incident is recreated as a cartoon animation, which is effectively a different medium. Still images from a video should also be included in the same classification decision. The Censor should not have to issue decisions on each derivative piece of content, or they could quickly become overwhelmed. The legislation could be amended to cover derivative content that would contravene the spirit of a decision from the Censor, subject to appeals. However, this is an issue that should be very carefully considered, as there are also freedom of expression considerations, particularly for satire or media reporting. This is something that may need to be considered in a longer-term review of the media and censorship landscape.

9. The third proposal refers to adding “livestream” to the definition of “publication” in the FVPCA. Putting aside that a “livestream” could arguably be covered under part d) of the definition of publication, specifically covering livestreaming makes sense given the current context and concerns following the Christchurch massacre. However, it raises a broader question of why publication is so narrowly defined in s 2 of the Act, and this could be an opportunity to provide a broader definition based on first principles. The danger is that legislation moves more slowly than technology, and we may find ourselves playing regulatory whack-a-mole, constantly waiting for bad things to happen before amending the legislation to cover new technology. Instead, a better approach would be to have underlying definitions that cover publication more broadly, for example “any recorded or repeatable communication” (which may not be the right definition here and is only a suggestion). This may again be something that needs to be considered as part of a longer-term review.

10. The fifth proposal refers to clarifying that safe harbour under the HDCA does not apply to objectionable content. This largely makes sense, as the current legal position creates an unintended loophole. However, this presents an opportunity to reconsider the role of safe harbour provisions in a New Zealand context more carefully. We adopted safe harbour from a US context, where it has allowed online content hosts to operate without needing to consider or take responsibility for the impacts of the content that they are hosting. This is significantly different to the expectations that we place on other forms of media, where organisations like the Broadcasting Standards Authority, Media Council, and Advertising Standards Authority can punish publishers (i.e. owners of the distribution medium) for the content that they distribute, with no notion of safe harbour. It is important to acknowledge that some of this regulatory power does not come from government and is voluntary/self-regulated within the relevant industry. However, there may still be a role for a similar organisation to play in the online space, beyond what is currently covered by the HDCA and Netsafe. If self-regulation is not forthcoming, then it may be up to government to either help encourage that self-regulation to occur, or take the larger step of introducing its own regulatory body.

11. The sixth proposal refers to DIA considering the establishment of a web filter at the ISP level for violent extremist content. This is frequently compared to the existing voluntary filter for child exploitation material. However, the notion of a hard web filter is controversial, and scares a lot of people. This proposal is on a completely different level to the other five in terms of impact and scale. Fears of scope or mission creep would be well justified – the idea that violent extremist content could be added to a child exploitation filter is in itself scope creep. There are significant technical concerns, and questions about who has control over the filter and how content goes into the filter remain open. My concern is that this proposal is essentially too much too fast, even though it is only a suggestion that DIA “consider establishing” the filter. The risk is that this proposal gives some people fuel to criticise the entire package of interim changes, and that it may generate distrust of DIA and the government amongst the technical community and civil society. My suggestion is that no legislative changes be made in association with this proposal, and that it be made very clear that this is only a proposal for DIA to consider, develop, and consult more widely on such a web filter, with a clear timeline that gives people at least a few years to react and respond rationally.

12. It is clear that these policy proposals sit in the context of responding to harms that arose in the wake of the Christchurch terror attacks. My concern is that the language used in describing the problems conveys a meaning that may be lost when translated into the language used in the policy and legislation itself. This may lead to unintended and unforeseen consequences outside of the context of terrorist or violent extremist content. For example, content relating to homosexuality that was historically declared objectionable, and may no longer be objectionable by today’s standards, could be covered by a regime that requires take-down notices to be issued to online content hosts and complied with. Similarly, we may find that there is violent content that should be taken down, but doesn’t meet a particularly narrow definition of promoting terrorism. The policies refer to “violent extremist content”, but the definition given for this relies on a definition of “objectionable” which is broader and covers more than just violent extremist content. Therefore, my suggestion is that as a thought exercise, all of these recommendations should be considered in the context of “objectionable” content, to see if policymakers still feel comfortable about these proposals with the broader lens applied. If not, then it may point to issues with the underlying proposal that may be masked by the strongly charged language around terrorism and extremism.

Thank you again for the opportunity to make a submission at this early stage, and for taking much needed action in this space while also consulting broadly.

---

Additional issues that I didn't cover because I felt that I couldn't explain the points well enough: issues around overseas online content hosts and enforceability, appeals processes (which are somewhat covered under existing legislation), being bold and using GDPR-like legislation that covers all people in NZ as well as all NZ citizens anywhere in the world, and a multitude of further concerns about the risks and dangers of having a hard web filter at the ISP level that have already been covered by the tech community and civil society at large.

Friday 18 October 2019

A couple of tweet threads from the last 12 months

Mostly so that I don't lose these in case I need to come back and find them again:

NZH covers the increasing number of CCTV cameras in NZ: https://twitter.com/andrewtychen/status/1203437923950968832

The time I attended a workshop on regulating facial recognition technologies in New Zealand, hosted by VUW: https://twitter.com/andrewtychen/status/1185079461152092161

The time I went to NetHui and attended sessions about:
- AI and Ethics: https://twitter.com/andrewtychen/status/1179894601403883520
- Digital Inclusion: https://twitter.com/andrewtychen/status/1179879434771238912
- Forming a Disinformation Response Plan: https://twitter.com/andrewtychen/status/1179588969622671361
- Freedom of Expression: https://twitter.com/andrewtychen/status/1179854990514388992
- Blockchain: https://twitter.com/andrewtychen/status/1179531136583626752
- Environmental Impacts of the Internet: https://twitter.com/andrewtychen/status/1179515237784903680

Updates to the NZ Political Polling Visualisation: https://twitter.com/andrewtychen/status/1166902555571417089

Auckland Transport increasing their CCTV presence and being on RNZ: https://twitter.com/andrewtychen/status/1160628434390802432 and https://twitter.com/andrewtychen/status/1160724532363067392

Figuring out where this package from the UK was supposed to go to instead of my house: https://twitter.com/andrewtychen/status/1146645040912982016

Reacting to Kinley Salmon's Jobs, Robots, and Us: https://twitter.com/andrewtychen/status/1132913965418160128

At a Techweek debate about AI and impacts on society: https://twitter.com/andrewtychen/status/1130715566786764803

The Harmful Digital Communications Act: https://twitter.com/andrewtychen/status/1125273660351115265

Why don't we track hate crimes in NZ? A subset of thoughts after Christchurch: https://twitter.com/andrewtychen/status/1106646705343074305

Some people protesting the UN Migration Compact: https://twitter.com/andrewtychen/status/1074528168172707841

Engineering academics don't care about me caring about privacy: https://twitter.com/andrewtychen/status/1069467564156178432

Computer vision technologies and applications presented at AVSS18: https://twitter.com/andrewtychen/status/1068269138093592576

Thursday 17 October 2019

Submission on the Government Draft Algorithm Charter

This is a submission responding to a request for feedback from StatsNZ, available here: https://data.govt.nz/use-data/analyse-data/government-algorithm-transparency-and-accountability (and the Charter itself is here: https://data.govt.nz/assets/Uploads/Draft-Algorithm-Charter-for-consultation.pdf).

Thank you for the opportunity to provide feedback on the New Zealand Government Algorithm Charter. Firstly, it is important to say that the development of this draft represents a positive step forward, and I hope that all government agencies will take it seriously and incorporate it into their operational practice.

I have a few suggestions that could help strengthen the Charter:

1. While punitive measures may not be necessary, some form of enforcement or monitoring should be implemented alongside the Charter to ensure that the principles are being upheld. An annual algorithm scorecard, checklist, or similar tool could help give an indication of which agencies are successfully or unsuccessfully upholding the Charter.

2. I am encouraged by the third bullet point that requires communities, particularly minority and marginalised communities, to be consulted about the use of algorithms where it affects them. However, it is a little worrying that “as appropriate” has been included without explanation of what appropriateness means, which could be interpreted in different ways. I assume that in this case it refers to consulting with the appropriate groups of people subject to a particular algorithm, rather than public sector employees deciding whether it is appropriate or not to consult at all. In my opinion, it would be better to remove “as appropriate” to avoid the potential for that misunderstanding. 

3. It would also be helpful to require active consultation – not just based on desk research or one-way submissions, but processes that require government agencies to physically go out into the communities and talk to people, in person, about their perspectives on these algorithms. I appreciate that this may be costly, but it is an important step to establishing social license and helping people understand the choices and impacts.

4. As part of the fifth bullet point about publishing how data are collected and stored, it would be helpful to also include a commitment to have systems/procedures in place that allow people to see, correct, and remove their data from any system (if it is not held anonymously). These are principles in the Privacy Act already, but they need reinforcement to ensure that the appropriate functionality is built in and can be activated when needed.

5. The eighth bullet point indicates that the implementation and operation of algorithms will be monitored for unintended consequences such as bias. Ongoing monitoring is critical and this is a positive commitment. However, agencies should develop an understanding of potential unintended consequences before algorithms are deployed as well. In particular, it is important to understand potential errors, how often they may occur, and the consequences of these errors. Similar to a Privacy Impact Assessment, an Algorithm Impact Assessment would more broadly check for possible negative impacts. Appropriate checks and balances should be implemented to ensure that negative consequences do not happen silently/invisibly, and that there is sufficient ability for humans to intervene if an algorithm has made an error. Just like humans, algorithms are extremely rarely 100% accurate, and so the potential for error needs to be properly understood before implementation.

6. Algorithms increasingly rely on appropriate models being developed, which in turn rely on there being sufficient data that is representative of the people who will be the subjects of, or subject to, the algorithm. In my opinion, it would be useful to explicitly acknowledge that for some people or groups of people, there may simply be no data available to represent them, which leads to models that do not accurately reflect the real world (which is one form of bias) and therefore leads to algorithms making errors. Government agencies need to understand which people are represented in their models, so that appropriate decisions about the use of those algorithms can be made. For example, it may be appropriate for some people to not be subject to an algorithm, simply because the algorithm won’t work for them because the underlying model doesn’t represent them, and a manual process is required instead. Recent migrants are an example of people who may be negatively impacted by the use of algorithms that rely on models that do not represent them.

7. Many new algorithms use artificial intelligence and machine learning methodologies, with increasingly complex systems that are hard for any human to understand. In my opinion, it would be helpful to include a bullet point that encourages government agencies to actively support algorithmic transparency and explainability, including through the use of data visualisation. This is different to offering technical information about the algorithms and the data, and would encourage agencies to develop plain English explanations or interactive experiences that help people understand how the algorithms operate.

8. In my opinion, nothing in this Charter stifles innovation, and agencies should be discouraged from treating innovation and transparency as being on opposite ends of a trade-off. The Charter encourages government agencies to use best practice already. The transparency encouraged through this Charter not only protects people’s rights and increases confidence, but it can help improve the quality of the algorithms and models, as well as build social license with the people who may be subject to these algorithms. Government algorithms can have wide-reaching and long-lasting impacts, and so it is only appropriate to have principles that ensure high-quality decisions are being made.

Thank you again for the opportunity to make a submission, and I view it as positive that the government is seeking broader input on this important topic.

Ngā mihi nui,
Dr. Andrew Chen
Research Fellow, Centre for Science in Policy, Diplomacy, and Society (SciPoDS)
University of Auckland

Friday 17 May 2019

Thoughts on the Harmful Digital Communications Act



With increasing pressure against social media companies, it's worth looking at the Harmful Digital Communications Act in NZ again. The HDCA came into force in 2015 with the aim to "deter, prevent, and mitigate harm" caused to individuals by digital communications and to provide victims with a means of redress. The Act introduced both civil penalties and criminal offences, giving victims different pathways to recourse depending on the type of harm experienced. Netsafe was appointed to operate the civil regime, and is tasked with receiving and resolving complaints in the first instance (analogous to the Office of the Privacy Commissioner for the Privacy Act). Netsafe will also assist with removal of content from the internet where possible, working with content hosts in New Zealand. Police are responsible for the criminal regime, which covers the more serious cases.

One of the main aims of the legislation was to produce social change, making online spaces safer for New Zealanders to participate in. There was particular focus on cyber-bullying, and the impacts of online harm on young people, especially as it contributes to our growing mental health crisis. The procedures were also designed to be accessible for victims, both in terms of speed and cost. While there were concerns at the time over the chilling and suppressive effect the legislation could have on freedom of expression, many MPs said that the pressing harms being perpetrated online far outweighed those concerns; arguably, the Act has not had any tangible effect on freedom of expression in the subsequent years. The legislation has also become clearer over time as case law has built up, with some clarity being provided around the tests and thresholds for harm.

While the legislation is relatively young, this may be an opportunity to highlight the challenges faced by the Act going into the future, and to make adjustments or corrections to minimise harm sooner.

a) In each of the three subsequent years, [18, 85, 107] people were charged with offences under the HDCA, and [12, 42, 53] people were convicted, respectively. The majority of cases have related to revenge pornography, while incidences of hate speech, racial harassment, and incitement to commit suicide have been largely unpursued. (Interestingly, 65-75% of people charged with HDCA offences plead guilty. Unsurprisingly, 90% of those charged have been men.)

While the principle of proportionality is important, the lack of consequences for harmful digital communications at the lower end of the scale means that the new Act has little deterrent effect, and arguably has not shifted societal attitudes or behaviours in this area. The Act requires that digital communications be directed at an individual to be deemed harmful, but is there scope to amend the Act to cover other cases where groups of people or sectors of society are suffering harm? Arguably, more harm overall is being perpetrated in cases where it affects many people at once.

b) The need to demonstrate harm has proven to be a difficult barrier, with a number of cases dismissed simply because the prosecution could not show that there was sufficient harm, especially when harm is defined ambiguously and subject to interpretation by Judges. What further guidance needs to be given about establishing harm, and what recourse can there be for legitimate victims who do suffer harm but may not meet the statutory threshold? There have been comments by lawyers that what was initially unclear has now become clearer over time, but at what (social and personal) cost was this clarity achieved?

c) One of the aims is to provide “quick and efficient” redress, but how fast is the Netsafe/District Court process in reality? What incongruities lie between the fast, digital nature of the harm and the slow, physical nature of the recourse process, and could technology be better used to help accelerate these processes?

d) Enforcement has struggled against the prevalence of anonymous perpetrators, leaving victims without recourse. The District Court process can order parties to release the identity of the source of an anonymous communication, but how often/well is this used? Sometimes this is technically impossible (e.g. when people share an internet connection and the identity cannot be confirmed). Is this something that technology can help with?

e) Amongst these issues, it may also be worth re-investigating the notion of safe harbour – should content hosts be protected from liability for failing to moderate harmful digital communications? Currently, as long as they respond to complaints within 48 hours, they can get away with the argument of "we are just the messenger and are not responsible for the content". Can we enforce safe harbour requirements on platforms operated by companies overseas? Weak enforceability (or in some cases, social media companies belligerently ignoring the law and saying it doesn't apply to them) challenges notions of big multinational companies taking the law seriously. Do we need to be braver and have stronger penalties?

So we come back to the purpose of the Act - Hon Amy Adams (Minister of Justice at the time) said "this bill will help stop cyber-bullies and reduce the devastating impacts their actions can have." Has it done so, sufficiently, over the course of 3+ years? The HDCA is currently under review by the Ministry of Justice, so hopefully some of these issues are already being looked at. But the review doesn't seem to have a public consultation element, so there isn't much visibility for the rest of us to see what's happening.

Monday 22 April 2019

Paper Title: The IEEE Conference Proceedings Template Text

It all started with IEC 61499 function blocks - a way of modelling industrial systems using pictorial representations in a standardised, and therefore programmable, way. It is used widely around the world, and a lot of research effort has gone into enhancing its capabilities and making it more usable in real-world applications. The paper "Remote Web-Based Execution of IEC 61499 Function Blocks" (ID:7090220), published at the 6th Electronics, Computers, and Artificial Intelligence (ECAI) Conference in 2014, described a prototype that integrated IEC 61499 with web technologies in a safe and secure way. The introduction of the paper suggests that this might allow for computationally expensive tasks like iterative optimisation or image processing to be executed on the cloud, with results used to control specific function blocks. The introduction also suggests that "this template, modified in MS Word 2003 and saved as "Word 97-2003 & 6.0/95 - RTF" for the PC, provides authors with most of the formatting specifications needed for preparing electronic versions of their papers."

If the last point seems incongruous with the highly technical subject matter of the paper, that is because it comes from the IEEE Conference Proceedings Template. The authors of that paper used the template to start the writing of their paper, and while they deleted most of the original template text, a large chunk of text was simply forgotten and submitted. In total, 147 words from the introduction section of the IEEE template remain in this IEEE Xplore-published paper. The rest of the paper seemed original and interesting, yet this passage of text clearly should not have been in the final paper. How did a paper with such a large block of text from the IEEE template make it past peer-review, plagiarism checks, and eXpress PDF checks to become indexed and published on IEEE Xplore?


This started a journey into uncovering just how widespread this issue was. Thousands of IEEE Xplore-published papers were discovered that contain at least some text matching the IEEE conference template. I thought it might be worth documenting the process. This blog article covers how these papers were found, briefly describes how IEEE was informed about the issue and how they responded, and offers some opinions on the systematic failures that have allowed these errors to go unnoticed.

In most cases, I believe that the presence of template text in a paper was just a genuine mistake on the part of the authors. In many of the papers that I read, there is legitimate scientific work being reported that is of value to the academic community, and there may only be a few sentences of template text. It is not my intention to offend or embarrass any of these authors. Therefore, rather than referring to papers by their full title or authors, I mostly refer to them by their IEEE Xplore ID numbers. Readers interested in tracking these papers down can search for the ID number in IEEE Xplore to retrieve publication details.

Data Collection
The methodology was pretty simple - I used Google Scholar to search for papers that match some part of the IEEE conference template text. This was because Google Scholar's exact quote search seemed to be more accurate than the IEEE Xplore search. Each search used quote marks in order to get exact matches only. Google Scholar's search results were restricted to only those matching site:ieeexplore.ieee.org. Google Scholar has an undocumented limit to the length of each search query. Empirically, this appears to be 256 characters, so after taking the site filter into account, each query can be a maximum of 232 characters. After each search, a random sample of the papers was checked to make sure that the search was accurate and that the queried text was in fact in the paper (the examples in the Table below have been manually verified). Unfortunately, since Google Scholar does not offer an API, and scraping the website is against the Terms of Service, all of the data collection was done manually. The Table below shows some of the queries that were run, and gives an indication of the scale of this problem. While this cannot be interpreted as an exact count of papers that contain template text, it is hoped that this analysis gives a sense of the scale of the problem; it is not limited to a handful of papers. Hundreds, if not thousands, of papers have some template text in them. This search was done in June 2017, so the numbers will have increased since then (estimated at approximately 5-10%).
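For illustration, here is a rough Python sketch of how exact-phrase queries could be packed to fit within those limits. The greedy word-packing logic is my own reconstruction for this article; in practice, the queries were chosen and run by hand, since there is no API and scraping is against the Terms of Service.

    # Sketch: pack template text into exact-phrase Google Scholar queries
    # that fit the empirically observed 256-character limit.
    SITE_FILTER = "site:ieeexplore.ieee.org"
    MAX_QUERY_LEN = 256
    # the space separator and the two quote marks also count towards the limit
    MAX_PHRASE_LEN = MAX_QUERY_LEN - len(SITE_FILTER) - 3

    def build_queries(template_text):
        """Greedily pack words into the longest exact-phrase queries that fit."""
        queries, phrase = [], ""
        for word in template_text.split():
            candidate = (phrase + " " + word).strip()
            if len(candidate) > MAX_PHRASE_LEN:
                queries.append(f'"{phrase}" {SITE_FILTER}')
                phrase = word
            else:
                phrase = candidate
        if phrase:
            queries.append(f'"{phrase}" {SITE_FILTER}')
        return queries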


There are two important caveats that prevent us from simply adding up the number of papers to find the total number of papers that match template text. Firstly, it is probable that there are some papers that appear in more than one search. In manual checks, most papers appeared in only one of these searches, meaning that the amount of template text was relatively small in most papers (usually from one of the sections or just one of the sentences), but without further analysis no strong claim can be made here. Secondly, Google Scholar's search may not be perfect, and it is possible that papers may be listed more than once in the search results or listed when there actually is no match. In some cases, the authors have hidden the text so that it is not readable to humans (e.g. making the text white or placing a figure over the text), but is still searchable by computers, leading to an erroneous listing in the search results.

Importantly, there are also reasons to suspect that these numbers may undercount the actual number of papers matching template text. Firstly, these searches only cover papers where the PDFs are text-searchable. A small proportion of conferences have uploaded their papers as scanned PDF files that are essentially images without searchable text, which may not appear in these results. This is more likely to happen for older conferences. Secondly, even slight changes such as an additional word or an extra space could result in a paper not being included in the search results, because exact quotes were sought. It should be noted that although we are primarily interested in papers published on IEEE Xplore, the IEEE conference template is widely used around the world for other publishers as well, and there are large numbers of papers published outside of the IEEE that also contain text from this template, but these were excluded because of the site filter used in the search query.

(Some Other) Analysis
The IEEE conference template file also includes seven references. A number of papers have failed to remove these references or have re-used some of them, artificially inflating the citation counts of the works referenced in the template. We can easily assess the magnitude of this issue, because two of the references in the template are not for real publications. K. Elissa's work, "Title of paper if known" (unpublished), and R. Nicole's work, "Title of paper with only first word capitalized" (published in the J. Name Stand. Abbrev.), have been cited over a thousand times by IEEE Xplore papers according to Google Scholar (1440 and 1110 respectively). There are some issues with this result, as Google Scholar's citation tracking is not perfect, but I have found IEEE Xplore papers that cite these papers directly, such as ID:5166784 and ID:5012315. Some papers only appear if the reference text is searched directly, as sometimes these placeholder references appear appended to the end of legitimate references, such as ID:6964641. Meanwhile, the other five real references have received an artificial boost to their citation counts - James Clerk Maxwell's "A treatise on electricity and magnetism" had plenty of citations anyway, but a non-negligible number of these citations, such as ID:6983343, are not genuine.

As far as I can tell, the current IEEE conference template was created around 2002/2003, based on the IEEEtran LaTeX class made by Michael Shell. It therefore makes sense that the earliest paper that was found with template text was from a conference in 2004 (ID:1376936), although at this point in time most papers were still scanned into IEEE Xplore and not text-searchable.

The most egregious case, ID:6263645, was literally just the IEEE conference template in full with the title changed. Even the authors section of the paper was from the template. How was this paper accepted and published? The conference website seems to suggest that only the abstracts were peer-reviewed, with full submission of the papers after notification of acceptance to authors. The conference website includes the text "Failure to present the paper at the conference will result in withdrawal of the paper from the final published Proceedings," which implies that a presentation was made, since the paper was published to IEEE Xplore. But perhaps no one checked the uploaded paper itself after the conference.

After this paper was reported to the IEEE, it was removed several months later "in accordance with IEEE policy", although evidence of the original paper is still available through secondary sources such as ResearchGate and SemanticScholar, which carry the original abstract. In fact, the website DocSlide contains a copy of the full text of the paper. It is important to note that this paper appeared in the conference schedule and proceedings table of contents, alongside legitimate papers in a legitimate conference. As stated earlier, my intention in investigating and reporting template text in conference papers is not to punish or embarrass the authors who have made these errors, as I believe that in most cases these errors were made unintentionally, and there is still scientific merit in the papers that outweighs the impact of these errors. I am not advocating for papers containing template text to be removed from IEEE Xplore. However, in cases like ID:6263645, where the whole paper is nothing but template text, the paper is so flagrantly against the spirit of academic publication that there is little choice but to remove it.

The IEEE Response
Members of our research group first notified IEEE about this in July 2017. After much searching for the correct process for reporting this type of issue, we tried to contact the IEEE Publication Services and Products Board (PSPB) Publishing Conduct Committee. However, no contact details were to be found anywhere, so we e-mailed the Managing Director of IEEE Publications. Eventually our report made its way to the Meetings, Conferences and Events (MCE) team, where the matter was placed under investigation and a slow internal process began. Every couple of months we would e-mail for an update, and be told that the investigation was ongoing and that we would be notified when it was concluded, but that they would be unable to report on each individual instance. IEEE assured us that "IEEE has been fully assessing the situation regarding this circumstance, and putting the appropriate time and resources into investigating this issue thoroughly." To my knowledge, ID:6263645 was the only paper that was removed, since it contained no original content other than the title (and I am not advocating for papers that only have a few sentences of template text to be removed).

Since our original report, in May 2018 the following text was added to the IEEE conference template page (partly in bold) and in the actual template files at the end (in red):


IEEE conference templates contain guidance text for composing and formatting conference papers. Please ensure that all template text is removed from your conference paper prior to submission to the conference. Failure to remove template text from your paper may result in your paper not being published.

This is slowly being reflected in copies of the template as it propagates throughout the world for new conferences. Will this action by the IEEE resolve the problem?

In the subsequent year or so (to April 2019), Google Scholar suggests that there are 18 papers published on IEEE Xplore that contain the above warning text. A manual check over these papers reveals that authors have changed the text colour of the warning to white for most of these papers (which makes it invisible to humans, but not to computers), leaving four papers that contain the new template text. This includes ID:8580104, which appears to be a new paper from a 2018 conference that is just the new template published in its entirety (which we have just informed the IEEE about). Maybe the new warning in the template has helped reduce the rate of incidence, but cases are still slipping through.
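Detecting the white-on-white trick programmatically is not hard in principle. Here is a minimal sketch using PyMuPDF; this is purely illustrative and is not, as far as I know, how IEEE or anyone else actually screens papers.

    # Sketch: flag text rendered in pure white, which is invisible to human
    # readers but still indexed by search engines.
    import fitz  # PyMuPDF

    WHITE = 0xFFFFFF  # sRGB integer value for pure white

    def find_white_text(pdf_path):
        """Return all non-empty text spans coloured pure white."""
        hidden = []
        with fitz.open(pdf_path) as doc:
            for page in doc:
                for block in page.get_text("dict")["blocks"]:
                    for line in block.get("lines", []):  # image blocks have no lines
                        for span in line["spans"]:
                            if span["color"] == WHITE and span["text"].strip():
                                hidden.append(span["text"])
        return hidden

Note that this would not catch the other trick mentioned earlier, where a figure is placed over the text.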

Systematic Failures?
The IEEE claims to publish "more than 1,500 leading-edge conference proceedings every year". While the standards of IEEE are high, it is understandable that with so many papers being published every year, some will inevitably slip through the cracks of quality control. It could even be argued that a couple of papers out of the hundreds of thousands published by IEEE each year is relatively insignificant. However, we should still seek to understand why so many papers containing template text, something which should be easily avoidable, have been published in the IEEE Xplore database.

Similarity Checks
The IEEE requires that all papers submitted for publication be checked for plagiarism. It is important to note here that the inclusion of template text in a paper is not generally intentional plagiarism. However, the method for automatically detecting template text, similarity analysis, is more commonly used for identifying plagiarism. In the case of conferences, all organisers are expected to screen their papers for plagiarism. Any papers that are not screened during manuscript submission are checked by the Intellectual Property Rights (IPR) Office before the papers are published on IEEE Xplore. The point to emphasise here is that IEEE claims that, at some point, every paper passes through a standard plagiarism check before publication.

The IEEE has its own portal, CrossCheck, which program chairs and other conference proceeding organisers can use to check for plagiarism. It is essentially an IEEE-branded front-end, with iThenticate running as the back-end engine. iThenticate is arguably the world's leading plagiarism checking service, and is also used by Turnitin, CrossRef, many universities, and others. The strength of CrossCheck in particular is that all participating organisations agree to provide full-text versions of their content, so that they can build up a large corpus of work and increase the probability of catching plagiarised text. It stands to reason that a plagiarism checking service as powerful as this should be able to detect text from the IEEE conference template and alert reviewers/editors/organisers.

However, anecdotally, I have heard that for many conferences the rule of thumb is that a paper should have an overall similarity score of less than 30%, and a similarity score with any single source of less than 7%. If the similarity scores exceed these thresholds, in most cases authors are given an opportunity to edit and reduce their similarity scores, or the paper is rejected. In paper management systems like EDAS, an alert is only generated if the similarity score exceeds a threshold; otherwise it is normally assumed that the paper doesn't have significant plagiarism and can be reviewed.

The template text problem shows an issue with this percentage-based approach - one or two sentences can easily fall below these thresholds and avoid automatic detection. In a 6-8 page conference paper, even an entire paragraph of template text may only constitute 1-2% of the overall paper. If the IEEE template appears towards the bottom of the similarity report, then it is likely to be missed by publication volunteers and staff, if the similarity report is checked at all.
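To make the arithmetic concrete, here is a toy version of that alert logic. The 30% and 7% figures are the anecdotal rule-of-thumb thresholds described above, not documented defaults of any particular system.

    # Toy illustration of the rule-of-thumb similarity thresholds.
    OVERALL_THRESHOLD = 0.30        # overall similarity score
    SINGLE_SOURCE_THRESHOLD = 0.07  # similarity with any single source

    def triggers_alert(matched_words_by_source, total_words):
        """Return True if either rule-of-thumb threshold is exceeded."""
        overall = sum(matched_words_by_source.values()) / total_words
        worst = max(matched_words_by_source.values()) / total_words
        return overall > OVERALL_THRESHOLD or worst > SINGLE_SOURCE_THRESHOLD

    # A ~100-word template paragraph in a ~5000-word paper scores 2% overall
    # and 2% against its single source - well below both thresholds.
    print(triggers_alert({"IEEE template": 100}, 5000))  # False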

Perhaps we should recognise that not all sentences are equal, and that some matching sentences are more problematic than others. One possible solution is to develop similarity checks that use two corpora: one corpus that contains the current collection of internet and otherwise published sources, and another corpus that contains privileged text that should never appear in texts passed through the similarity check. If there is any sentence in the paper that exactly matches one in the second corpus, then that should produce an alert at the top of the similarity report. Examples of passages to include in this second corpus include template text from different publishers, lorem ipsum, and other sources that contain text that should never (or very rarely) appear in a published paper. A human reviewer is still required to interpret the results of these similarity reports to ensure that false positives do not hinder or prevent the publication of good papers.
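A minimal sketch of the idea, assuming sentence-level exact matching after normalisation; the corpus entries shown are examples only, one taken from the template warning text quoted earlier.

    # Sketch: a second, "privileged" corpus where any exact sentence match
    # should raise an alert regardless of the overall percentage score.
    import re

    PRIVILEGED_CORPUS = {
        # stored in normalised form: lowercase, letters and spaces only
        "please ensure that all template text is removed from your conference paper prior to submission to the conference",
        "lorem ipsum dolor sit amet consectetur adipiscing elit",
    }

    def normalise(sentence):
        letters_only = re.sub(r"[^a-z ]", " ", sentence.lower())
        return re.sub(r"\s+", " ", letters_only).strip()

    def privileged_matches(paper_text):
        """Return sentences that exactly match the privileged corpus."""
        sentences = re.split(r"(?<=[.!?])\s+", paper_text)
        return [s for s in sentences if normalise(s) in PRIVILEGED_CORPUS]

Because the match is exact and sentence-level, even a single hit is meaningful, no matter how little it contributes to the overall percentage score.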

Peer Review
Conference peer-review is generally of lower quality than journal peer-review. There are, of course, exceptions in terms of the highest level conferences and the lowest quality journals, but overall, review expectations are lower for conference publications. The shorter review periods and lower standards disincentivise reviewers from spending too much time conducting their reviews. Anecdotally, recruiting reviewers for conferences has become increasingly difficult as the number of publication opportunities grows.

One of the problems with the presence of template text is that there should be no cases where including the template text makes any logical sense in the context of the paper (unless it was a paper about the template text like this one could have been). If a reviewer has read the paper, then this error should be obvious. So why has peer-review failed to detect the template text?

First of all, it appears that some of the papers that are published in IEEE Xplore have not actually been peer-reviewed. In some cases, only conference abstracts are peer-reviewed, and once accepted, the subsequent paper is not reviewed at all. In these cases, the fault does not lie with the reviewers, but demonstrates that this model of publication is flawed and easily exploitable.

Where reviewers do spot template text, there is generally limited opportunity for them to inform authors. There may be a field in the paper review system to enter some comments. If the reviewer is motivated enough, then they might indicate to the authors exactly where the template text in their paper is. But in my experience, conference paper reviewers tend to provide higher-level feedback, looking at the contribution and novelty of the paper, rather than specific grammar or spelling errors. After all, these should be caught during proof-reading.

Even if the reviewer has provided the feedback to the authors that template text is in the paper, there is generally no opportunity for anyone in the process to make sure that the template text has been removed. For many conferences, there is only one round of review, and therefore reviewers do not see the papers again after camera-ready submission. Program Chairs and Publication Chairs cannot be expected to read and check every single paper. So if a paper is accepted but the authors have ignored the feedback provided by the reviewers, then chances are, it will go straight through to publication and appear in IEEE Xplore.

However, following Occam's Razor, the obvious answer here is that not all reviewers are fully reading their assigned papers. It is easy for template text to slip past the review process if no one actually reads the template text in the paper. This is perhaps an uncomfortable truth, and cannot be easily proven (or disproven).

The issues that are discussed here are symptomatic of a wider challenge in scientific peer-review. The issue of predatory open-access journals that publish papers without sufficient (or any) peer-review has been well publicised. One has to wonder if similar issues have affected conferences as well. Solving the issues of peer-review is well above my pay grade, and there is a wide range of literature on the subject across many academic disciplines.

The sheer scale of this problem indicates another major issue - the general apathy of the academic community towards this behaviour. Many of these papers have hundreds of reads, and some have even been cited. Apparently we were the first to report these issues to the IEEE. Does this mean that this isn't really a significant problem, and that no one really cares? The impact is probably relatively small, with most readers accessing the paper for the meaningful scientific content and being smart enough to ignore the template text, right? One could have said the same about the authors who published these papers in the first place.

Conclusions
So, maybe the impact of this template text being in published papers is negligible beyond it being a source of some amusement and entertainment. But at the same time, it can be seen as a symptom of the wider issues that face academia. There are more, and more, and more papers being published every year, and peer-review is falling apart. Automated tools that are meant to help detect misconduct are woefully insufficient. The current models of publishing research articles are exploitable. And there is always the uncomfortable question lingering in the background - how much "high quality" research output is genuinely high quality? Meanwhile, no one really has the time to figure out how to fix these issues while under the pressures of Publish or Perish.

I repeat here that this article is not accusing anyone of any intentional plagiarism or misconduct - everyone makes mistakes sometimes, and that's okay. However, a high-quality repository of academic content should have systems in place to catch mistakes and help rectify them. Over time, the problem has grown too large for the IEEE to retrospectively rectify, and realistically that's probably okay. But does this reflect the academic literature that we want to build and share, or is it just the academic literature that we deserve?

Acknowledgements
The initial instance of template text found was reported by Hammond Pearce, who then brought it to the attention of our research group, which kicked off this whole prosaic journey. This article is informed by discussions between members of the Embedded Systems Research Group, part of the Department of Electrical, Computer, and Software Engineering at the University of Auckland, New Zealand. The IEEE conference template says that "The preferred spelling of the word 'acknowledgment' in America is without an 'e' after the 'g'", but this article isn't being written in America, and the author prefers the 'e' to be in there.