Microsoft President Brad Smith Calls for Pre-Emptive Tech Censorship Measures in Response to Christchurch Terror Attacks

In an official blog post, Microsoft President Brad Smith called on his peers in the tech industry to take action and develop ways to prevent the future dissemination of controversial material, in reference to the rapid spread of the Christchurch terror attack livestream video and the panicked, largely failed attempts of many companies to contain it:

“Words alone are not enough. Across the tech sector, we need to do more. Especially for those of us who operate social networks or digital communications tools or platforms that were used to amplify the violence, it’s clear that we need to learn from and take new action based on what happened in Christchurch.

Across Microsoft, we have reviewed how our various services were used by a relatively small number of individuals to try to spread the video from Christchurch. While our employees and technology tools worked quickly to stop this distribution, we have identified improvements we can make and are moving promptly to implement them. This includes the accelerated and broadened implementation of existing technology tools to identify and classify extremist violent content and changes for the process that enables our users to flag such content. We are exploring additional steps we can take as a company and will move quickly to add to these improvements.”

In the weeks since the Christchurch terror attack took place, institutions around the world have scrambled to censor and hide the terrorist’s manifesto and the video of the attack, which the terrorist livestreamed from his Facebook account to the general public. New Zealand’s Chief Censor deemed the manifesto ‘objectionable’ (a classification that, under New Zealand law, allows the government to ban a piece of material), while sharing or possessing the video has become a criminal offense within the country, with one 18-year-old facing up to 14 years in prison for sharing it. Journalists on Twitter have noted that their coverage of the event has been found in violation of Twitter’s terms of service, even though they were not sharing the video itself, only reporting on news and developments in the story.

Smith then opines that, even if a country’s legal system does not outright prohibit the spread of a particular piece of content, tech companies must apply higher standards according to an arbitrary and ill-defined “decency requirement”:

“While these are vital steps, one of the lessons from New Zealand is that the industry rightly will be judged not only by what it prevented, but by what it failed to stop. And from this perspective, there is clearly much more that needs to be done. As Prime Minister Jacinda Ardern noted last week, gone are the days when tech companies can think of their platforms akin to a postal service without regard to the responsibilities embraced by other content publishers. Even if the law in some countries gives digital platforms an exemption from decency requirements, the public rightly expects tech companies to apply a higher standard.”

Smith then puts forth three proposals that he believes will aid tech companies in preventing ‘objectionable content’ from spreading on their platforms. The first proposal focuses on prevention, including machine-driven detection and removal of content and blocking users from accessing it (a rough sketch of the kind of hash matching Smith cites follows the quote below):

“First, we need to focus on prevention. We need to take new steps to stop perpetrators from posting and sharing acts of violence against innocent people. New and more powerful technology tools can contribute even more than they have already. We must work across the industry to continue advancing existing technologies, like PhotoDNA, that identify and apply digital hashes (a kind of digital identifier) to known violent content. We must also continue to improve upon newer, AI-based technologies that can detect whether brand-new content may contain violence. These technologies can enable us more granularly to improve the ability to remove violent video content. For example, while robust hashing technologies allow automated tools to detect additional copies already flagged as violent, we need to further advance technology to better identify and catch edited versions of the same video.

We should also pursue new steps beyond the posting of content. For example, we should explore browser-based solutions – building on ideas like safe search – to block the accessing of such content at the point when people attempt to view and download it.

We should pursue all these steps with a community spirit that will share our learning and technology across the industry through open source and other collaborative mechanisms. This is the only way for the tech sector as a whole to do what will be required to be more effective.

We also should recognize that technology cannot solve this problem by itself. We need to consider and discuss additional controls or other measures that human beings working at tech companies should apply when it comes to the posting of this type of violent material. There are legal responsibilities that need to be discussed as well. It’s a complicated topic with important sensitivities in some parts of the tech sector. But it’s an issue whose importance can no longer be avoided.”
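Smith does not detail how hash-based matching actually works, and PhotoDNA itself is a proprietary Microsoft technology, so the following is only a loose, minimal sketch of the general idea using an open perceptual-hashing scheme (dHash) in Python. The function names and the distance threshold here are illustrative assumptions, not PhotoDNA’s actual algorithm or API:

```python
# Minimal illustration of hash-based matching of known flagged images.
# NOTE: This is NOT PhotoDNA (which is proprietary); it uses a simple open
# "difference hash" (dHash) to show the general hash-and-compare idea.
from PIL import Image  # pip install Pillow


def dhash(image_path: str, hash_size: int = 8) -> int:
    """Compute a 64-bit perceptual hash from horizontal brightness gradients."""
    # Grayscale, then shrink to (hash_size + 1) x hash_size pixels.
    img = Image.open(image_path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Count the bits on which two hashes differ."""
    return bin(a ^ b).count("1")


def is_flagged(image_path: str, known_hashes: set, threshold: int = 10) -> bool:
    """Check an upload against a database of hashes of known flagged content.

    A small Hamming-distance threshold (arbitrary here) lets lightly edited
    or re-encoded copies still match, rather than requiring exact equality.
    """
    candidate = dhash(image_path)
    return any(hamming_distance(candidate, known) <= threshold
               for known in known_hashes)
```

The key design point is that perceptual hashes are compared by distance rather than exact equality, which is what lets such systems catch re-encoded or lightly modified copies; this is presumably the property Smith alludes to when he says the industry still needs better ways to “identify and catch edited versions of the same video” (video matching is considerably harder than the still-image case sketched here).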

The second proposal involves the creation of a central ‘command center’ that the tech sector would use in the event of another ‘major incident’ to coordinate the removal of content and prevent its spread before it reaches the public:

“Second, we need to respond more effectively to moments of crisis. Even with better progress, we cannot afford to assume that there will never be another tragedy. The tech sector should consider creating a “major event” protocol, in which technology companies would work from a joint virtual command center during a major incident. This would enable all of us to share information more quickly and directly, helping each platform and service to move more proactively, while simultaneously ensuring that we avoid restricting communications that are in the public interest, such as reporting from news organizations.

We should also discuss whether to define a category of agreed “confirmed events,” upon which tech companies would jointly institute additional processes to detect and prevent sharing of these types of extremist violent content. This would better enable efforts to identify and stop this content before it spreads too broadly.”

Smith then turns his attention to the ‘online environment’ in general, condemning its ‘toxicity’ (a term that is often ill-defined and broadly applied, given its inherently subjective nature) and calling on users to focus on improving online discourse:

“Finally, we should work to foster a healthier online environment more broadly. As many have noted, while much of the focus in recent days rightly has been on the use of digital tools to amplify this violence, the language of hate has existed for decades and even centuries. Nonetheless, digital discourse is sometimes increasingly toxic. There are too many days when online commentary brings out the worst in people. While there’s obviously a big leap from hateful speech to an armed attack, it doesn’t help when online interaction normalizes in cyberspace standards of behavior that almost all of us would consider unacceptable in the real world.

Working on digital civility has been a passion for many employees at Microsoft, who have recognized that the online world inevitably reflects the best and worst of what people learn offline. In many ways, anonymity on the internet can free people to speak and behave in ways they never would in person. This is why we believe it’s important to continue to promote four tenets to live by when engaging online. Namely, we all need to treat others with respect and dignity, respect each other’s differences, pause before replying and stand up for ourselves and for others. This too is an area on which we can build further.”

Smith’s blog post, though a seemingly well-intentioned response issued in the wake of a horrifying and confusing event, appears to be an uneasy justification for further censorship and control over online discussions. At a time when public trust in big tech is at dismal levels, many users fear that allowing these companies to dictate what is acceptable may have disastrous consequences for freedom of speech.

As the blog post was published only recently, it remains to be seen whether any of Smith’s proposals will gain a foothold in the tech industry, and what effect they would have on public internet use if adopted.

What do you make of Smith’s proposals? Do you think other tech companies should adopt them, or do you think this will lead to more censorship on the internet?
