Making Internet firms the censor

Miniature Facebook banners are seen on snacks prepared for the visit by Facebook's Chief Operating Officer in Paris, France. PHOTO: REUTERS

Prime Minister Theresa May's political fortunes may be waning in Britain, but her push to make Internet companies police their users' speech is alive and well.

In the aftermath of the recent London attacks, Mrs May called platforms such as Google and Facebook breeding grounds for terrorism. She has demanded that they build tools to identify and remove extremist content. Leaders of the Group of Seven countries recently suggested the same thing. Germany wants to fine platforms up to €50 million (S$79 million) if they don't quickly take down illegal content. And a European Union draft law would make YouTube and other video hosts responsible for ensuring that users never share violent speech.

The fears and frustrations behind these proposals are understandable. But making private companies curtail user expression on important public forums - which is what platforms such as Twitter and Facebook have become - is dangerous.

The proposed laws would harm free expression and information access for journalists, political dissidents and ordinary users. Policymakers should be candid about these consequences and not pretend that Silicon Valley has silver-bullet technology that can purge the Internet of extremist content without taking down important legal speech with it.

Platforms in Europe operate notice-and-takedown systems for content that violates the law. Most also prohibit other legal but unwelcome material, such as pornography and bullying, under voluntary community guidelines. Sometimes platforms remove too little. More often, research suggests, they remove too much - silencing contested speech rather than risking liability. Accusers exploit this predictable behaviour to target expression they don't like - as the Ecuadorean government has reportedly done with political criticism, the Church of Scientology with religious disputes and disgraced researchers with scholarship debunking their work. Germany's proposed law increases incentives to err on the side of removal: Any platform that leaves criminal content up for more than 24 hours after being notified about it risks fines as large as €50 million.


European politicians tout the proposed laws as curbs on the power of big American Internet companies. But the reality is just the opposite. These laws give private companies a role - deciding what information the public can see and share - previously held by national courts and legislators. That is a meaningful loss of national sovereignty and democratic control.

Moving this responsibility from state to private actors also eliminates key legal protections for Internet users. Private-platform owners are not constrained by the First Amendment or human rights law the way the police or courts are. Users most likely have no remedy if companies are heavy-handed or sloppy in erasing speech. Governments that outsource speech control to private companies can effectively achieve censorship by proxy.

Proposed laws making platforms go beyond notice-and-takedown to proactively police users' speech would be even worse than Germany's draconian takedown proposals. About 300 hours of video are uploaded to YouTube every minute, so reviewing it all is not humanly possible. Courts, including the European Union Court of Justice and the European Court of Human Rights, have recognised that users' speech and privacy rights will suffer if platforms must vet every word they post. And studies suggest Internet users self-censor when they think they are being surveilled. Researchers found journalists afraid to write about terrorism, Wikipedia users reluctant to learn about Al-Qaeda and Google users avoiding searching for sensitive terms in the wake of the Edward Snowden revelations.

Some politicians say the solution is to build filters - software that automatically identifies and suppresses illegal material. But no responsible technologist believes filters can tell what speech is legal.

Skilled lawyers and judges struggle to make that kind of call. What real-world filters can do, at best, is find duplicates of particular text, pictures or videos - but only after human judgment has determined that they are illegal. Filters that can find child sexual abuse images work relatively well because those images are illegal in every instance.
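To make that concrete, here is a minimal sketch, in Python, of the kind of duplicate-matching a filter can actually do. It is purely illustrative, not any platform's real system: the blocklist entry is a placeholder hash, and production tools rely on perceptual hashing rather than exact cryptographic hashes. Even so, it shows the article's point - the filter can only recognise copies of material a human has already judged, and knows nothing about the context in which a copy appears.

    # Illustrative sketch only: a simplified duplicate-matching filter.
    # Assumes a blocklist of hashes of files that human reviewers have
    # already judged to be illegal; the entry below is a placeholder.
    import hashlib

    KNOWN_ILLEGAL_HASHES = {
        "0000000000000000000000000000000000000000000000000000000000000000",
    }

    def file_hash(path: str) -> str:
        """Return the SHA-256 hex digest of a file's contents."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    def is_known_duplicate(path: str) -> bool:
        """True if this upload is a byte-for-byte copy of flagged material.
        The check is context-blind: the same footage uploaded for journalism
        or advocacy would be flagged just the same."""
        return file_hash(path) in KNOWN_ILLEGAL_HASHES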

But violent and extremist material is different. Almost any such image or video is legal in some context. Filters can't tell the difference between footage used for terrorist recruitment and the same footage used for journalism, political advocacy or human rights efforts. When filters fail to make those distinctions, they will take down information and discussion on topics of vital public importance. Risk-averse companies erring on the side of over-removal for this kind of speech will disproportionately silence Arabic speakers and Islamic religious material.

As a lawyer with long experience handling takedowns from Google Web searches, I believe there are responsible ways to remove illegal content from platforms. A good start is to have courts decide what violates the law - not machines and not company employees operating under the threat of huge fines. Accused speakers should have opportunities to defend their speech, and the public should be able to find out when content disappears from the Internet.

If politicians think that eroding online expression rights will make us safer, they should explain how. For all the rhetoric, we know very little about whether curbing online speech prevents real-world violence. What little research we have suggests that driving violent or hateful material into dark corners of the Web may make matters worse.

Outraged demands for "platform responsibility" are a muscular-sounding response to terrorism that shifts public attention away from governments' own duties. But we don't want an Internet where private platforms police every word at the behest of the state. Such power over public discourse would be Orwellian in the hands of any government, be it Mrs May's, Mr Donald Trump's or Mr Vladimir Putin's. NYTIMES

•Daphne Keller is the director of Intermediary Liability at Stanford Law School's Centre for Internet and Society, and previously was associate general counsel at Google.


A version of this article appeared in the print edition of The Straits Times on June 29, 2017, with the headline Making Internet firms the censor.