Sexual exploitation and bullying of kids online spur US outrage, bid to cut social media's shield from liability

Targets of online exploitation and harassment say they sometimes face indifference from platforms. PHOTO: REUTERS

WASHINGTON (BLOOMBERG) - The teen was in high school when his secret spilled onto the Internet, driving him to consider suicide: Classmates were viewing sexual images of him and a friend on Twitter, images that child pornographers had duped him into sending.

The videos remained visible for more than a week as the teen and his mother pleaded with Twitter to block the material, according to a lawsuit filed on the teen's behalf. The complaint alleges that the company acted only after the images drew 167,000 views and leering comments from Twitter users, with some remarking on how young the pictured victims appeared to be.

Targets of online exploitation and harassment say they sometimes face indifference from platforms that operate under protection of a decades-old US law that limits liability for content their users post online.

The law has drawn protests from Republicans and Democrats who allege it has been used by the platforms to mishandle political speech.

Now, child advocates and families say the provision has permitted companies to dodge responsibility for online harassment and even sexual exploitation. "Things like this happen all the time," said Fordham University law professor Olivier Sylvain.

The law, he said, "poses a real obstacle" for those pressing social media sites to remove material.

That has led privacy advocates, politicians and even parents of murdered children who have been trolled to urge US Congress to restrict or do away with the legal shield, known by its section number in the Communications Decency Act of 1996: Section 230.

The issue gained prominence during the 2020 elections when US President Donald Trump and other Republicans said, with scant evidence, that it let the websites suppress conservative speech - something the sites denied. Democrats, in turn, blame the provision for an unchecked flood of misinformation about candidates or Covid-19.

US President Joe Biden, while a candidate, called for repealing Section 230. More recently, his Commerce Secretary Gina Raimondo spoke of revising the law.

The companies say they are doing what they can to take down offensive content, a task made difficult by the huge volume of posts. In the first half of 2020, Twitter suspended 438,809 accounts for displaying material that sexually exploited children, according to a filing in the lawsuit brought on behalf of the teen who was harassed.

But, the company said, it's not possible to remove all offending content from the hundreds of millions of tweets daily from more than 190 million users.

Still, Twitter asked the court to dismiss the youth's case, saying in the March 10 filing that "there is no legal basis for holding Twitter liable". Under Section 230, "Internet platforms are immune from suit based on the failure to remove offensive third-party content", the company wrote.

The issue heated up when Twitter permanently banned Mr Trump for breaking its rules against glorifying violence after the Jan 6 assault on the US Capitol.

Twitter has permanently banned former US president Donald Trump. PHOTO: AFP

Less prominent in the debate have been private victims, such as the teen whose photos were still circulating even after his parents say they brought them to Twitter's attention.

"It was deeply shocking, and traumatising," said Mr Peter Gentala, an attorney for the National Centre on Sexual Exploitation. The non-profit group, along with two law firms and the teen's mother, filed the lawsuit on behalf of the boy, identified as John Doe in the filings.

"John Doe is in the difficult and sad position of looking for accountability because Twitter didn't follow its own policy, or even the law," Gentala said. "When you hear the largest companies in the world say, 'We can't be held responsible for this,' it's small wonder you see consensus building among lawmakers" to consider changes.

Twitter declined to comment on specifics of the lawsuit beyond its filings, and said in an email that it has "zero-tolerance for any material that features or promotes child sexual exploitation."

Proprietary tools

"We aggressively fight online child sexual abuse and have heavily invested in technology and tools to enforce our policy," the company said in the statement. Twitter says it uses "internal proprietary tools" to identify child sexual exploitation that have been used to close thousands of accounts.

Twitter also offers an online form for reporting child sexual material, and says that in most cases the consequence for violating its ban on such material is immediate and permanent suspension.

The company says it reports offensive posts to the National Centre for Missing & Exploited Children, a private non-profit group that works to help find missing children and reduce child sexual exploitation.

Similarly, Facebook says it uses technology to find content that exploits children and to detect potentially inappropriate interactions with children.

"In addition to zero-tolerance policies and cutting-edge safety technology, we make it easy for people to report potential harms, and we use technology to prioritise and to swiftly respond to these reports," Facebook said on a Web page describing its policies.

In Congress, lawmakers have jousted over Section 230's effect on political speech. Now, there are nearly two dozen legislative proposals to reform Section 230, according to a count by policy group Public Knowledge. One Senate Bill aims to hold social media companies accountable for enabling cyber stalking and targeted harassment.

"Section 230 has provided a 'Get Out of Jail Free' card to the largest platform companies even as their sites are used by scam artists, harassers and violent extremists to cause damage and injury," said Virginia Democratic Senator Mark Warner, a sponsor of the measure announced on Feb 5 called the Safe Tech Act.

The problem is widespread. Four in 10 US adults say they've experienced online harassment, according to the Pew Research Centre. Most Americans are critical of how social media companies address online harassment, the centre found.

Targets have included parents and families of the victims at Sandy Hook Elementary School, where 20 pupils and six teachers and staff members were slain by a gunman in 2012. Accusations quickly arose that the attack was a hoax and that Mr Lenny Pozner, father of the youngest victim, six-year-old Noah, had faked the event.

Mr Pozner became a target for conspiracy theorists. He said social media platforms ignored his requests to remove content, leading him to form the HONR Network that began by helping to organise campaigns by volunteers to flag false and defamatory posts, bringing the content to websites' attention for potential action.

Today, the group has direct relationships with major platforms such as Facebook and Google, and can bring cases of people illegally harassed online to operators' attention for possible removal of the content, Ms Alexandrea Merrell, executive chairman of the HONR board, said in an interview. Twitter hasn't joined the initiative, Ms Merrell said.

Twitter declined to comment about its participation. Still, the HONR website laments "the apathetic and inconsistent response" by the platforms to requests to remove harmful material. The companies need better procedures to deal with misuse of the platforms, the group says.

Mr Pozner learnt to use copyright claims on his family's images to force sites to remove content. He's also suing Mr Alex Jones, host of the conspiracy website InfoWars, who had derided the shooting as fake and possibly staged by the government. The lawsuit, filed in 2018, is set to come to trial this summer, Ms Merrell said.

"You can say that Sandy Hook never happened, but you can't say that Lenny Pozner is a crisis actor who took money to pretend his son was murdered at Sandy Hook," she said. "That is defamatory.'"

"Social media has been largely apathetic, simply because they can," Ms Merrell added.


Social media sites operate in an environment shaped by years of jurisprudence. PHOTO: AFP

"Courts have stretched Section 230's legal shield far beyond what its words, context, and purpose support," Professor Danielle Citron, of the University of Virginia Law School, told lawmakers at a 2019 hearing.

"It has led to the rise of social media companies like Facebook, Twitter, and Reddit. But it also has subsidised platforms that encourage online abuse," Prof Citron said. "It has left victims without leverage to insist that platforms take down destructive activity."

Section 230 protects platforms from lawsuits over hosting speech. At the same time, the so-called Good Samaritan part of Section 230 lets platforms weed out some speech, for instance to prevent children from being exposed to adult content, or to suppress abusive online behaviour.

It's still illegal for platforms to knowingly host illegal content, such as depictions of sexually explicit conduct involving a minor.

In the case of John Doe, the boy and his mother used Twitter's online reporting form to say it was hosting illegal child pornography. The material at issue was extracted when the teen responded to an online request that he thought came from a 16-year-old girl. Once the first images were sent, the blackmail began: demands for more images, including some involving a friend.

In automated messages ("Hello, thanks for reaching out," one began), Twitter said it didn't see a problem, according to the lawsuit. The material was blocked only when the family turned to a personal connection with a law enforcement official, who reached out to the company.

According to the lawsuit, the boy complained to Twitter on Jan 21 and the images weren't removed until "on or about" Jan 30. The platform is designed to help its users disseminate material quickly to large numbers of people, and its safeguards don't work to quickly block illegal child pornography, according to the lawsuit.

Twitter said it works to protect children.

"Our dedicated teams work to stay ahead of bad-faith actors and to ensure we're doing everything we can to remove content, facilitate investigations, and protect minors from harm - both on and offline," the company said in an e-mailed statement.
