After New York shooting video spreads, social platforms face questions

FBI agents look at bullet impacts in a Tops grocery store in Buffalo, New York, on May 15, 2022. PHOTO: AFP

NEW YORK (NYTIMES) - In March 2019, before a gunman murdered 51 people at two mosques in Christchurch, New Zealand, he went live on Facebook to broadcast his attack.

In October of that year, a man in Germany broadcast his own mass shooting live on Twitch, the Amazon-owned livestreaming site popular with gamers.

On Saturday (May 14), a gunman in Buffalo, New York, mounted a camera to his helmet and livestreamed on Twitch as he killed 10 people and injured three more at a grocery store in what authorities said was a racist attack.

In a manifesto posted online, Payton Gendron, the 18-year-old whom authorities identified as the shooter, wrote that he had been inspired by the Christchurch gunman and others.

Twitch said it reacted swiftly to take down the video of the Buffalo shooting, removing the stream within two minutes of the start of the violence. But two minutes was enough time for the video to be shared elsewhere.

By Sunday, recordings of the video had circulated widely on other social platforms, including Facebook and Twitter. An excerpt from the original video on a site called Streamable was viewed more than 3 million times before it was removed.

Mass shootings - and live broadcasts - raise questions about the role and responsibility of social media sites in allowing violent and hateful content to proliferate.

Many of the gunmen in these shootings have written that they developed their racist and antisemitic beliefs by trawling online forums like Reddit and 4chan, and were spurred on by watching other shooters stream their attacks live.

"It's a sad fact of the world that these kind of attacks are going to keep on happening, and the way that it works now is there's a social media aspect as well," said Ms Evelyn Douek, a senior research fellow at Columbia University's Knight First Amendment Institute who studies content moderation. "It's totally inevitable and foreseeable these days. It's just a matter of when."

Questions about the responsibilities of social media sites are part of a broader debate over how aggressively platforms should moderate their content. That debate has intensified since Mr Elon Musk, the chief executive of Tesla, agreed to buy Twitter and said he wants to make unfettered speech on the site a primary objective.

Social media and content moderation experts said Twitch's quick response was the best that could reasonably be expected. But the fact that the response did not prevent the video of the attack from being spread widely on other sites also raises the issue of whether the ability to livestream should be so easily accessible.

"I'm impressed that they got it down in two minutes," said Mr Micah Schaffer, a consultant who has led trust and safety decisions at Snapchat and YouTube. "But if the feeling is that even that's too much, then you really are at an impasse: Is it worth having this?"

In a statement, Ms Angela Hession, Twitch's vice-president of trust and safety, said the site's rapid action was a "very strong response time considering the challenges of live content moderation, and shows good progress."

Ms Hession said the site was working with the Global Internet Forum to Counter Terrorism, a non-profit coalition of social media sites, as well as other social platforms to prevent the spread of the video.

"In the end, we are all part of one internet, and we know by now that that content or behaviour rarely - if ever - will stay contained on one platform," she said.

There may be no easy answers. Platforms like Facebook, Twitch and Twitter have made strides in recent years, the experts said, in removing violent content and videos faster.

In the wake of the shooting in New Zealand, social platforms and countries around the world joined an initiative called the Christchurch Call to Action and agreed to work closely to combat terrorist and violent extremist content online.

One tool that social sites have used is a shared database of hashes - digital fingerprints of images and videos - that lets platforms flag known violent content and take it down quickly.
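
As a rough, hypothetical sketch of how that matching works - assuming a simple exact-match hash list, and noting that production systems such as GIFCT's shared database use perceptual hashes (for example PDQ or PhotoDNA) that still match re-encoded or cropped copies - the flow looks something like this:

```python
import hashlib

# Stand-in for an industry-shared database such as GIFCT's. SHA-256
# only catches byte-identical copies; real systems use perceptual
# hashes so that altered re-uploads still match.
KNOWN_BAD_HASHES: set[str] = set()

def fingerprint(media_bytes: bytes) -> str:
    """Return a hex digest identifying this exact file."""
    return hashlib.sha256(media_bytes).hexdigest()

def register_bad_content(media_bytes: bytes) -> None:
    """Add a confirmed violating file's hash to the shared database."""
    KNOWN_BAD_HASHES.add(fingerprint(media_bytes))

def should_block(media_bytes: bytes) -> bool:
    """Check an upload against the shared database before publishing."""
    return fingerprint(media_bytes) in KNOWN_BAD_HASHES

# Once one platform registers the original clip, identical re-uploads
# elsewhere can be flagged at upload time.
register_bad_content(b"original video bytes")
assert should_block(b"original video bytes")
assert not should_block(b"a re-encoded copy with different bytes")
```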

But in this case, Ms Douek said, Facebook seemed to have fallen short despite the hash system. Facebook posts that linked to the video posted on Streamable generated more than 43,000 interactions, according to CrowdTangle, a web analytics tool, and some posts were up for more than nine hours.

When users tried to flag the content as violating Facebook's rules, which do not permit content that "glorifies violence," they were told in some cases that the links did not run afoul of Facebook's policies, according to screenshots viewed by The New York Times.

Facebook has since started to remove posts with links to the video, and a Facebook spokesperson said the posts do violate the platform's rules. Asked why some users were notified that posts with links to the video did not violate its standards, the spokesperson did not have an answer.

Twitter had not removed many posts with links to the shooting video, and in several cases, the video had been uploaded directly to the platform.

A company spokesperson initially said the site might remove some instances of the video or add a sensitive content warning, then, after the Times asked for clarification, said Twitter would remove all videos related to the attack.

A spokesperson at Hopin, the video conferencing service that owns Streamable, said the platform was working to remove the video and delete the accounts of people who had uploaded it.

Removing violent content is "like trying to plug your fingers into leaks in a dam," Ms Douek said. "It's going to be fundamentally really difficult to find stuff, especially at the speed that this stuff spreads now."
