NEW YORK - Over 120,000 views of a video showing a boy being sexually assaulted. A recommendation engine suggesting that a user follow content related to exploited children. Users continually posting abusive material, delays in taking it down when it is detected, and friction with organisations that police it.
All since Mr Elon Musk declared that “removing child exploitation is priority #1” in a tweet in late November 2022.
Under Mr Musk’s ownership, Twitter’s head of safety, Ms Ella Irwin, said she had been moving rapidly to combat child sexual abuse material, which was prevalent on the site under the previous owners, as it is on most tech platforms.
“Twitter 2.0” will be different, the company promised.
But a review by The New York Times found that the imagery, commonly known as child pornography, persisted on the platform, including widely circulated material that authorities consider the easiest to detect and eliminate.
After Mr Musk took the reins in late October 2022, Twitter largely eliminated or lost staff experienced with the problem and failed to prevent the spread of abusive images previously identified by authorities, the review shows. Twitter also stopped paying for some detection software considered key to its efforts.
All the while, people on dark-web forums discuss how Twitter remains a platform where they can easily find the material while avoiding detection, according to transcripts of those forums from an anti-abuse group that monitors them.
“If you let sewer rats in,” said Ms Julie Inman Grant, Australia’s online safety commissioner, “you know that pestilence is going to come.”
In a Twitter audio chat with Ms Irwin in early December 2022, an independent researcher working with Twitter said illegal content had been publicly available on the platform for years and garnered millions of views.
But Ms Irwin and others at Twitter said their efforts under Mr Musk were paying off. During the first full month of the new ownership, the company suspended nearly 300,000 accounts for violating “child sexual exploitation” policies, 57 per cent more than usual, the company said.
The effort accelerated in January 2023, Twitter said, when it suspended 404,000 accounts.
“Our recent approach is more aggressive,” the company declared in a series of tweets on Wednesday, saying it had also cracked down on people who search for the exploitative material and had reduced successful searches by 99 per cent since December 2022.
Ms Irwin, in an interview, said the bulk of suspensions involved accounts that engaged with the material or were claiming to sell or distribute it, rather than those that posted it. She did not dispute that child sexual abuse content remains openly available on the platform, saying that “we absolutely know that we are still missing some things that we need to be able to detect better.”
She added that Twitter was hiring employees and deploying “new mechanisms” to fight the problem. “We have been working on this non-stop,” she said.
Wired, NBC and others have detailed Twitter’s ongoing struggles with child abuse imagery under Mr Musk. On Tuesday, Senator Richard Durbin, D-Ill., asked the Justice Department to review Twitter’s record in addressing the problem.
To assess the company’s claims of progress, the Times created an individual Twitter account and wrote an automated computer program that could scour the platform for the content without displaying the actual images, which are illegal to view. The material wasn’t difficult to find. In fact, Twitter helped promote it through its recommendation algorithm – a feature that suggests accounts to follow based on user activity.
Among the recommendations was an account that featured a profile picture of a shirtless boy. The child in the photo is a known victim of sexual abuse, according to the Canadian Center for Child Protection, which helped identify exploitative material on the platform for the Times by matching it against a database of previously identified imagery.
That same user followed other suspicious accounts, including one that had “liked” a video of boys sexually assaulting another boy. By Jan 19, the video, which had been on Twitter for more than a month, had got more than 122,000 views, nearly 300 retweets and more than 2,600 likes. Twitter later removed the video after the Canadian centre flagged it for the company.
In the first few hours of searching, the computer program found a number of images previously identified as abusive – and accounts offering to sell more. The Times flagged the posts without viewing any images, sending the web addresses to services run by Microsoft and the Canadian centre.
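The matching approach described above can be illustrated in miniature. The sketch below is a hypothetical stand-in, not the Times' actual tool: it uses a cryptographic hash for simplicity, whereas industry systems such as those run by Microsoft and the Canadian centre rely on perceptual hashes that survive resizing and re-encoding. The hash set and byte strings are invented for illustration.

```python
import hashlib

# Hypothetical database of hashes of previously identified abusive images.
# Real clearinghouses distribute such hash lists so platforms can match
# content without anyone viewing it; SHA-256 here is illustrative only.
KNOWN_HASHES = {
    hashlib.sha256(b"example-image-bytes").hexdigest(),
}

def flag_if_known(image_bytes: bytes) -> bool:
    """Return True if the media matches a known hash.

    The image is never rendered or displayed: only its hash digest
    is compared against the database of identified material.
    """
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_HASHES

# A crawler would hash candidate media and forward only the web
# addresses of matches to the relevant reporting services.
print(flag_if_known(b"example-image-bytes"))
print(flag_if_known(b"unrelated-bytes"))
```

This is why exact copies of "previously identified" imagery are considered the easiest material to detect: matching is a simple lookup, with no human review of the content required.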
One account in late December offered a discounted “Christmas pack” of photos and videos. That user tweeted a partly obscured image of a child who had been abused from about age eight through adolescence. Twitter took down the post five days later, but only after the Canadian centre sent the company repeated notices.
In all, the computer program found imagery of 10 victims appearing more than 150 times across multiple accounts, most recently on Thursday. The accompanying tweets often advertised child rape videos and included links to encrypted platforms.
Mr Alex Stamos, the director of the Stanford Internet Observatory and the former top security executive at Facebook, found the results alarming. “Considering the focus Musk has put on child safety, it is surprising they are not doing the basics,” he said.
Separately, to confirm the Times’ findings, the Canadian centre ran a test to determine how often one video series involving known victims appeared on Twitter. Analysts found 31 different videos shared by more than 40 accounts, some of which were retweeted and liked thousands of times. The videos depicted a young teenager who had been extorted online to engage in sexual acts with a prepubescent child over a period of months.
The centre also did a broader scan against the most explicit videos in its database. There were more than 260 hits, with more than 174,000 likes and 63,000 retweets.
“The volume we’re able to find with a minimal amount of effort is quite significant,” said Mr Lloyd Richardson, the technology director at the Canadian centre. “It shouldn’t be the job of external people to find this sort of content sitting on their system.”
In 2019, the Times reported that many tech companies had serious gaps in policing child exploitation on their platforms. This past December, Ms Inman Grant, the Australian online safety official, conducted an audit that found many of the same problems remained at a sampling of tech companies.
The Australian review did not include Twitter, but some of the platform’s difficulties are similar to those of other tech companies and predate Mr Musk’s arrival, according to multiple current and former employees.
Twitter, founded in 2006, started using a more comprehensive tool to scan for videos of child sexual abuse in the third quarter of 2022, they said, and the engineering team dedicated to finding illegal photos and videos was formed just 10 months earlier. In addition, the company’s trust and safety teams have been perennially understaffed, although the company continued expanding them even amid a broad hiring freeze that began in April 2022, four former employees said.
Over the years, the company did build internal tools to find and remove some images, and the National Center for Missing and Exploited Children often lauded the company for the thoroughness of its reports.
The platform in recent months has also experienced problems with its abuse reporting system, which allows users to notify the company when they encounter child exploitation material.
The Times used its research account to report multiple profiles that were claiming to sell or trade the content in December and January. Many of the accounts remained active and even appeared as recommendations to follow on the Times’ own account. The company said it would need more time to unravel why such recommendations would appear.
To find the material, Twitter relies on software created by an anti-trafficking organisation called Thorn. Twitter has not paid the organisation since Mr Musk took over, according to people familiar with the relationship, presumably part of his larger effort to cut costs.
Twitter has also stopped working with Thorn to improve the technology. The collaboration had industry-wide benefits because other companies use the software.
Ms Irwin declined to comment on Twitter’s business with specific vendors.
Twitter’s relationship with the National Center for Missing and Exploited Children has also suffered, according to people who work there.
Mr John Shehan, an executive at the centre, said he was worried about the “high level of turnover” at Twitter and where the company “stands in trust and safety and their commitment to identifying and removing child sexual abuse material from their platform”.
After the transition to Mr Musk’s ownership, Twitter initially reacted more slowly to the centre’s notifications of sexual abuse content, according to data from the centre, a delay of great importance to abuse survivors, who are re-victimised with every new post.
Twitter, like other social media sites, has a two-way relationship with the centre. The site notifies the centre (which can then notify law enforcement) when it is made aware of illegal content. And when the centre learns of illegal content on Twitter, it alerts the site so the images and accounts can be removed.
Toward the end of 2022, the company’s response time was more than double what it had been during the same period a year earlier under the prior ownership, even though the centre sent it fewer alerts. In December 2021, Twitter took an average of 1.6 days to respond to 98 notices; in December 2022, after Mr Musk took over the company, it took 3.5 days to respond to 55. By January 2023, it had greatly improved, taking 1.3 days to respond to 82 notices.
The Canadian centre, which serves the same function in that country, said it had seen delays as long as a week. In one instance, the Canadian centre detected a video on Jan 6 depicting the abuse of a naked girl, age eight to 10. The organisation said it sent out daily notices for about a week before Twitter removed the video.
In addition, Twitter and the US national centre seem to disagree about Twitter’s obligation to report accounts that claim to sell illegal material without directly posting it.
The company has not reported to the national centre the hundreds of thousands of accounts it has suspended because the rules require that they “have high confidence that the person is knowingly transmitting” the illegal imagery and those accounts did not meet that threshold, Ms Irwin said.
Mr Shehan of the national centre disputed that interpretation of the rules, noting that tech companies are also legally required to report users even if they only claim to sell or solicit the material. So far, the national centre’s data show, Twitter has made about 8,000 reports monthly, a small fraction of the accounts it has suspended.
Ms Inman Grant, the Australian regulator, said she had been unable to communicate with local representatives of the company because her agency’s contacts in Australia had quit or been fired since Mr Musk took over. She feared that the staff reductions could lead to more trafficking in exploitative imagery.
“These local contacts play a vital role in addressing time-sensitive matters,” said Ms Inman Grant, who was previously a safety executive at both Twitter and Microsoft.
Ms Irwin said the company continued to be in touch with the Australian agency, and more generally she expressed confidence that Twitter was “getting a lot better” while acknowledging the challenges ahead.
“In no way are we patting ourselves on the back and saying, ‘Man, we’ve got this nailed,’” Ms Irwin said.
Offenders continue to trade tips on dark-web forums about how to find the material on Twitter, according to posts found by the Canadian centre.
On Jan 12, 2023, one user described following hundreds of “legit” Twitter accounts that sold videos of young boys who were tricked into sending explicit recordings of themselves. Another user characterised Twitter as an easy venue for watching sexual abuse videos of all types.
“People share so much,” the user wrote. NYTIMES