Twitter exec says it’s moving fast on moderation as harmful content surges
The approach to safety reflects an acceleration of changes that had been planned since 2021.
PHOTO: AFP
SAN FRANCISCO – Mr Elon Musk’s Twitter is leaning heavily on automation to moderate content, doing away with certain manual reviews and favouring limits on distribution rather than removing speech outright, its new head of trust and safety told Reuters.
Twitter is also more aggressively restricting abuse-prone hashtags and search results in areas including child exploitation, regardless of potential impacts on “benign uses” of those terms, said Twitter vice-president of trust and safety product Ella Irwin.
“The biggest thing that’s changed is the team is fully empowered to move fast and be as aggressive as possible,” she said.
Her comments come as researchers report a surge in hate speech on the social media service. The company has faced pointed questions about its ability and willingness to moderate harmful and illegal content since Mr Musk slashed half of Twitter’s staff.
Advertisers, Twitter’s main revenue source, have fled the platform over concerns about brand safety.
On Friday, Mr Musk vowed “significant reinforcement of content moderation and protection of freedom of speech” during a meeting with French President Emmanuel Macron.
Ms Irwin said Mr Musk encouraged the team to worry less about how their actions would affect user growth or revenue, saying safety was the company’s top priority. “He emphasises that every single day, multiple times a day,” she said.
The approach to safety Ms Irwin described, at least in part, reflects an acceleration of changes that had been planned since 2021 around Twitter’s handling of hateful conduct and other policy violations, according to former employees familiar with that work.
One approach, captured in the industry mantra “freedom of speech, not freedom of reach”, entails leaving up certain tweets that violate the firm’s policies but barring them from appearing in places like the home timeline and search.
Twitter has long deployed such “visibility filtering” tools around misinformation and already incorporated them into its official hateful conduct policy before the Musk acquisition. The approach allows for more freewheeling speech while cutting down on the potential harms associated with viral abusive content.
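To make the mechanism concrete, here is a minimal, purely illustrative sketch of how such visibility filtering could work: a tweet carrying a policy label stays up, but is excluded from amplified surfaces like the home timeline and search. All names and labels below are assumptions for illustration, not Twitter’s actual code or API.

```python
# Hypothetical sketch of "visibility filtering": a policy-labelled tweet is
# left up on the author's profile but excluded from amplified surfaces such
# as the home timeline and search. Names are illustrative assumptions.
from dataclasses import dataclass, field

AMPLIFIED_SURFACES = {"home_timeline", "search", "trends"}

@dataclass
class Tweet:
    tweet_id: int
    text: str
    policy_labels: set = field(default_factory=set)  # e.g. {"hateful_conduct"}

def allowed_surfaces(tweet: Tweet) -> set:
    """Return the surfaces where the tweet may appear.

    A tweet with no violation labels is eligible everywhere; a labelled
    tweet remains visible on the author's profile and via direct link,
    but is filtered out of amplified surfaces ("freedom of reach").
    """
    all_surfaces = AMPLIFIED_SURFACES | {"author_profile", "direct_link"}
    if tweet.policy_labels:
        return all_surfaces - AMPLIFIED_SURFACES
    return all_surfaces

# A labelled tweet stays viewable but is never ranked into timelines.
flagged = Tweet(1, "...", policy_labels={"hateful_conduct"})
assert "home_timeline" not in allowed_surfaces(flagged)
assert "author_profile" in allowed_surfaces(flagged)
```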
The number of tweets containing hateful content on Twitter rose sharply in the week before Mr Musk tweeted on Nov 23 that impressions, or views, of hateful speech were declining, according to the Centre for Countering Digital Hate. It is one example of researchers pointing to the prevalence of such content even as Mr Musk touts a reduction in its visibility.
Tweets containing anti-Black slurs that week were triple the number seen in the month before Mr Musk took over, while tweets containing a gay slur were up 31 per cent, the researchers said.
‘More risks, move fast’
Ms Irwin, who joined the company in June and previously held safety roles at other companies, including Amazon.com and Google, pushed back on suggestions that Twitter did not have the resources or willingness to protect the platform.
She said layoffs did not significantly impact full-time employees or contractors working on what the company referred to as its “health” divisions, including in “critical areas” like child safety and content moderation.
Two sources familiar with the cuts said more than 50 per cent of the health engineering unit had been laid off. Ms Irwin did not respond to a request for comment on the assertion, but previously denied that the health team was severely impacted by layoffs.
She added that the number of people working on child safety has not changed since the acquisition, and that the product manager for the team is still there.
Ms Irwin said Twitter backfilled some positions for people who left the company, though she declined to provide specific figures for the extent of the turnover.
Ms Irwin said Mr Musk was focused on using automation more, arguing that the company had previously erred on the side of time- and labour-intensive human reviews of harmful content.
“He’s encouraged the team to take more risks, move fast, get the platform safe,” she said.
On child safety, for instance, Ms Irwin said Twitter shifted towards automatically taking down tweets reported by trusted figures with a track record of accurately flagging harmful posts.
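As a rough illustration of that idea, the sketch below routes a report either to automatic takedown or to a manual review queue based on the reporter’s historical accuracy. The thresholds, function names, and queue are assumptions chosen for the example, not details of Twitter’s real system.

```python
# Hypothetical sketch of a "trusted reporter" pipeline: reports from
# accounts with a strong track record of accurate flags trigger automatic
# takedown, while other reports join a manual review queue. Thresholds
# and names are illustrative assumptions.
from collections import deque

TRUST_THRESHOLD = 0.95   # assumed minimum historical accuracy
MIN_REPORT_HISTORY = 50  # assumed minimum number of past reports

review_queue: deque = deque()

def reporter_is_trusted(accurate_reports: int, total_reports: int) -> bool:
    """A reporter is trusted only with enough history and high accuracy."""
    if total_reports < MIN_REPORT_HISTORY:
        return False
    return accurate_reports / total_reports >= TRUST_THRESHOLD

def handle_report(tweet_id: int, accurate_reports: int, total_reports: int) -> str:
    if reporter_is_trusted(accurate_reports, total_reports):
        # Trusted reporters short-circuit human review entirely.
        return f"tweet {tweet_id} removed automatically"
    review_queue.append(tweet_id)
    return f"tweet {tweet_id} queued for manual review"

print(handle_report(101, accurate_reports=98, total_reports=100))  # auto-removed
print(handle_report(102, accurate_reports=3, total_reports=10))    # queued
```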
Twitter was also restricting hashtags and search results frequently associated with abuse, like those aimed at looking up teen pornography. Past concerns about the impact of such restrictions on permitted uses of the terms were gone, she said.
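A blanket restriction of that kind could look like the following sketch, which returns no results for any query matching a denylisted term, accepting that benign uses are blocked too. The denylist entries and function are placeholders, not real terms or Twitter code.

```python
# Hypothetical sketch of blanket search restriction: any query matching an
# abuse-prone denylist term returns nothing, regardless of benign intent.
# Denylist entries are placeholders.
DENYLISTED_TERMS = {"<abuse-prone term 1>", "<abuse-prone term 2>"}

def search(query: str, index: dict) -> list:
    """Return matching tweet ids, or nothing if the query is denylisted."""
    normalized = query.lower().strip()
    if any(term in normalized for term in DENYLISTED_TERMS):
        return []  # restricted outright, even for benign uses of the term
    return index.get(normalized, [])

print(search("<abuse-prone term 1> videos", index={}))  # [] -> restricted
```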
The use of “trusted reporters” was “something we’ve discussed in the past at Twitter, but there was some hesitancy and frankly just some delay”, Ms Irwin said. REUTERS

