ByteDance researcher wrongly added to AI safety group chat, says US standards body

ByteDance's TikTok is at the centre of a debate over whether it has opened a backdoor for the Chinese government to spy on or manipulate Americans.

PHOTO: REUTERS

A researcher from TikTok’s Chinese owner ByteDance was wrongly added to a group chat for American artificial intelligence (AI) safety experts last week, the US National Institute of Standards and Technology (Nist) said on March 18.

The researcher was added to a Slack instance for discussions between members of Nist’s US Artificial Intelligence Safety Institute, according to a person familiar with the matter.

In an e-mail, Nist said it added the researcher in the understanding that she was a volunteer.

“Once Nist became aware that the individual was an employee of ByteDance, they were swiftly removed for violating the consortium’s code of conduct on misrepresentation,” the e-mail said.

The researcher, whose LinkedIn profile says she is based in California, did not return messages seeking comment.

ByteDance also did not respond to e-mails seeking comment.

The person familiar with the matter said the appearance of a ByteDance researcher raised eyebrows in the consortium because the company is not a member, and TikTok is at the centre of a national debate over whether the popular app has opened a backdoor for the Chinese government to spy on or manipulate Americans at scale.

Last week, the US House of Representatives passed a Bill to force ByteDance to divest itself of TikTok or face a nationwide ban; the ultimatum faces an uncertain path in the Senate.

The AI Safety Institute is intended to evaluate the risks of cutting-edge AI programs. Announced in 2023, the institute was set up under Nist and its founding members include hundreds of major American tech companies, universities, AI start-ups, nongovernmental organisations and others, including Reuters’ parent company Thomson Reuters.

Among other things, the consortium works to develop guidelines for the safe deployment of AI programs and to help AI researchers find and fix security vulnerabilities in their models.

Nist said the Slack instance for the consortium includes about 850 users. REUTERS