US scrutinises Chinese AI for ideological bias, memo shows


US officials have been testing Chinese artificial intelligence models, including Alibaba’s Qwen 3 and DeepSeek’s R1.

PHOTO: REUTERS

  • US officials are evaluating Chinese AI programmes for alignment with the Chinese Communist Party's ideology by scoring their responses to standardised questions.
  • Testing of models like Alibaba's Qwen 3 and DeepSeek's R1 reveals Chinese AI increasingly adopts Beijing's talking points, showing censorship signs.
  • The US may publicise these evaluations to highlight ideological bias in Chinese AI, amid concerns over AI creators tilting ideological viewpoints.


WASHINGTON – American officials have quietly been grading Chinese artificial intelligence (AI) programmes on their ability to mould their output to the Communist Party of China’s official line, according to a memo reviewed by Reuters.

US State Department and Commerce Department officials are working together on the effort, which operates by feeding the programmes standardised lists of questions in Chinese and in English and scoring their output, the memo showed.

The evaluations, which have not previously been reported, are another example of how the US and China are competing over the deployment of large language models, sometimes described as AI.

The integration of AI into daily life means that any ideological bias in these models could become widespread.

One State Department official said their evaluations could eventually be made public in a bid to raise the alarm over ideologically slanted AI tools being deployed by America’s chief geopolitical rival.

The State and Commerce departments did not immediately return messages seeking comment on the effort.

In an e-mail, Chinese embassy spokesman Liu Pengyu did not address the memo itself but noted that China was “rapidly building an AI governance system with distinct national characteristics” which balanced “development and security”.

Beijing makes no secret of policing Chinese models’ output to ensure they adhere to the one-party state’s “core socialist values”.

In practice, that means ensuring the models do not inadvertently criticise the government or stray too far into sensitive subjects like China’s 1989 crackdown on pro-democracy protests at Tiananmen Square, or the subjugation of its minority Uighur population.

The memo reviewed by Reuters shows that US officials have recently been testing models, including Alibaba’s Qwen 3 and DeepSeek’s R1, scoring them on whether they engaged with the questions and, when they did, how closely their answers aligned with Beijing’s talking points.

According to the memo, the testing showed that Chinese AI tools were significantly more likely to align their answers with Beijing’s talking points than their US counterparts, for example by backing China’s claims over disputed islands in the South China Sea.

DeepSeek’s model, the memo said, frequently used boilerplate language praising Beijing’s commitment to “stability and social harmony” when asked about sensitive topics such as Tiananmen Square.

The memo said each new iteration of Chinese models showed increased signs of censorship, suggesting that Chinese AI developers were increasingly focused on making sure their products toed Beijing’s line.

DeepSeek and Alibaba did not immediately return messages seeking comment.

The ability of AI models’ creators to tilt the ideological playing field of their chatbots has emerged as a key concern, and not just for Chinese AI models. When billionaire Elon Musk – who has frequently championed far-right causes – announced changes to his xAI chatbot, Grok, the model began endorsing Hitler and attacking Jews in conspiratorial and bigoted terms.

In a statement posted to X, Mr Musk’s social media site, on July 8, Grok said it was “actively working to remove the inappropriate posts”.

On July 9, X’s chief executive Linda Yaccarino said she would step down from her role.

No reason was given for the surprise departure. REUTERS