Microsoft probing if DeepSeek-linked group improperly obtained OpenAI data
Microsoft and OpenAI are investigating whether data output from OpenAI’s technology was obtained in an unauthorised manner by a group linked to Chinese artificial intelligence start-up DeepSeek, according to people familiar with the matter.
Microsoft’s security researchers in autumn observed individuals they believe may be linked to DeepSeek exfiltrating a large amount of data using the OpenAI application programming interface, or API, said the people, who asked not to be identified because the matter is confidential.
Software developers can pay for a licence to use the API to integrate OpenAI’s proprietary AI models into their own applications.
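For context, such licensed access typically takes the form of authenticated, per-token-billed API calls. The snippet below is a minimal illustrative sketch using the OpenAI Python SDK; the API key, model name and prompt are placeholder assumptions, not details drawn from this report.

```python
# Minimal sketch of how a licensed developer calls the OpenAI API.
# The model name and prompt are illustrative placeholders only.
from openai import OpenAI

client = OpenAI(api_key="YOUR_API_KEY")  # key issued with a paid developer account

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any model the account is licensed to use
    messages=[{"role": "user", "content": "Summarise this article in one sentence."}],
)

print(response.choices[0].message.content)  # the model's generated output
```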
Microsoft, an OpenAI technology partner and its largest investor, notified OpenAI of the activity, the people said.
Such activity could violate OpenAI’s terms of service, or could indicate that the group acted to circumvent OpenAI’s restrictions on how much data users can obtain, the people said.
DeepSeek earlier in January released a new open-source AI model called R1 that can mimic the way humans reason, upending a market dominated by OpenAI and US rivals such as Google and Meta Platforms.
The Chinese upstart said R1 rivalled or outperformed leading US developers’ products on a range of industry benchmarks, including for mathematical tasks and general knowledge – and was built for a fraction of the cost.
The potential threat to the US firms’ edge in the industry sent technology stocks tied to AI, including Microsoft, Nvidia, Oracle and Google parent Alphabet, tumbling on Jan 27, erasing a total of almost US$1 trillion (S$1.35 trillion) in market value.
OpenAI did not respond to a request for comment, and Microsoft declined to comment. DeepSeek and hedge fund High-Flyer, where DeepSeek was started, did not immediately respond to requests for comment via email.
Mr David Sacks, US President Donald Trump’s artificial intelligence czar, said on Jan 28 there is “substantial evidence” that DeepSeek leaned on the output of OpenAI’s models to help develop its own technology.
In an interview with Fox News, Mr Sacks described a technique called distillation, whereby one AI model uses the outputs of another for training purposes to develop similar capabilities.
“There’s substantial evidence that what DeepSeek did here is they distilled knowledge out of OpenAI models and I don’t think OpenAI is very happy about this,” Mr Sacks said, without detailing the evidence.
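In general terms, distillation involves collecting prompt-and-response pairs from a stronger “teacher” model and using them as supervised training data for a smaller “student” model. The sketch below illustrates only that general idea; the file name, prompts and teacher model are hypothetical placeholders, and it does not describe how DeepSeek built R1.

```python
# Illustrative-only sketch of the distillation idea described above:
# gather outputs from a "teacher" model, then fine-tune a "student" on them.
# File names, prompts and the teacher model are hypothetical placeholders.
import json
from openai import OpenAI

client = OpenAI(api_key="YOUR_API_KEY")

prompts = ["Explain gravity simply.", "What is 17 * 24?"]  # placeholder prompts

with open("distillation_data.jsonl", "w") as f:
    for prompt in prompts:
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # the "teacher" model in this sketch
            messages=[{"role": "user", "content": prompt}],
        )
        answer = reply.choices[0].message.content
        # Each prompt/answer pair becomes one supervised training example.
        f.write(json.dumps({"prompt": prompt, "completion": answer}) + "\n")

# A separate "student" model would then be fine-tuned on distillation_data.jsonl,
# learning to imitate the teacher's answers.
```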
In a statement responding to Mr Sacks, OpenAI did not directly address his remarks about DeepSeek.
“We know PRC based companies – and others – are constantly trying to distil the models of leading US AI companies,” an OpenAI spokesperson said in the statement, referring to the People’s Republic of China.
“As the leading builder of AI, we engage in countermeasures to protect our intellectual property, including a careful process for which frontier capabilities to include in released models, and believe as we go forward that it is critically important that we are working closely with the US government to best protect the most capable models from efforts by adversaries and competitors to take US technology.” BLOOMBERG