EU opens investigation into AI chatbot Grok over explicit imagery, lawmaker says
Photo: A billboard urging the banning of social media platform X and AI chatbot Grok, in London on Jan 14. (Reuters)
BRUSSELS/LONDON - The European Commission has launched an investigation into billionaire Elon Musk’s artificial intelligence (AI) chatbot Grok over the production of explicit imagery, Ms Regina Doherty, a member of the European Parliament representing Ireland, said in a statement on Jan 26.
The investigation will assess whether social media platform X has complied with its obligations under European Union digital legislation, including requirements relating to risk mitigation, content governance and the protection of fundamental rights, the lawmaker said.
The investigation risks antagonising the administration of US President Donald Trump amid an EU crackdown on Big Tech that has triggered criticism and even the threat of tariffs from the United States.
A commission spokesperson did not immediately respond when asked to confirm if an investigation had been opened.
X did not immediately respond to an e-mailed request for comment on Jan 26.
“This case raises very serious questions about whether platforms are meeting their legal obligations to assess risks properly and to prevent illegal and harmful content from spreading,” Ms Doherty said in an e-mailed statement.
The commission said earlier in January that the AI-generated images of undressed women and children being shared on X were unlawful and appalling, joining condemnation across the world.
xAI, the AI company owned by Mr Musk, said in mid-January it had implemented tweaks to prevent the Grok account “from allowing the editing of images of real people in revealing clothing such as bikinis”.
xAI also said at the time that it had blocked users, based on their location, from generating images of people in revealing clothing in “jurisdictions where it’s illegal”.
It did not identify those jurisdictions.
Ms Doherty said the images had exposed wider weaknesses in how emerging AI technologies are regulated and enforced.
“The European Union has clear rules to protect people online. Those rules must mean something in practice, especially when powerful technologies are deployed at scale. No company operating in the EU is above the law,” she added.
Britain’s media regulator Ofcom has launched its own separate investigation into the matter.