In South Korea, what you ask AI could land you in court

Lawyers noted that the authorities increasingly examine generative AI chat logs during mobile phone forensic analyses.

PHOTO ILLUSTRATION: UNSPLASH

SEOUL – South Korean police are increasingly examining suspects’ generative artificial intelligence use history as key evidence in establishing intent or motive, according to media reports citing legal experts on Feb 25.

In a recent case, investigators said they decided to pursue murder charges – rather than charges of death resulting from bodily injury – against a female suspect accused of serial killings in Gangbuk-gu, Seoul, after reviewing her chat logs with OpenAI’s ChatGPT.

The suspect, a woman in her 20s surnamed Kim, was charged with murder, aggravated bodily injury and violations of the Narcotics Control Act. She is accused of giving drug-laced hangover remedies to three men at a motel between December 2025 and Feb 9. The first victim survived with injuries; the two later victims died.

Police said the suspect had asked ChatGPT: “Would people die if they took sleeping pills with alcohol?”

Investigators viewed this as evidence suggesting criminal intent.

Legal experts say such investigative practices are becoming more common. Lawyers noted that the authorities increasingly examine generative AI chat logs during mobile phone forensic analyses.

One lawyer, who requested anonymity, said the shift has influenced defence strategies.

“When I take on a case now, I review my clients’ ChatGPT conversations with them,” he said.

Experts point to a fundamental difference between conventional browser searches and AI conversations. While both may be used to seek information, AI’s conversational structure can reveal a user’s internal reasoning, intentions and specific objectives more directly.

Dr Jeong Doo-won, a professor of forensic science at Sungkyunkwan University who has published research on generative AI forensics, explained that AI records may carry stronger evidentiary value.

“Web browser searches are largely keyword-based, but interactions with AI systems inevitably take the form of sentences,” Dr Jeong said.

“Because prompts are written as full statements, they can preserve a user’s actual intent more explicitly.”

However, experts also warn of legal and ethical concerns.

Conversations with generative AI often contain highly sensitive personal information, raising questions about privacy, proportionality and the permissible scope of digital evidence collection.

Dr Kim Myung-joo, head of the AI Safety Institute, cautioned against overly broad investigative use of AI records.

“If a crime occurs, the authorities could attempt to review a person’s entire AI conversation history and argue that criminal intent existed long before the incident,” Dr Kim told Yonhap News Agency.

He warned that indiscriminate seizures of AI chat histories could trigger future human rights disputes.

Dr Kim also addressed ongoing debates about AI accountability, especially if AI had instigated or aided the crime.

“The most difficult issue is responsibility,” he said. “For ordinary products, liability is governed by product liability laws. AI systems do not fit neatly into that framework. This is ultimately a challenge society must resolve.” THE KOREA HERALD/ASIA NEWS NETWORK
