Malaysia grapples with AI legal grey zone as deepfake porn blackmail targets lawmakers

In 2024, Malaysia passed the Online Safety Act, which aims to strengthen online safety by focusing on deepfakes, financial scams and cyberbullying.

PHOTO: BERNAMA

  • Malaysian lawmakers were blackmailed with AI-generated sexual content, exposing legal gaps and raising human rights concerns. Scammers demanded US$100,000 from targeted MPs and senators.
  • Experts warn current laws are inadequate for AI-generated content, suggesting that updates to existing laws and clearer recourse for victims are needed.
  • Pending an AI Bill due by mid-2026, a watchdog says self-regulation and user awareness are critical, with a focus on platform accountability for rapid takedowns.

Malaysian authorities are facing renewed calls to criminalise sexually explicit content generated using artificial intelligence (AI) after several lawmakers were blackmailed with such clips within a span of five days.

Scammers demanding thousands of dollars through threats to release these “deepfake porn” clips have exposed glaring gaps in Malaysia’s legal framework, and raised significant human rights concerns that threaten public trust in the country’s institutions.

As the authorities ponder how to deal with such cases, watchdogs and civil society have cautioned against treating laws as a silver bullet.

Ms Mediha Mahmood from the Communications and Multimedia Content Forum (CMCF) said the incident points to a wider issue.

“When lawmakers are intimidated with fake content, the real target is public trust, so it becomes a national concern,” the chief executive of the content watchdog told The Straits Times.

CMCF, also called the Content Forum, is a self-regulatory body of content industry players under the government internet regulator, the Malaysian Communications and Multimedia Commission (MCMC).

Since Sept 12, 10 MPs and senators, including Communications Minister Fahmi Fadzil and former economy minister Rafizi Ramli, have been targeted by the scam. They said scammers threatened to release deepfake videos depicting them in sex acts unless US$100,000 (S$128,500) was paid. 

It was not the first time public figures had been hit with deepfakes, but it was the first major case involving sexual material.

The authorities have previously had to warn the public against falling for investment scams that spoofed endorsements from Prime Minister Anwar Ibrahim and popular singer Siti Nurhaliza. Former police chief Acryl Sani Abdullah Sani was also targeted in an alleged disinformation campaign using his likeness.

In April, 38 students in Johor – some as young as 12 – were identified as victims of AI-generated obscene images. A 16-year-old student was charged in court later that month after 29 police reports were lodged against him.

The core problem, experts said, is a legal system struggling to keep pace with technology.

While existing laws such as the Penal Code and Communications and Multimedia Act (CMA) can technically be applied, they are not designed to handle synthetic media, or media produced by generative AI.

Experts suggested that Section 292 of the Penal Code, which covers obscene materials, and Section 233 of the CMA, which addresses improper use of network facilities to transmit obscene content, are ill-suited for the nuances of AI-generated content.

Ms Melissa Lim, an AI legal research fellow at Sinar Project, a Malaysian civic tech group promoting transparency, said: “The law is there, but the context of crimes committed using AI does not fit exactly into the definition of the crimes, especially if there are no tangible damages incurred from the abuse of AI.”

The authorities are taking steps to address these legislative gaps. In 2024, Malaysia passed the Online Safety Act, which aims to strengthen online safety by focusing on deepfakes, financial scams and cyberbullying. However, it has yet to be implemented.

Additionally, Digital Minister Gobind Singh Deo has signalled plans to table a dedicated AI Bill by mid-2026.

Pending such laws, the police are investigating the deepfake blackmail case, while the MCMC said it has removed over 40,000 AI-created disinformation posts from social media in the past three years. In 2024 alone, police noted that more than 400 cases of fraud involving deepfakes were reported, with total losses amounting to RM2.27 million (S$695,000).

Left unchecked, such deepfake videos pose a broader threat to democracy and public trust in institutions, a danger underscored by lawmakers themselves now facing blackmail.

Ms Mediha said: “When deepfakes are used to intimidate lawmakers, the aim is not just to smear one person, but to sow doubt and make people lose confidence in the system.”

In tackling the deepfake threat, however, legal experts cautioned against creating a standalone law to regulate AI tools themselves.

Ms Lim said: “The tool for making harmful content need not be criminalised. Porn, for example, is illegal, regardless of whether it is filmed using a camera, drawn or generated with AI. 

“What matters is whether the content itself is a crime.”

Instead, Ms Lim advocated updating existing laws and providing clearer recourse for victims, even when no direct harm occurs. She pointed to Denmark, which has proposed changes to its copyright law in response to deepfakes to ensure that citizens have the right to their own body, facial features and voice. The proposal is in a public consultation phase; it may be finalised before Christmas and is expected to take effect in January 2026.

Ms Wathshlah Naidu, executive director of advocacy group Centre for Independent Journalism, hopes the government will bear in mind that any new AI legislation must be carefully crafted to avoid stifling free expression. Legislation must “define harm narrowly, include exemptions for satire, parody, research, media, critical speech, and ensure judicial oversight”, she said, highlighting the importance of protecting human rights. 

This is where self-regulation and user awareness become critical. Ms Mediha said CMCF has, among other things, revised its Content Code to cover AI-generated material, introducing faster takedown procedures, clearer labelling requirements and a new ethical standard.

She also emphasised the role of platforms as “frontliners” by prioritising user protection through rapid takedowns, transparent labelling and improved detection tools.

“The law moves at the speed of Parliament. Virality moves at the speed of a share button,” Ms Mediha said.
