AI chatbots are ‘clear danger’ to kids: Australian watchdog

Artificial intelligence-powered chatbots that encourage suicide or hold sexually explicit conversations pose a “clear and present danger” to children.

PHOTO: JACKIE MOLLOY/THE NEW YORK TIMES

Chatbots powered by artificial intelligence (AI) that encourage suicide or hold sexually explicit conversations pose a “clear and present danger” to children, Australia’s online safety regulator said, as it rolled out new rules governing the services.

The measures are the latest in a series of stringent digital restrictions in Australia, including a world-first social media ban for under-16s. That ban takes effect in December and covers a range of services including Facebook and Instagram, both owned by Meta Platforms, and YouTube.

In a statement on Sept 9, Australia’s eSafety Commissioner Julie Inman Grant said children are being exposed to “awful” age-inappropriate content at an increasingly young age. Ms Inman Grant said she had heard of 10-year-olds engaging sexually with the artificial companions, which are mostly unregulated.

Under new rules, sites that display or distribute pornography, or other “high-impact content”, will have to apply age-checking technology to stop children accessing the material. App stores must ensure that apps are appropriately rated, and that age-assurance measures are in place for any downloads. 

The rules apply to a range of online services including social media platforms, gaming sites and AI services. Breaches can be punished by penalties of as much as A$50 million (S$42 million).

“I do not want Australian children and young people serving as casualties of powerful technologies thrust onto the market without guardrails,” Ms Inman Grant said. BLOOMBERG
