ChatGPT advances are moving so fast regulators can’t keep up


Calls for governments to regulate artificial intelligence (AI) far predate OpenAI’s release of ChatGPT in late 2022. But officials haven’t come up with an approach to deal with AI’s potential to enable mass surveillance, exacerbate long-standing inequities or put humans in physical danger.

With those challenges looming, the sudden emergence of so-called generative AI – systems such as chatbots that create content on their own – is presenting a host of new ones.

“We need to regulate this, we need laws,” said Ms Janet Haven, executive director of Data & Society, a non-profit research organisation in New York. “The idea that tech companies get to build whatever they want and release it into the world and society scrambles to adjust and make way for that thing is backwards.”

The most developed proposal today for regulating AI comes from the European Union, which first proposed its Artificial Intelligence Act in 2021. The legislation, whose final form is still being debated, would put aggressive safeguards in place when the technology is used for "high-risk" purposes, including employment decisions or some law enforcement operations, while leaving more room for experimentation with lower-risk applications.

Some of the lawmakers behind the Act want to designate ChatGPT as high-risk, an idea others object to. As written, the Bill focuses on how technologies are used rather than on the specific technologies themselves.

In the United States, local, state and federal officials have all begun to take some steps towards developing rules. The Biden administration last fall presented its blueprint for an “AI Bill of Rights”, which addresses issues such as discrimination, privacy and the ability for users to opt out of automated systems.

But the guidelines are voluntary, and some experts say generative AI has already raised issues – including the potential for mass-produced disinformation – that the blueprint does not address. There is growing concern that chatbots will make it harder for people to trust anything they encounter online.

“This is part of the trajectory towards a lack of care for the truth,” said Dr Will McNeill, a professor at the University of Southampton in Britain who specialises in AI ethics.

A few public agencies in the US are trying to limit how generative AI tools are used before they take hold: The New York City Department of Education prohibits ChatGPT on its devices and networks. Some US financial institutions have also banned the tool.

For AI more broadly, companies have been rapidly adopting the technology in recent years with "no substantial increases" in risk mitigation, according to a 2022 survey by McKinsey & Co.

Without clear policies, the main thing holding back AI seems to be the limits the companies building the tech place on themselves.

“For me, the thing that will raise alarm bells is if organisations are driving towards commercialising without equally talking about how they are ensuring it is being done in a responsible way,” said Mr Steven Mills, chief AI ethics officer at Boston Consulting Group. “We’re still not sure yet what these technologies can do.”

Companies such as Google, Microsoft and OpenAI that are working on generative AI have been vocal about how seriously they take the ethical concerns about their work. But tech leaders have also cautioned against overly stringent rules, with US-based firms warning Western governments that an overreaction will give China, which is aggressively pursuing AI, a geopolitical advantage.

Former Google chief executive Eric Schmidt, now chair of the non-profit Special Competitive Studies Project, testified at a congressional hearing on March 8 that it is important that AI tools reflect American values and that the government should primarily "work on the edges where you have misuse".

For its part, China is already planning rules to limit generative AI and has stopped companies from using apps or websites that route to ChatGPT, according to local news reports.

Some experts believe these measures are an attempt to implement a censorship regime around the tools or to give Chinese competitors a leg-up.

But technologists may be pushing ahead too rapidly for officials to keep up. On March 14, OpenAI released a new version of the technology that powers ChatGPT, describing it as more accurate, creative and collaborative. BLOOMBERG
