Humanity must evolve together with AI, says OpenAI chief during Singapore visit

OpenAI CEO Sam Altman addressed concerns about the rapid roll-out of ChatGPT in a dialogue at Singapore Management University on June 13.

ST PHOTO: KEVIN LIM

SINGAPORE - Society will be deprived of the time to “co-evolve” with artificial intelligence (AI) if the technology is developed in secret, said the man behind ChatGPT, OpenAI chief executive Sam Altman.

Addressing concerns about the rapid roll-out of the chatbot, which has surpassed 100 million users and brought generative AI tools into the mainstream, he added that making the technology public is necessary to understand how AI can be used and guided to help society.

Mr Altman made these points in an hour-long dialogue at Singapore Management University (SMU) on Tuesday, hosted by AI Singapore and the Infocomm Media Development Authority. The dialogue is part of the 38-year-old’s world tour to discuss AI-related issues.

Speaking to at least 1,000 developers, tech professionals, students and other industry players, he said part of the reason for the tour is to escape the Silicon Valley bubble and understand global issues in AI.

The impact of ChatGPT around the world has surprised him. “The policy conversations and thinking about what happens next are unbelievably sophisticated and thoughtful. That has been a big update for us,” said Mr Altman, who also met Prime Minister Lee Hsien Loong and other leaders in a closed-door meeting to discuss AI-related matters.

Mr Altman’s tour comes amid growing concerns over generative AI systems, which produce text, images and other content when prompted. Those concerns have intensified since ChatGPT’s public launch in November 2022.

These systems are stepping stones in OpenAI’s mission to achieve AI that surpasses human intelligence, or artificial general intelligence (AGI), in a way that benefits humanity.

The firm, which is backed by billions of dollars from the likes of Microsoft, has said it is impossible to stop the development of AGI so developers need to figure out how to get it right.

But a growing pool of major tech players and experts have called for a pause on AI development, citing concerns that AI will spread disinformation, eliminate jobs, and even threaten humanity.

Mr Altman, along with executives from Google and Microsoft, was among the tech leaders who signed a letter urging the mitigation of AI’s risks.

In April, Italy became the first Western country to temporarily ban ChatGPT over privacy concerns.

In June, the Singapore authorities formed an alliance of tech companies, including Google and Meta, to establish principles and tackle ethical issues in AI. A report by the authorities here highlighted disinformation, the lack of accountability and the criminal use of AI among the key risks in the sector.

While acknowledging these concerns, Mr Altman said the harms caused by AI models remain manageable at their current scale. “We want to minimise them as much as possible, but we realised that no matter how much testing (we do)... people will use things in ways that we didn’t think about. That is the case with any new technology.”

Releasing ChatGPT through gradual upgrades lets society adapt, he said.

“You can’t learn everything in a lab,” said Mr Altman. “If you don’t deploy this along the way, and you just go build an AGI in secret in a lab and drop it on the world, society doesn’t get the time to co-evolve.

“The fact that the world is having this conversation now, well before we get into AGI, is really important. It wouldn’t have been very effective without us deploying it.”

The launch of ChatGPT kickstarted an AI race in the tech industry. Since then, Microsoft has incorporated OpenAI’s technology into its Bing search engine, and Google launched its chatbot Bard in March as a key competitor to ChatGPT.

Developers are also using ChatGPT and similar AI models as the basis for new digital tools such as customer service chatbots. The Singapore Government’s Pair AI bot, for instance, will soon assist public officers with writing and research.

AI should boost productivity and support decision-making, instead of being left to make choices on behalf of people, said Mr Altman, adding that he aims to improve ChatGPT by incorporating more languages and building its understanding of cultures from around the world.

“I was never excited about the direction of AI... (like a creature) that makes all the decisions. I was worried that was going to happen,” he said. “It now looks more like we can create tools that enhance our ability to do things and generate new knowledge.”

Mr Altman said there is still some way to go in improving AI models, given their current ability to spread disinformation convincingly.

Answers written by ChatGPT are frequently incorrect and sometimes completely made up. In one widely reported case, the chatbot fabricated a sexual harassment scandal and falsely accused a real law professor.

“We will make progress and it will get better,” said Mr Altman. “In a year or two from now, this won’t be something we talk about so much... We can be more focused on other problems.”

When asked how tech leaders should approach the regulation of AI amid discussions on how to rein in the technology, Mr Altman said he does not have a recommendation until it is clear what governments have planned.

He added that markets and governments will each set their own standards and the world will evolve towards what makes the most sense. “We’ll get to observe the regulatory marketplace and get to the right answer over time.”
