If you think ChatGPT could make Google or knowledge jobs obsolete, wait till GPT-4 arrives

Like most nerds who read science fiction, I have spent a lot of time wondering how society will greet true artificial intelligence (AI), if and when it arrives. Will we panic? Start sucking up to our new robot overlords? Ignore it and go about our daily lives?

So it has been fascinating to watch the Twittersphere try to make sense of ChatGPT, a new cutting-edge AI chatbot that was opened for testing last week.

ChatGPT is, quite simply, the best AI chatbot ever released to the general public. It was built by OpenAI, the San Francisco AI company that is also responsible for tools such as GPT-3 and DALL-E 2, the breakthrough image generator that came out this year.

Like those tools, ChatGPT – the GPT stands for “generative pre-trained transformer” – landed with a splash. In five days, more than a million people signed up to test it, according to Mr Greg Brockman, OpenAI’s president. Hundreds of screenshots of ChatGPT conversations went viral on Twitter, and many of its early fans speak of it in astonished, grandiose terms, as if it were some mix of software and sorcery.

For most of the past decade, AI chatbots have been terrible – impressive only if you cherry-pick the bot’s best responses and throw out the rest. In recent years, a few AI tools have become good at doing narrow and well-defined tasks, such as writing marketing copy, but they still tend to flail when taken outside their comfort zones.

But ChatGPT feels different. Smarter. Weirder. More flexible. It can write jokes, working computer code and college-level essays. It can also guess at medical diagnoses, create text-based Harry Potter games and explain scientific concepts at multiple levels of difficulty.

The technology that powers ChatGPT is not, strictly speaking, new. It is based on what the company calls “GPT-3.5”, an upgraded version of GPT-3, an AI text generator that sparked a flurry of excitement when it came out in 2020. But although the existence of a highly capable linguistic superbrain might be old news to AI researchers, it is the first time such a powerful tool has been made available to the general public through a free, easy-to-use Web interface.

Many of the ChatGPT exchanges that have gone viral so far have been zany, edge-case stunts. One Twitter user prompted it to “write a biblical verse in the style of the King James Bible explaining how to remove a peanut butter sandwich from a VCR”.

Another asked it to “explain AI alignment, but write every sentence in the speaking style of a guy who won’t stop going on tangents to brag about how big the pumpkins he grew are”.

But users have also been finding more serious applications. For example, ChatGPT appears to be good at helping programmers spot and fix errors in their code. It also appears to be ominously good at answering the types of open-ended analytical questions that frequently appear on school assignments. Many educators have predicted that ChatGPT, and tools like it, will spell the end of homework and take-home exams.

Most AI chatbots are “stateless” – meaning that they treat every new request as a blank slate and are not programmed to remember or learn from previous conversations. But ChatGPT can remember what a user has told it before, in ways that could make it possible to create personalised therapy bots, for example.
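
(For readers who want a peek under the bonnet: that “memory” is less magical than it sounds. The sketch below is purely illustrative and is not OpenAI’s code; the model_reply function is a hypothetical stand-in for the model itself. It shows how a chat interface can appear to remember you simply by resending the whole conversation with every new request.)

    # Illustrative sketch only: the model itself can be stateless; the chat
    # interface supplies the "memory" by replaying the dialogue so far.
    # model_reply() is a hypothetical placeholder for a call to the model.

    def model_reply(prompt: str) -> str:
        # Placeholder: a real system would send the prompt to the model here.
        return "(model response to: " + prompt[-60:] + ")"

    class ChatSession:
        def __init__(self):
            self.history = []  # list of (speaker, text) turns

        def say(self, user_text: str) -> str:
            self.history.append(("User", user_text))
            # Concatenate every previous turn into one prompt so the model
            # "sees" the whole conversation each time it answers.
            prompt = "\n".join(f"{who}: {text}" for who, text in self.history)
            reply = model_reply(prompt + "\nAssistant:")
            self.history.append(("Assistant", reply))
            return reply

    session = ChatSession()
    session.say("My name is Alex and I get migraines.")
    print(session.say("What did I tell you my name was?"))  # earlier turns are still in the prompt

In other words, much of the apparent memory lives in the bookkeeping around the model, not only in the model itself.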

ChatGPT is not perfect, by any means. The way it generates responses – in extremely oversimplified terms, by making probabilistic guesses about which bits of text belong together in a sequence, based on a statistical model trained on billions of examples of text pulled from all over the Internet – makes it prone to giving wrong answers, even on seemingly simple maths problems.
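
(To make that “probabilistic guessing” concrete, here is a deliberately tiny, hand-built illustration – not the real thing, which uses a neural network trained on billions of examples. It simply picks the next word according to a probability table, which is why such systems can produce fluent text that is statistically plausible but factually wrong.)

    import random

    # Toy illustration of next-word prediction. Real models such as GPT-3.5
    # learn probabilities over sub-word tokens; this hand-made table merely
    # stands in for those learned statistics.
    next_word_probs = {
        "the": {"cat": 0.5, "dog": 0.3, "moon": 0.2},
        "cat": {"sat": 0.6, "slept": 0.4},
        "dog": {"barked": 0.7, "slept": 0.3},
        "moon": {"rose": 1.0},
    }

    def generate(start: str, max_words: int = 5) -> str:
        words = [start]
        for _ in range(max_words):
            probs = next_word_probs.get(words[-1])
            if not probs:
                break  # no known continuation for this word
            choices, weights = zip(*probs.items())
            words.append(random.choices(choices, weights=weights)[0])
        return " ".join(words)

    print(generate("the"))  # e.g. "the cat sat" -- plausible-sounding, not checked for truth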

On Dec 5, the moderators of Stack Overflow, a website for programmers, temporarily barred users from submitting answers generated with ChatGPT, saying the site had been flooded with submissions that were incorrect or incomplete.

Unlike Google, ChatGPT does not trawl the Web for information on current events, and its knowledge is restricted to things it learnt before 2021, making some of its answers feel stale. When I asked it to write the opening monologue for a late-night show, for example, it came up with several topical jokes about former United States president Donald Trump pulling out of the Paris climate accord.

Since its training data includes billions of examples of human opinion, representing every conceivable view, it is also, in some sense, a moderate by design. Without specific prompting, for example, it is hard to coax a strong opinion out of ChatGPT about charged political debates; usually, you will get an even-handed summary of what each side believes.

There are also plenty of things ChatGPT will not do, as a matter of principle. OpenAI has programmed the bot to refuse “inappropriate requests” – a nebulous category that appears to include no-nos such as generating instructions for illegal activities. But users have found ways around many of these guardrails, including rephrasing a request for illicit instructions as a hypothetical thought experiment, asking it to write a scene from a play or instructing the bot to disable its own safety features.

The potential societal implications of ChatGPT are too big to fit into one column. Maybe this is, as some commenters have posited, the beginning of the end of all white-collar knowledge work, and a precursor to mass unemployment. Maybe it is just a nifty tool that will be mostly used by students, Twitter jokesters and customer service departments until it is usurped by something bigger and better.

Personally, I am still trying to wrap my head around the fact that ChatGPT – a chatbot that some people think could make Google obsolete, and that is already being compared to the iPhone in terms of its potential impact on society – is not even OpenAI’s best AI model. That would be GPT-4, the next incarnation of the company’s large language model, which is rumoured to be coming out some time next year.

We are not ready. NYTIMES
