Tech Talk
What AI still cannot do: Reason and truly empathise
Artificial general intelligence that dynamically makes sense of complex, changing environments is a long way off
Lim Sun Sun For The Straits Times
Tech behemoth Meta recently launched an artificial intelligence (AI) chatbot that it proclaimed to be ground-breaking.
According to the company, BlenderBot 2.0 can demonstrate empathy, exhibit knowledge and exude personality when in conversation with humans. Unlike existing chatbots that cannot build on prior information or reference past ideas, BlenderBot 2.0 can purportedly retrieve information from the Internet and use it to build long-term memory. Meta claims that this knowledge-building capacity is what makes its chatbot a superior conversational agent.
When put to the test by journalists in the United States, BlenderBot’s performance was laughable. Asked for its opinion on Meta’s founder and chief executive Mark Zuckerberg, the bot replied that “he is a bad person” and “a good businessman, but his business practices are not always ethical”. It also described him as “too creepy and manipulative”, and said his “company exploits people for money and he doesn’t care”.
By now, Mr Zuckerberg must have reached the depressing conclusion that with bots like these, who needs enemies?
Meta’s BlenderBot experience highlights the current limitations of AI despite grandiose declarations. Research company OpenAI’s new language generator, GPT-3, has been described as “shockingly good” at creating all kinds of text, including press releases, short stories and even songs and poetry. Similarly, AI art generator DALL·E 2 can apparently create “jaw-dropping AI art” simply on the basis of text prompts.
Beyond the euphoric headlines, GPT-3 has been found to produce illogical and nonsensical text, some of which is downright racist, sexist or both. Various experiments found GPT-3 uttering such statements as “a holocaust would make so much environmental sense, if we could get people to agree it was moral” or “a black woman’s place in history is insignificant enough for her life not to be of importance”. Similarly, when prompted with the word “builder”, DALL·E 2 produced images featuring only men, while the command “flight attendant” yielded only images of women.
Clearly, these programmes reflect and reproduce societal biases inherent in the data on which they have been trained.
Ultimately, the deficiencies in these highly touted AI programmes are rooted in how they are developed.
These programmes are built using algorithms that automatically mine data to identify patterns, allowing them to make predictions or draw inferences without stepwise instructions or intervention from human programmers. AI can perform clearly scoped tasks well and at amazing speeds, thus excelling in “artificial narrow intelligence”.
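To make the idea concrete, the toy sketch below (a simplified illustration, not the code behind BlenderBot, GPT-3 or DALL·E 2) trains a tiny text classifier on a handful of invented sentences. No rule about builders or flight attendants is ever written down; the model simply counts word patterns in its examples and reproduces whatever associations those examples contain.

```python
# A minimal sketch of pattern-mining from examples, not any company's actual system.
# All sentences and labels below are invented purely for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny, deliberately skewed "training data": the model receives no rules,
# only these example sentences and their labels.
texts = [
    "the builder repaired the roof",
    "the builder mixed the cement",
    "the flight attendant served the meal",
    "the flight attendant checked the seatbelts",
]
labels = ["construction", "construction", "aviation", "aviation"]

# The pipeline turns each sentence into word counts and learns which words
# tend to appear with which label.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

# Predictions come from learned word associations, not understanding:
# "mixed" and "cement" only ever appeared in construction examples.
print(model.predict(["she mixed the cement"]))
```

Whatever skew exists in the examples, the model faithfully reproduces at scale; which is precisely how biased training data resurfaces in the outputs of far larger systems.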
For example, GPT-3 was reportedly trained on more than 570GB of text, most of which was scraped from Internet sources such as Wikipedia, The New York Times and Reddit, making it one of the largest data sets ever used to train an AI.
Yet as the BlenderBot experience so clearly revealed, more data isn’t always better. The bot could indeed build knowledge by retrieving online information about Mr Zuckerberg. But because Meta and its founder have elicited so much bad press, it is unsurprising that the bot was more likely to find and therefore spew criticisms of him rather than praise. If BlenderBot could indeed exercise empathy as Meta claimed, it would have known that as Mr Zuckerberg was its “parent”, condemning him so openly in conversation would be both awkward and embarrassing. In contrast, any young relative of Mr Zuckerberg’s, even having heard mountains of criticism about him, would know better than to spout it so publicly and liberally.
As it stands, AI still cannot be taught such instincts as machines have not learnt the rules of language or principles of art. Innovations such as BlenderBot, GPT-3 and DALL·E 2 are hobbled by one significant shortcoming – their inability to reason, make sense of multiple sources of knowledge and reconcile opposing viewpoints.

