SAN FRANCISCO – In December, Mr Larry Page and Mr Sergey Brin, Google’s founders, held several meetings with company executives. The topic: a rival’s new chatbot, a clever artificial intelligence product that looked as if it could be the first notable threat in decades to Google’s US$149 billion (S$197 billion) search business.
Mr Page and Mr Brin, who have not spent much time at Google since they left their daily roles with the company in 2019, reviewed Google’s AI product strategy, according to two people with knowledge of the meetings who were not allowed to discuss them.
They approved plans and pitched ideas to put more chatbot features into Google’s search engine. And they offered advice to company leaders, who have put AI front and centre in their plans.
The re-engagement of Google’s founders, at the invitation of the company’s current CEO, Mr Sundar Pichai, emphasised the urgency felt among many Google executives about AI and that chatbot, ChatGPT.
The bot, which was released by the small San Francisco company OpenAI two months ago, amazed users by explaining complex concepts in plain terms and generating ideas from scratch. More important to Google, it looked as if it could offer a new way to search for information on the Internet.
The new AI technology has shaken Google out of its routine. Mr Pichai declared a “code red”, upending existing plans and jump-starting AI development.
Google now intends to unveil more than 20 new products and demonstrate a version of its search engine with chatbot features in 2023.
“This is a moment of significant vulnerability for Google,” said Mr D. Sivakumar, a former Google research director who helped found a startup called Tonita, which makes search technology for e-commerce companies.
“ChatGPT has put a stake in the ground, saying, ‘Here’s what a compelling new search experience could look like,’” he said. He added that Google has overcome previous challenges and could deploy its arsenal of AI to stay competitive.
In the pipeline
Since stepping back from day-to-day duties, Mr Page and Mr Brin have taken a laissez-faire approach to Google, two people familiar with the matter said. They have let Mr Pichai run the company and its parent company, Alphabet, while they have pursued other projects, such as flying-car startups and disaster-relief efforts.
Their visits to the company’s Silicon Valley offices in the past few years have mostly been to check in on the so-called moonshot projects that Alphabet calls “Other Bets”, one person said. Until recently, they have not been very involved with the search engine.
But they have long been keen on bringing AI into Google’s products.
Mr Vic Gundotra, a former senior vice-president at Google, recounted that he gave Mr Page a demonstration of a new Gmail feature around 2008. But Mr Page was unimpressed by the effort, asking, “Why can’t it automatically write that e-mail for you?”
In 2014, Google also acquired DeepMind, a leading AI research lab based in London.
Google’s Advanced Technology Review Council, a panel of executives that includes Mr Jeff Dean, the company’s senior vice-president of research and artificial intelligence, and Mr Kent Walker, Google’s president of global affairs and chief legal officer, met less than two weeks after ChatGPT debuted to discuss their company’s initiatives, according to a slide presentation.
They reviewed plans for products that were expected to debut at Google’s company conference in May, including Image Generation Studio, which creates and edits images, and a third version of AI Test Kitchen, an experimental app for testing product prototypes.
Other image and video projects in the works included a feature called Shopping Try-on; a YouTube green-screen feature to create backgrounds; a wallpaper maker for the Pixel smartphone; an application called Maya that visualises 3D shoes; and a tool that could summarise videos by generating a new one, according to the slides.
Google has a list of AI programs it plans to offer to software developers and other companies, including image-creation technology that could bolster revenue for Google’s Cloud division.
There is also a tool, called MakerSuite, to help other businesses create their own AI prototypes in Internet browsers; it will have two “Pro” versions, according to the presentation.
In May, Google also expects to announce a tool to make it easier to build apps for Android smartphones, called Colab + Android Studio, that will generate, complete and fix code, according to the presentation. Another code generation and completion tool, called PaLM-Coder 2, has also been in the works.
Rival to ChatGPT
Google executives hope to reassert their company’s status as a pioneer of AI.
The company has aggressively worked on AI over the past decade and has already offered a chatbot that could rival ChatGPT, called LaMDA, or Language Model for Dialogue Applications, to a small number of people.
“We continue to test our AI technology internally to make sure it’s helpful and safe, and we look forward to sharing more experiences externally soon,” Ms Lily Lin, a spokesperson for Google, said in a statement.
She added that AI would benefit individuals, businesses and communities, and that Google is considering the broader societal effects of the technology.
For the chatbot search demonstration that Google plans for 2023, getting facts right, ensuring safety and getting rid of misinformation are priorities.
For other upcoming services and products, the company has set a lower bar and will try to curb issues relating to hate speech, toxicity, danger and misinformation rather than prevent them, according to the presentation.
The company intends, for example, to block certain words to avoid hate speech and will try to minimise other potential issues. Google expects governments to scrutinise its AI products for signs of these issues.
The company has recently been the subject of numerous government inquiries and lawsuits accusing it of anti-competitive business practices.
It anticipates, according to the presentation, “increased pressure on AI regulatory efforts because of rising concerns about misinformation, harmful content, bias, copyright”. NYTIMES