Google ups artificial intelligence game at annual developer conference

Google chief executive Sundar Pichai at the opening keynote address of Google I/O 2017 on May 17, 2017. ST PHOTO: LESTER HIO

MOUNTAIN VIEW, California - Artificial intelligence (AI) took centre stage at tech giant Google's annual developer conference in 2017, as the company announced deeper integration of AI into both new and existing services.

Google apps such as Photos and Gmail will soon have AI-powered functions, while Android users can expect a new image recognition feature, called Google Lens.

Google's Assistant software, an AI-powered virtual assistant, has also been enhanced with new features.

In particular, machine learning - a form of programming in which computers become smarter by learning from data on their own - dominated the keynote address at the Google I/O 2017 developer conference on Wednesday (May 17), held at the Shoreline Amphitheatre next to the Google campus in Mountain View, California.

Speaking at the opening keynote to kick off the three-day event, Google chief executive Sundar Pichai said the world has shifted from being "mobile-first" to "AI-first", as computing continues to evolve.

"In an AI-first world, we are rethinking all our products and applying machine learning and AI to solve user problems," said Mr Pichai. "And we are doing this across every one of our products.

"So today, if you use Google search, we rank (results) differently using machine learning. If you use Google Maps, Street View automatically recognises restaurant signs, street signs, using machine learning."

The addition of machine learning will make existing Google apps smarter to use. For example, Gmail users will soon be able to use the Smart Reply feature, which lets them fire off a quick reply that the software writes for them based on the contents of the e-mail they receive.

AI will also be more evident in the Google Photos app. Users will soon be able to use the software to intelligently pick out photos based on events such as holidays, special occasions or celebrations, which they can then compile into a physical photo book.

Google is also expanding its Assistant software, which had been available only on selected Android phones since its launch in May 2016, to iPhones. Assistant users will now also be able to type within the service to make searches on their phones; previously, it was powered solely by voice commands.

Users will also be able to access the new Google Lens through Assistant, which lets them perform searches or ask questions about objects they can see in the world.

For example, users can point their phone's camera at a sign with movie timings. The Assistant software, using AI, will be able to register the movie title and times, and then bring up options for the user to save a calendar event, book tickets immediately, or find out more information about the movie.

The only consumer product announcement made at the two-hour keynote was an upcoming standalone headset that lets users access virtual reality (VR) and augmented reality (AR) without the need to connect a phone or separate device to it.

These headsets, which will be manufactured by HTC and Lenovo, will contain all the hardware required for VR and AR within the device itself and are due to be released later in 2017.

Lastly, Google also released the beta version of its latest Android operating system, which still goes by the codename of Android O. New features in Android O include greater security for apps on the Google Play Store, faster start-up times and better battery performance.

Android O will also support picture-in-picture mode, which will allow users to, for instance, make a video call and view it in a minimised window while switching to another app.
