Artificial intelligence (AI) took centre stage at the opening of tech giant Google's annual developer conference on Wednesday, as the company announced deeper integration of AI into both new and existing services.
Google apps such as Photos and Gmail will soon gain more AI-powered functions, while the company's virtual assistant, Google Assistant, has also been expanded with new features.
Furthermore, Android users can expect a new image recognition feature, called Google Lens, which lets them search the Internet using their phone camera.
These features were made possible by Google's advances in machine learning - a form of programming in which computers improve at tasks by learning from data rather than following explicitly written rules. This technology dominated the keynote address at the Google I/O 2017 developer conference on Wednesday morning, held at the Shoreline Amphitheatre next to the Google campus in Mountain View, California.
Speaking at the opening keynote to kick off the three-day event, Google chief executive Sundar Pichai said the world has shifted from being "mobile-first" to "AI-first", as computing continues to evolve.
"In an AI-first world, we are rethinking all our products and applying machine learning and AI to solve user problems," said Mr Pichai. "And we are doing this across every one of our products."
The addition of machine learning to existing Google apps will make them smarter and more useful.
For example, Gmail users will soon be able to use the Smart Reply feature, which lets them fire off a quick reply that the software will write for them based on the contents of the e-mail they receive.
AI will also be more evident in the Google Photos app, which Google said now has 500 million users. The AI engine will soon be able to enhance photos by intelligently removing, for example, a fence obscuring a human subject.
Photo-sharing will also come to the app: Photos will learn to recognise the friends who appear in your pictures and suggest sharing those pictures with them.
Google is also expanding its Assistant software - which had been available only on selected Android phones since its launch in May last year - to iPhones. Assistant users will also be able to type their queries within the service, whereas it was previously powered solely by voice commands.
The new Google Lens will also be accessible through Assistant, letting users perform searches or ask questions about objects they see in the world.
For example, users can point their phone's camera at a sign showing movie times. The Assistant software, using AI, will register the movie title and times, and then offer options to save the screening as a calendar event, book tickets immediately or find out more about the movie.
The only consumer product announcement made during the two-hour keynote was a standalone headset that lets users access virtual reality (VR) and augmented reality (AR) without the need to connect a phone or separate device to it.
These headsets, which will be made by HTC and Lenovo, will contain all the hardware required for VR and AR within the device itself. No release date has been given.
Google also released the beta version of its latest Android operating system, which still goes by the code name of Android O.
New features in Android O include greater security for apps on the Google Play Store, faster start-up times and better battery performance. Android O also supports picture-in-picture mode, which lets users make a video call, for instance, and view it in a minimised window while switching to another app.