MOUNTAIN VIEW (California) • Last spring, a few years after taking a research job at Google, Mr Douglas Eck pitched the idea of building machines that could create their own songs.
The result is Project Magenta, a team of Google researchers who are teaching machines to create not only their own music but also many other forms of art, including sketches, videos and jokes.
With its empire of smartphones, apps and Internet services, Google is in the business of communication and Mr Eck sees Magenta as a natural extension of this work.
"It's about creating new ways for people to communicate," he said during a recent interview inside the small two-storey building here that serves as headquarters for Google AI research.
In the mid-1990s, he worked as a database programmer in Albuquerque, New Mexico, while moonlighting as a musician.
"My only goal in life was to mix AI (artificial intelligence) and music," he said.
Enrolling as a graduate student at Indiana University, he pitched the idea to cognitive scientist Douglas Hofstadter, who wrote the Pulitzer Prize-winning book on minds and machines, Gödel, Escher, Bach: An Eternal Golden Braid.
Hofstadter turned him down, adamant that even the latest artificial intelligence techniques were much too primitive.
But during the next two decades, working on the fringe of academia, Mr Eck kept chasing the idea and, eventually, AI caught up with his ambition.
The project is part of a growing effort to generate art through a set of AI techniques that have only recently come of age.
Called deep neural networks, these complex mathematical systems allow machines to learn specific behaviour by analysing vast amounts of data.
By looking for common patterns in millions of bicycle photos, for instance, a neural network can learn to recognise a bike.
This is how Facebook identifies faces in online photos, how Android phones recognise spoken commands and how Microsoft's Skype translates one language into another.
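The pattern-learning idea described above can be sketched in miniature. This toy example trains a single logistic unit on hand-made feature vectors (all data here is invented for illustration; real systems learn their own features from millions of photos and use far larger networks):

```python
import numpy as np

# Toy "photos" described by made-up features: [round shapes, metal frame, fur].
X = np.array([
    [1.0, 1.0, 0.0],   # bicycle
    [1.0, 0.9, 0.1],   # bicycle
    [0.1, 0.0, 1.0],   # dog
    [0.0, 0.2, 0.9],   # dog
])
y = np.array([1, 1, 0, 0])  # 1 = bike, 0 = not a bike

rng = np.random.default_rng(0)
w = rng.normal(size=3)
b = 0.0

# Repeatedly nudge the weights toward patterns the bike examples share.
for _ in range(200):
    for xi, yi in zip(X, y):
        pred = 1 / (1 + np.exp(-(xi @ w + b)))  # logistic unit
        grad = pred - yi
        w -= 0.5 * grad * xi
        b -= 0.5 * grad

def looks_like_bike(features):
    """Score a new feature vector with the learned weights."""
    return 1 / (1 + np.exp(-(features @ w + b))) > 0.5

print(looks_like_bike(np.array([0.9, 1.0, 0.0])))  # an unseen bike-like input
```

A deep neural network stacks many such units in layers, but the core loop is the same: show examples, measure the error, adjust the weights.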
But these complex systems can also create art. By analysing a set of songs, for instance, they can learn to build similar sounds.
As Mr Eck says, these systems are approaching the point - still many, many years away - when a machine could instantly build a new Beatles song, or perhaps trillions of new Beatles songs, each sounding a lot like the music the Beatles themselves recorded, but also a little different.
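The learn-then-generate idea can be illustrated with something far simpler than the deep networks Magenta uses: a Markov chain that learns which note tends to follow which in a (made-up) corpus of songs, then produces a new melody in the same style:

```python
import random
from collections import defaultdict

# A toy corpus of "songs" as note sequences; all invented for illustration.
songs = [
    ["C", "E", "G", "E", "C"],
    ["C", "E", "G", "G", "E"],
    ["E", "G", "C", "E", "G"],
]

# Learn the transitions: for each note, which notes followed it in the corpus.
transitions = defaultdict(list)
for song in songs:
    for a, b in zip(song, song[1:]):
        transitions[a].append(b)

def generate(start="C", length=8, seed=42):
    """Generate a melody that statistically resembles the corpus."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        melody.append(rng.choice(transitions[melody[-1]]))
    return melody

print(generate())
```

The output sounds "a lot like" the training songs but is not a copy of any of them - the same property, in miniature, that neural networks exhibit on real music.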
But that end game - as much a way of undermining art as creating it - is not what he is after.
There are many other paths to explore beyond mere mimicry. The ultimate idea is not to replace artists, but to give them tools that allow them to create in entirely new ways.
For centuries, orchestral conductors have layered sounds from various instruments atop one another.
But this is different. Rather than layering sounds, Mr Eck and his team are combining them to form something that did not exist before, creating new ways for artists to work.
"We're making the next film camera," he said. "We're making the next electric guitar."
Called NSynth, this particular project is only just getting off the ground.
But across the worlds of both art and technology, many are already developing an appetite for building new art through neural networks and other AI techniques.
"This work has exploded over the last few years," said Adam Ferris, a photographer and artist in Los Angeles. "This is a totally new aesthetic."
In 2015, a separate team of researchers inside Google created DeepDream, a tool that uses neural networks to generate haunting imagescapes from existing photography, and this has spawned new art inside Google and out.
If the tool analyses a photo of a dog and finds a bit of fur that looks vaguely like an eyeball, it will enhance that bit of fur and then repeat the process.
The result is a dog covered in swirling eyeballs.
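The enhance-and-repeat loop described above can be sketched in miniature. In this toy version, the "detector" is simply the brightest pixel in a number grid, whereas the real DeepDream amplifies whatever a neural-network layer responds to; the grid and values are invented:

```python
import numpy as np

# A toy "photo": a grid of brightness values with one faint, eye-like blob.
rng = np.random.default_rng(1)
image = rng.uniform(0.0, 0.2, size=(8, 8))
image[3, 4] = 0.4   # the bit of "fur" that vaguely resembles an eyeball

# DeepDream-style loop: find what the detector responds to most,
# exaggerate it, and feed the result back in.
for _ in range(10):
    i, j = np.unravel_index(np.argmax(image), image.shape)
    image[i, j] = min(1.0, image[i, j] * 1.3)       # enhance the feature
    # let the enhancement bleed into the neighbouring pixels
    image[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2] += 0.02

print(image[3, 4])  # the faint blob, amplified far beyond the rest
```

Because each pass feeds on the last, a barely-there resemblance is exaggerated into a dominant feature - which is why a few stray tufts of fur become a dog covered in eyeballs.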