The images are jerky, as dashcam footage often is: A car travels down an American highway, with green grass at the side of the road and leaves on the branches of passing trees. At first glance, the video seems utterly mundane - but its very ordinariness is extraordinary, because this landscape never existed.
Over the past 18 months, researchers from around the world have made huge advances in manipulating images, video and sound using "machine learning" - artificial intelligence (AI) programs that continually refine their output. The success suggests that within the next decade we might live in a world full of pixel-perfect fake news. How will we ever trust our eyes again?
The summer road video was created by taking footage from a winter day, with bare branches and snow at the side of the road, and asking a computer to "imagine" how it would look in summer.
The result is virtually indistinguishable from the real thing. That in itself is remarkable enough, but even more astounding is how the team from Nvidia trained its AI.
Most machine learning involves feeding in a data set - say, 2,000 pictures of dogs - in the hope that the computer will be able to discern some intrinsic "dogginess" and then apply that knowledge to correctly label a new picture of a schnauzer or a spaniel. Researchers themselves don't always know how their programs are making decisions.
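The labelling idea can be sketched in a few lines. This is a hypothetical toy, not a real image model: each "picture" is reduced to two made-up numerical features, and a new example is labelled by its nearest labelled neighbour in the training set.

```python
# Toy sketch of supervised labelling (hypothetical features, not real
# image data): label a new example by its nearest labelled neighbour.

training_data = [
    ((0.9, 0.8), "dog"),
    ((0.8, 0.9), "dog"),
    ((0.1, 0.2), "cat"),
    ((0.2, 0.1), "cat"),
]

def classify(features):
    """Return the label of the closest training example."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training_data, key=lambda item: dist(item[0], features))[1]

print(classify((0.85, 0.75)))  # a new "schnauzer"-like example is labelled "dog"
```

Real systems learn far richer features from raw pixels, which is exactly why their internal decisions can be opaque even to their creators.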
The Nvidia team has taken this concept a step further, by introducing an element of competition. Two AI programs work together - one creates a fake image, and the other judges it. The first AI then tries again, and again, until the other one is satisfied with the result. (The technical term is a "generative adversarial network".)
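The adversarial loop can be illustrated with a deliberately simple sketch. This is a hypothetical toy, not Nvidia's system: the "real" data are numbers near 10.0, the discriminator scores how plausible a sample looks, and the generator keeps adjusting its output until the discriminator can no longer fault it. The names `REAL_MEAN`, `discriminator` and `train_generator` are invented for illustration.

```python
import random

# Toy adversarial loop (hypothetical): one side generates, the other judges,
# and the generator improves by chasing a higher plausibility score.

REAL_MEAN = 10.0  # the "real" data cluster around this value

def discriminator(sample, real_mean=REAL_MEAN):
    """Return a plausibility score in (0, 1]: 1.0 means 'looks real'."""
    return 1.0 / (1.0 + abs(sample - real_mean))

def train_generator(steps=1000, lr=0.1, seed=0):
    rng = random.Random(seed)
    guess = 0.0  # the generator's single parameter: the value it emits
    for _ in range(steps):
        sample = guess + rng.gauss(0, 0.1)  # generator output with noise
        score = discriminator(sample)
        # Crude search: move in whichever direction the judge scores higher
        if discriminator(sample + lr) > score:
            guess += lr
        elif discriminator(sample - lr) > score:
            guess -= lr
    return guess

fake = train_generator()
print(round(fake, 1))  # ends up near the real data's mean
```

A real generative adversarial network replaces both sides with neural networks trained by gradient descent, but the dynamic is the same: the forger and the judge improve together.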
A similar duelling technique was used by Google's DeepMind to learn the Chinese game Go: Two copies of its AlphaGo program squared off, but only one could win the match. This training allowed AlphaGo to beat the best human players at a game where there are more possible board positions than atoms in the universe.
A new version of the AI - nicknamed AlphaGo Zero - doesn't need any human interaction, only the rules of the game. It also has another important feature: The same algorithm can learn to play chess, Go or the Japanese game Shogi. If AI becomes more adaptable, then it's creeping ever closer to true intelligence.
At the moment, most of these AI-generated images and videos are no more than curiosities. Nvidia's program, for example, can also transform pictures of house cats into lions, and German shepherds into corgis. The company hopes that its driving simulator could one day help to train driverless cars, whose sensors struggle in poor weather.
However, the sheer array of research projects in this field suggests that it is a potentially transformative technology. AIs can already create photorealistic pictures of imaginary celebrities out of composites of thousands of existing images.
A Twitter bot, Smile Vector, can make the faces of well-known people smile. A free program called FaceApp can give you a good indication of what you would look like older or younger, or if you were the opposite sex.

The implications of all this power are both exciting and concerning. In the summer, the University of Washington published a video of Mr Barack Obama talking about a mass shooting. Sure, it was a little bit jerky, but only to the extent that a viewer might assume there was a problem with their internet connection.
Yet the video was entirely fabricated: It used existing footage of the former US president and an audio track made from previous speeches and statements. The two were meshed together with AI wizardry to manipulate his facial features around the new words.
And this is where I begin to get a bit worried. Sure, there might still be ways to debunk videos like these - looking for small, unsmoothed glitches or artefacts in the raw data. But if the debate over fake news on Facebook taught us anything, it's that a forgery only has to be superficially appealing to our existing beliefs to spread across the internet. The eventual correction, meanwhile, is unlikely to go viral.
Witness the rash of articles claiming that the Pope had endorsed President Donald Trump - a wholly fabricated story - or the fact that polls suggest a majority of registered Republican voters still doubt that Mr Obama is a US citizen. How much more easily will a lie be able to spread when there's video "evidence" to back it up?
Fake news? We've barely got started.