It was only a matter of time: Here comes an app for fake videos

FakeApp makes it free and relatively easy to create realistic face swops and leave few traces of manipulation.

(NYTIMES) - The scene opened on a room with a red sofa, a potted plant and the kind of bland modern art you'd see on a therapist's wall.

In the room was Michelle Obama, or someone who looked exactly like her. Wearing a low-cut top with a black bra visible underneath, she writhed lustily for the camera and flashed her unmistakable smile.

Then, the former first lady's doppelganger began to strip.

The video, which appeared on the online forum Reddit, was what's known as a "deepfake" - an ultrarealistic fake video made with artificial intelligence software.

It was created using a program called FakeApp, which superimposed Obama's face onto the body of a pornographic film actress. The hybrid was uncanny - if you didn't know better, you might have thought it was really her.

Until recently, realistic computer-generated video was a laborious pursuit available only to big-budget Hollywood productions or cutting-edge researchers. Social media apps like Snapchat include some rudimentary face-morphing technology.

But in recent months, a community of hobbyists has begun experimenting with more powerful tools, including FakeApp - a program that was built by an anonymous developer using open-source software written by Google.

FakeApp makes it free and relatively easy to create realistic face swops and leave few traces of manipulation. Since a version of the app appeared on Reddit in January, it has been downloaded more than 120,000 times, according to its creator.

Deepfakes are one of the newest forms of digital media manipulation, and one of the most obviously mischief-prone. It's not hard to imagine this technology's being used to smear politicians, create counterfeit revenge porn or frame people for crimes. Lawmakers have already begun to worry about how deepfakes could be used for political sabotage and propaganda.

Even on morally lax sites like Reddit, deepfakes have raised eyebrows. Recently, FakeApp set off a panic after Motherboard, the technology site, reported that people were using it to create pornographic deepfakes of celebrities.

Pornhub, Twitter and other sites quickly banned the videos, and Reddit closed a handful of deepfake groups, including one with nearly 100,000 members.

Before the Reddit deepfake groups were closed, they hosted a mixture of users trading video-editing tips and showing off their latest forgeries.

A post titled "3D face reconstruction for additional angles" sat next to videos with titles like "(Not) Olivia Wilde playing with herself".

Some users on Reddit defended deepfakes and blamed the media for overhyping their potential for harm. Others moved their videos to alternative platforms, rightly anticipating that Reddit would crack down under its rules against nonconsensual pornography. And a few expressed moral qualms about putting the technology into the world.

Then, they kept making more.

The deepfake creator community is now in the internet's shadows. But while out in the open, it gave an unsettling peek into the future.

"This is turning into an episode of Black Mirror," wrote one Reddit user. The post raised the ontological questions at the heart of the deepfake debate: Does a naked image of Person A become a naked image of Person B if Person B's face is superimposed in a seamless and untraceable way? In a broader sense, on the internet, what is the difference between representation and reality?

The user then signed off with a shrug: "Godspeed rebels."

MAKING DEEPFAKES

After lurking for several weeks in Reddit's deepfake community, I decided to see how easy it was to create a (safe for work, nonpornographic) deepfake using my own face.

I started by downloading FakeApp and enlisting two technical experts to help me. The first was Mark McKeague, a colleague in The New York Times' research and development department. The second was a deepfake creator I found through Reddit, who goes by the nickname Derpfakes.

Because of the controversial nature of deepfakes, Derpfakes would not give his or her real name. Derpfakes started posting deepfake videos on YouTube a few weeks ago, specialising in humorous offerings like Nicolas Cage playing Superman. The account has also posted some how-to videos on deepfake creation.

What I learnt is that making a deepfake isn't simple. But it's not rocket science, either.

Picking the right source data is crucial. Short video clips are easier to manipulate than long clips, and scenes shot at a single angle produce better results than scenes with multiple angles. Genetics also help. The more the faces resemble each other, the better.

I'm a brown-haired white man with a short beard, so Mark and I decided to try several other brown-haired, stubbled white guys. We started with Ryan Gosling. (Aim high, right?) I also sent Derpfakes, my outsourced Reddit expert, several video options to choose from.

Next, we took several hundred photos of my face, and gathered images of Gosling's face using a clip from a recent TV appearance. FakeApp uses these images to train the deep learning model and teach it to emulate our facial expressions.

To get the broadest photo set possible, I twisted my head at different angles, making as many different faces as I could.

Mark then used a program to crop those images down, isolating just our faces, and manually deleted any blurred or badly cropped photos. He then fed the frames into FakeApp. In all, we used 417 photos of me, and 1,113 of Gosling.
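
FakeApp doesn't ship the source code for this step, but the cropping we did looks roughly like the sketch below, which assumes Python with OpenCV; the folder paths, the 256-pixel crop size and the Haar-cascade face detector are illustrative choices of mine, not what Mark actually ran, and the blurry-photo culling is still left to a human.

import os
import cv2

# A stock frontal-face detector that ships with OpenCV (an assumption, not FakeApp's detector).
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def crop_faces(src_dir: str, dst_dir: str, size: int = 256) -> int:
    """Detect the largest face in each frame, crop it, and save it resized."""
    os.makedirs(dst_dir, exist_ok=True)
    kept = 0
    for name in sorted(os.listdir(src_dir)):
        frame = cv2.imread(os.path.join(src_dir, name))
        if frame is None:
            continue
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            continue  # frames with no detectable face are discarded
        x, y, w, h = max(faces, key=lambda box: box[2] * box[3])  # keep the largest face
        crop = cv2.resize(frame[y:y + h, x:x + w], (size, size))
        cv2.imwrite(os.path.join(dst_dir, name), crop)
        kept += 1
    return kept

# Example (hypothetical folder names): crop_faces("frames/me", "faces/me"); crop_faces("frames/gosling", "faces/gosling")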

When the images were ready, Mark pressed "start" on FakeApp, and the training began. His computer screen filled with images of my face and Gosling's face, as the program tried to identify patterns and similarities.
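
FakeApp's exact internals aren't published with the app, but face-swap tools of this generation are generally understood to train an autoencoder with one shared encoder and a separate decoder per person: the encoder learns pose and expression, each decoder learns an identity. The PyTorch sketch below shows that idea only; the layer sizes, learning rate and loss are illustrative, not FakeApp's.

import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.1),   # 256 -> 128
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.1),  # 128 -> 64
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.1), # 64 -> 32
            nn.Flatten(),
            nn.Linear(128 * 32 * 32, 512),  # shared latent "face code"
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(512, 128 * 32 * 32)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 32, 32))

encoder = Encoder()
decoder_a = Decoder()  # learns to reconstruct person A (me)
decoder_b = Decoder()  # learns to reconstruct person B (Gosling)
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters()),
    lr=5e-5,
)
loss_fn = nn.L1Loss()

def training_step(batch_a, batch_b):
    # Both faces pass through the same encoder, but each has its own decoder,
    # so the shared code captures pose and expression while the decoders capture identity.
    loss = loss_fn(decoder_a(encoder(batch_a)), batch_a) + \
           loss_fn(decoder_b(encoder(batch_b)), batch_b)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()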

About eight hours later, after our model had been sufficiently trained, Mark used FakeApp to finish putting my face on Gosling's body. The video was blurry and bizarre, and Gosling's face occasionally flickered into view. Only the legally blind would mistake the person in the video for me.
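
Conceptually, that final "conversion" step runs faces cropped from the Gosling clip through the same shared encoder but decodes them with my decoder instead of his; continuing the illustrative sketch above:

import torch

@torch.no_grad()
def swap_face(face_b: torch.Tensor) -> torch.Tensor:
    """Take a cropped face of person B and re-render it as person A."""
    z = encoder(face_b.unsqueeze(0))   # pose and expression of B, in the shared latent space
    return decoder_a(z).squeeze(0)     # rendered with A's identity

# In a full pipeline each swapped face is then warped, colour-matched and blended
# back into its original frame before the frames are reassembled into video.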

We did better with a clip of Chris Pratt, the scruffy star of "Jurassic World," whose face shape is a little more similar to mine. For this test, Mark used a bigger data set - 1,861 photos of me, 1,023 of him - and let the model run overnight.

A few days later, Derpfakes, who had been training a model of his own, sent me a finished deepfake he had made using the footage I had sent him and a video of actor Jake Gyllenhaal. This one was much more lifelike, a true hybrid that mixed my facial features with his hair, beard and body.

WHAT THE APP'S CREATOR SAYS

After the experiment, I reached out to the anonymous creator of FakeApp through an e-mail address on its website. I wanted to know how it felt to create a cutting-edge AI tool, only to have it gleefully co-opted by ethically challenged pornographers.

A man wrote back, identifying himself as a software developer in Maryland. Like Derpfakes, the man would not give me his full name, and instead went by his first initial, N. He said he had created FakeApp as a creative experiment and was chagrined to see Reddit's deepfake community use it for ill.

"I joined the community based around these algorithms when it was very small (less than 500 people)," he wrote, "and as soon as I saw the results I knew this was brilliant tech that should be accessible to anyone who wants to play around with it. I figured I'd take a shot at putting together an easy-to-use package to accomplish that."

N. said he didn't support the use of FakeApp to create nonconsensual pornography or other abusive content. And he said he agreed with Reddit's decision to ban explicit deepfakes. But he defended the product.

"I've given it a lot of thought," he said, "and ultimately I've decided I don't think it's right to condemn the technology itself - which can of course be used for many purposes, good and bad."

'NEXT FORM OF COMMUNICATION'

On the day of the school shooting last month in Parkland, Florida, a screenshot of a BuzzFeed News article, "Why We Need to Take Away White People's Guns Now More Than Ever," written by a reporter named Richie Horowitz, began making the rounds on social media.

The whole thing was fake. No BuzzFeed employee named Richie Horowitz exists, and no article with that title was ever published on the site. But the doctored image pulsed through right-wing outrage channels and was boosted by activists on Twitter. It wasn't an AI-generated deepfake, or even a particularly sophisticated Photoshop job, but it did the trick.

Online misinformation, no matter how sleekly produced, spreads through a familiar process once it enters our social distribution channels. The hoax gets 50,000 shares, and the debunking an hour later gets 200. The carnival barker gets an algorithmic boost on services like Facebook and YouTube, while the expert screams into the void.

There's no reason to believe that deepfake videos will operate any differently. People will share them when they're ideologically convenient and dismiss them when they're not. The dupes who fall for satirical stories from The Onion will be fooled by deepfakes, and the scrupulous people who care about the truth will find ways to detect and debunk them.

So, OK. Here I am, telling you this: An AI program powerful enough to turn Michelle Obama into a pornography star, or transform a schlubby newspaper columnist into Jake Gyllenhaal, is in our midst. Manipulated video will soon become far more commonplace.

And there's probably nothing we can do except try to bat the fakes down as they happen, pressure social media companies to fight misinformation aggressively, and trust our eyes a little less every day.

Godspeed, rebels.