AI Could Usher in a New Generation of Catfishing

Credit to Author: Madeleine Gregory | Date: Mon, 17 Jun 2019 14:57:36 +0000

Online dating has never been the safest pursuit. If you need proof: MTV’s reality show Catfish has spent the better part of a decade uncovering people posing as someone they’re not.

Now, technological advances are bringing about a new wave of problems for people looking for love online—the possibility that the person you’re talking to isn’t real, even if you think you’re looking at a video or image of them.

The panic surrounding technology’s power to spread false information has only grown since the 2016 election, when “fake news” became a household term. Computers can now generate fake video of people saying and doing things they would never do, for example. These videos are called “deepfakes” due to their use of a type of machine learning, or artificial intelligence, called deep learning. Recently, a deepfake of Mark Zuckerberg spread throughout the internet, making headlines and deepening public fear of convincing fabrications.

Deepfakes are just one way AI can mess with your perception. Now, computers using deep learning can create AI-generated faces that do not exist in real life. Such photos have already been used in some pretty high-stakes catfishing. Last week, the Associated Press reported that a LinkedIn profile for a seemingly well-connected woman named Katie Jones was fake, and likely used an AI-generated face image to abet the con. The fake profile successfully connected with dozens of users in political circles.

Read More: This Deepfake of Mark Zuckerberg Tests Facebook’s Fake Video Policies

“You can generate a lot of fake personae, aimed at appealing to various kinds of people, and see who ‘bites,’” Lawrence Birnbaum, a computer science professor at Northwestern University, told Motherboard in an email. “This is where computer technology really matters the most—doing things at large scale.”

AI-generated faces add a layer of uncertainty for people on dating apps or online forums. Previously, it was possible that whoever you were talking to was using a stolen picture of someone else, but that picture had to belong to someone. With that knowledge, you could use a reverse image search to find where those photos came from, helping to track down whoever was catfishing you. With an AI-generated face, there is no original photo to trace.

These faces are becoming more available, too. All you have to do is go to a website that creates a new face every time you refresh. The website, called thispersondoesnotexist.com, relies on a type of machine learning called Generative Adversarial Networks, or GANs. A GAN pits two neural networks against each other: one learns from a huge dataset of examples—in this case, human faces—to generate new images, while the other tries to tell those fakes from the real thing; the two train against each other until the fakes are convincing. The website uses GAN code released by chipmaker Nvidia last year.
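The adversarial idea can be shown at toy scale. The sketch below is a minimal, hypothetical illustration, not Nvidia's actual face-generation code: the "dataset" is just numbers drawn from a bell curve, the generator is a two-parameter formula, and the discriminator is a simple linear classifier (which can only learn to match the average of the real data). The shape of the training loop, though, is the same tug-of-war the article describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real data": samples from a bell curve centred at 4.0.
# The generator starts out producing samples centred at 0 and
# must learn to mimic the real distribution.
def real_batch(n):
    return rng.normal(4.0, 1.25, size=n)

# Generator: G(z) = g_w * z + g_b, two learnable numbers.
g_w, g_b = 1.0, 0.0
# Discriminator: D(x) = sigmoid(d_a * x + d_c), a linear classifier.
d_a, d_c = 0.1, 0.0

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

lr, batch = 0.01, 64
for step in range(3000):
    # --- Discriminator update: push D(real) toward 1, D(fake) toward 0 ---
    x_real = real_batch(batch)
    x_fake = g_w * rng.normal(size=batch) + g_b
    p_real = sigmoid(d_a * x_real + d_c)
    p_fake = sigmoid(d_a * x_fake + d_c)
    # Hand-derived gradients of the log-loss for the two parameters.
    d_a -= lr * (np.mean((p_real - 1) * x_real) + np.mean(p_fake * x_fake))
    d_c -= lr * (np.mean(p_real - 1) + np.mean(p_fake))
    # --- Generator update: change G so the discriminator is fooled ---
    z = rng.normal(size=batch)
    p_fake = sigmoid(d_a * (g_w * z + g_b) + d_c)
    g_grad = (p_fake - 1) * d_a        # gradient of -log D(G(z))
    g_w -= lr * np.mean(g_grad * z)
    g_b -= lr * np.mean(g_grad)

# After training, generated samples should be centred near the
# real data's mean of 4.0 (it started at 0).
samples = g_w * rng.normal(size=10_000) + g_b
```

A real image GAN replaces both formulas with deep neural networks and both gradients with backpropagation, but the alternating "forger vs. detective" loop is unchanged.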

It doesn’t end with human faces, either. More websites using open source GAN code are cropping up all the time, including ones that create fake rental listings and cats. Someone with time on their hands could feasibly fill a dating profile with AI-generated photos of a person, an apartment, and a pet, none of which exist.

“It seems to be following the usual trajectory of new technology,” Maurice Turner, a senior technologist at the Center for Democracy and Technology, told Motherboard on a phone call. “It goes from research to paid online services, and soon we’ll see it as an app on people’s phones.”

In some ways, it’s already gotten there. Just last week, a male college student in California used Snapchat’s gender-swap filter to make a Tinder profile posing as an underage girl. A police officer solicited the fake profile, and the student sent screenshots of their conversations to the police department. That cop was arrested on charges of communicating with a minor for the purposes of committing a felony.

Without careful examination and specialized knowledge—say, if you were absent-mindedly scrolling through your social media feed—it can be easy to mistake these faked faces for the real deal.

“So many images that we see online already go through a level of enhancement or manipulation through filters or smartphones,” Turner said.

While better technology is making it harder to spot fakes, there are still a few tells to look out for. Faces created with AI can have smudged or out-of-place hair, facial asymmetry, misaligned or strangely sized teeth, differences of colour around the edge of the face, and generally painterly or surreal details. With videos, it can be helpful to watch for blinking: subjects in deepfakes typically blink far less often than people in real videos. None of these checks is foolproof, but they’re a good place to start if you’re skeptical.
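The blink tell lends itself to a simple heuristic. The sketch below is a hypothetical illustration, not a production detector: the eye-aspect-ratio (EAR) numbers, the 0.2 threshold, and the 8-blinks-per-minute baseline are all assumptions made up for this example. Real pipelines would extract EAR per frame from facial landmarks; here the series is just a list of numbers.

```python
def count_blinks(ear_series, threshold=0.2):
    """Count blinks, where a blink is one contiguous run of
    frames with eye-aspect ratio below `threshold`."""
    blinks, in_blink = 0, False
    for ear in ear_series:
        if ear < threshold and not in_blink:
            blinks += 1
            in_blink = True
        elif ear >= threshold:
            in_blink = False
    return blinks

def flag_suspicious(ear_series, fps=30, min_blinks_per_min=8):
    """Flag a clip whose blink rate falls below a (hypothetical)
    human baseline of `min_blinks_per_min`."""
    minutes = len(ear_series) / fps / 60
    rate = count_blinks(ear_series) / minutes if minutes else 0.0
    return rate < min_blinks_per_min

# Example: 60 seconds of video (1800 frames at 30 fps) containing
# only two brief blinks, far fewer than a typical person's.
ear = [0.3] * 1800
for start in (300, 1200):
    for i in range(start, start + 5):
        ear[i] = 0.1
print(flag_suspicious(ear))  # → True (2 blinks/min is below the baseline)
```

As the article notes, no single check like this is foolproof; it is one weak signal among several.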

It’s been a long time since the phrase “seeing is believing” held any water on the internet. Now, with easy-to-access tools for creating fake or manipulated faces, people can lie better and more convincingly.

“We don’t want people to be that skeptical of every single interaction,” Turner said. “We’ll lose faith in having basic communication, online or in person.”

Listen to CYBER, Motherboard’s new weekly podcast about hacking and cybersecurity.

This article originally appeared on VICE US.
