Deepfakes: How USC is Fighting to Stay Ahead of Misinformation

Scott Morgan / South Carolina Public Radio

Deepfake (noun): Synthetic media in which a person in an existing image or video is replaced with someone else's likeness.

You stumble across a video of Nicolas Cage as Superman. You think, “Wait a minute – Nic never played Superman!”

And you’d be right.

But what if you didn’t already know that? What if instead you saw some obscure movie from the 90s and didn’t recognize anyone in it? The face on one of the actors might look a little weird, but … is that just how the person talks? Would the alarm bells still go off that you’re looking at a faked video?

Put this same kind of situation onto government officials or public health officials or major corporate CEOs, and it gets a lot less entertaining really fast, doesn’t it? Imagine watching a major health official extol the miracle vaccine that will save us all from COVID-19. Imagine a video so convincing that no one questions its authenticity, even though it’s not real.

Or, let’s go one better – imagine that it really is a major health official talking about a real vaccine, but because there are so many convincing fakes out there, and because someone with a large enough and loud enough perch could easily call anything he doesn’t like fake, people don’t trust what they see to be the truth.

Such is the weight of public trust on news agencies in an age when anyone with a phone can say absolutely anything to absolutely everyone; can collect videos and photos and sound clips and share them with anyone who will listen – and who in turn can do the same.

For journalists, this is a chronic and ceaseless pressure. For Andrea Hickerson, Matt Wright, and John Sohrawardi, it’s an opportunity to stay ahead of a dangerously disruptive curve.

Hickerson is the director of the University of South Carolina’s School of Journalism and Mass Communications; Wright and Sohrawardi are researchers at the Rochester Institute of Technology in New York. Together, they are building software designed specifically to help journalists ferret out deepfakes – videos so believable that even those charged with vetting reality could inadvertently share them.

Fundamental to this is the dynamic played out in the 90s movie example above. See, it would be relatively easy to verify whether a presidential candidate or a congressional majority leader said something in front of a room filled with press.

But what, Hickerson asks, about video of a local mayor saying something kind of racist?

“[Most people] don’t know who the mayor is,” she says. And that lack of familiarity with a public figure’s nuances could easily lead people to not question a video’s authenticity.

And this is the level where most journalism operates. The overwhelming majority of reporters cover their towns and their counties – small regions with low voter turnout, where few people outside of the local paper could name a single person on the town council. These are conditions, Hickerson says, that are ripe for exploitation.

So in comes DeFake, the software the USC/RIT team is building. Hickerson says it is being designed specifically for news agencies to use as a tool to help identify suspicious videos. It looks for those things that don’t quite seem right – things like misaligned lip sync or fuzzy edges around the face or hair; things we as humans might still be able to intuit as off.
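
For the curious, here is roughly what one of those “artifact” cues could look like in code. To be clear, this is a hypothetical sketch using the open-source OpenCV library and a made-up file name – it is not anything from DeFake, whose specifics are deliberately left out of this story. It finds a face in a single video frame and asks whether that face looks unusually soft compared with the rest of the image, since sloppy face-swap blending can leave a blurred patch.

```python
# A toy illustration only -- this is NOT how DeFake works (the article deliberately
# withholds those specifics). It sketches one kind of "artifact" cue: whether the
# face region of a frame looks unusually soft compared with the rest of the frame,
# since face-swap blending can leave a blurred patch or border.
import cv2

def face_sharpness_ratio(frame):
    """Return sharpness of the first detected face divided by sharpness of the
    whole frame, or None if no face is found. A crude heuristic, not a detector."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    face = gray[y:y + h, x:x + w]
    # Variance of the Laplacian is a common rough measure of image sharpness.
    face_sharpness = cv2.Laplacian(face, cv2.CV_64F).var()
    frame_sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    return face_sharpness / frame_sharpness if frame_sharpness > 0 else None

if __name__ == "__main__":
    cap = cv2.VideoCapture("suspect_clip.mp4")  # hypothetical file name
    ok, frame = cap.read()
    if ok:
        print("face/frame sharpness ratio:", face_sharpness_ratio(frame))
    cap.release()
```

A real system would look at far more than one crude ratio on one frame, but the idea is the same: quantify the little things that don’t quite seem right.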

But before we get to the point where AI-driven fakes start outsmarting us, Hickerson, Wright, and Sohrawardi want to build an effective AI-driven fake detector. The team has spoken to journalists at numerous agencies to get input on how they think through stories and how they would use a tool that can deeply analyze videos. (Click here to read about my own part in the team’s research.)

What they’re finding is that journalists have no intention of relying fully on a software program to make their final decisions, but that they are indeed hungry for a tool that gives measured, identifiable results showing the likelihood that a video is real or doctored.

“At least if we know what is being developed,” says Wright, director of research at RIT’s Global Security Institute, about the short-to-midrange aim of the DeFake project, “then we can come up with AI that will detect it. The long-term is that eventually we won’t be able to build a detector.”

That sounds defeatist, but this dilemma is what keeps this team up at night. In essence, what Wright is worried about is that any system built for one purpose can be turned around to defeat that purpose. Wright gives the example of Joe Biden fakes and a program being developed at the University of California – that program can drink in all the real images of Biden to learn his specific quirks, down to the ones he might not even know he has. But if UC can identify those quirks well enough to let journalists confirm that it’s really him, what’s stopping someone else from building the same program and creating a bogus Biden that exhibits all those quirks?

If your head is hurting, imagine being on the team trying to figure out how to stay ahead in an arms race that will eventually be too sophisticated for us mere mortals to keep up with.

So the DeFake team is not under any illusions that their program will be a forever guardian against misinformation. But they do believe it can be the sentry we need right now and for the foreseeable future – and that if we can get ahead of deepfakes before they stop being obvious, journalists charged with delivering reality can learn new ways of looking for the problem.

The catch? Most reporters Sohrawardi has interviewed about deepfakes said they did not realize how sophisticated the tech had already gotten.

For more on how journalists see the future, you can click here to check out this conversation with two senior journalism students at USC.

The team is concentrating its efforts on journalists because they want the technology to stay under the control of people bound by a devotion to telling the truth as they find it and to weeding out mis- and disinformation. That’s why there are no specifics about DeFake in this story. The team doesn’t want to risk telling the world how the software does its thing only to see someone wield it against them.

But enough of a working version could be in play for this November’s election.

“I’d say we are working on having a version that we feel comfortable with working together with journalists on for any October surprises,” Wright wrote in an email.

“We hope to be able to provide some helpful analysis on such videos, but would not just let a journalist take the results completely ‘as is’ right now.”

A more complete version of DeFake is projected to be ready by as early as the middle of next year.

This story is part of a series on a USC/Rochester Institute of Technology project to develop AI-driven deepfake detection software for journalists.

Click here to learn how journalists of tomorrow think about deepfakes and public trust.

Click here to read about my own experience playing a game full of consequences, one that attempts to show how a reporter torments himself to get the information right.

Scott Morgan is the Upstate Multimedia Reporter for South Carolina Public Radio. Follow Scott on Twitter, @ByScottMorgan.

Follow South Carolina Public Radio on Twitter @SCPublicRadio, and on Facebook.

Scott Morgan is the Upstate multimedia reporter for South Carolina Public Radio, based in Rock Hill. He cut his teeth as a newspaper reporter and editor in New Jersey before finding a home in public radio in Texas. Scott joined South Carolina Public Radio in March of 2019. His work has appeared in numerous national and regional publications as well as on NPR and MSNBC. He's won numerous state, regional, and national awards for his work, including a national Edward R. Murrow Award.