Why Fake Video, Audio May Not Be As Powerful In Spreading Disinformation As Feared

"Deepfakes" are digitally altered images that make incidents appear real when they are not. Such altered files could have broad implications for politics.
Marcus Marritt for NPR
"Deepfakes" are digitally altered images that make incidents appear real when they are not. Such altered files could have broad implications for politics.

Sophisticated fake media hasn't emerged as a factor in the disinformation wars in the ways once feared — and two specialists say it may have missed its moment.

Deceptive video and audio recordings, often nicknamed "deepfakes," have been the subject of sustained attention by legislators and technologists, but so far have not been employed to decisive effect, said two panelists at a video conference convened on Wednesday by NATO.

One speaker borrowed Sherlock Holmes' reasoning about the significance of something that didn't happen.

"We've already passed the stage at which they would have been most effective," said Keir Giles, a Russia specialist with the Conflict Studies Research Centre in the United Kingdom. "They're the dog that never barked."

Deepfakes have been discussed so widely as a tool of political interference that the public has grown familiar with them, Giles said during the online discussion, hosted by NATO's Strategic Communications Centre of Excellence.

Following all the reports and revelations about election interference in the West since 2016, citizens know too much to be hoodwinked in the way a fake video might once have fooled large numbers of people, he argued: "They no longer have the power to shock."

Tim Hwang, director of the Harvard-MIT Ethics and Governance of AI Initiative, agreed that deepfakes haven't proven as dangerous as once feared, although for different reasons.

Hwang argued that practitioners of "active measures" (efforts to sow misinformation and influence public opinion) can be far more effective with cheaper, simpler, and just as devious fakes, such as mis-captioning a photo or turning it into a meme.

Influence specialists working for Russia and other governments also impersonate Americans on Facebook, worming their way into real Americans' political activities to amplify disagreements or, in some cases, to discourage people from voting.

Other researchers have suggested this work continues on social networks and has become more difficult to detect.

Defense is stronger than attack

Hwang also observed that the more deepfakes are made, the better machine learning becomes at detecting them.
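This is the familiar arms-race dynamic of supervised detection: every fake that surfaces can be labeled and folded into the training data. A minimal sketch of that dynamic, with nothing assumed about any real detector: the 20-dimensional "media features" and the shift separating fakes from genuine samples are synthetic stand-ins, and a simple classifier is retrained as the pool of known fakes grows.

```python
# Toy illustration: a detector's accuracy climbs as more labeled fakes
# become available for training. Features are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_samples(n: int, fake: bool) -> np.ndarray:
    # Pretend fakes are subtly shifted from real media in feature space.
    shift = 0.5 if fake else 0.0
    return rng.normal(loc=shift, scale=1.0, size=(n, 20))

# Fixed, balanced evaluation set: 1,000 real and 1,000 fake samples.
X_test = np.vstack([make_samples(1000, False), make_samples(1000, True)])
y_test = np.concatenate([np.zeros(1000), np.ones(1000)])

X_real = make_samples(2000, False)
for n_fakes in (20, 200, 2000):  # more known fakes accumulate over time
    X_fake = make_samples(n_fakes, True)
    X = np.vstack([X_real, X_fake])
    y = np.concatenate([np.zeros(len(X_real)), np.ones(n_fakes)])
    clf = LogisticRegression(max_iter=1000, class_weight="balanced").fit(X, y)
    print(f"{n_fakes:4d} known fakes -> test accuracy {clf.score(X_test, y_test):.2f}")
```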

A very sophisticated, realistic fake video might still be effective in a political context, he acknowledged, and at a production cost of around $10,000 it would be well within the means of a government's active measures specialists.

But the risks of attempting a major disruption with such a video may outweigh the potential payoff for an adversary. People may be too media-literate, as Giles argued, and detection technology may deflate a fake too swiftly for it to have an effect, as Hwang said.

"I tend to be skeptical these will have a large-scale impact over time," he said.

One technology executive told NPR in an interview last year that years of work on corporate fraud protection systems has given defenders an edge in detecting fake media.

"This is not a static field. Obviously, on our end we've performed all sorts of great advances over this year in advancing our technology, but these synthetic voices are advancing at a rapid pace," said Brett Beranek, head of security business for the technology firm Nuance. "So we need to keep up."

Beranek described how systems developed to detect telephone fraudsters could be applied to verify the speech in a suspect video or audio clip.

Corporate clients that rely on telephone voice systems must be wary of callers posing as others with artificial or disguised voices. Beranek's company sells a product that helps detect such impostors, and the same countermeasure also works well for spotting fake audio or video.

Machines using neural networks can detect known types of synthetic voices. Nuance also says it can analyze a recording of a real, known voice — say, that of a politician — and then contrast its characteristics against a suspicious recording.
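Nuance's actual system is proprietary, but the comparison it describes resembles standard speaker verification: derive a compact "voiceprint" embedding from a recording known to be genuine, then measure how close a suspicious recording's embedding is to it. A toy sketch under those assumptions, using a crude spectral embedding where a real system would use a learned neural one:

```python
# Toy voiceprint comparison; not Nuance's method, just the general idea.
import numpy as np

def embed(signal: np.ndarray, frame: int = 512) -> np.ndarray:
    """Crude spectral embedding: mean log-magnitude spectrum over frames.
    Production systems use learned neural speaker embeddings instead."""
    n = len(signal) // frame * frame
    frames = signal[:n].reshape(-1, frame) * np.hanning(frame)
    return np.log1p(np.abs(np.fft.rfft(frames, axis=1))).mean(axis=0)

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity of two embeddings; closer to 1.0 means more alike."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical usage: `reference` would be audio known to come from the
# politician; `suspect` is the recording under scrutiny. Random noise
# stands in for real audio samples here.
rng = np.random.default_rng(1)
reference = rng.standard_normal(16000 * 3)  # ~3 seconds at 16 kHz
suspect = rng.standard_normal(16000 * 3)
score = similarity(embed(reference), embed(suspect))
print(f"voiceprint similarity: {score:.3f}")  # threshold tuned on labeled data
```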

Although the world of cybersecurity is often described as one in which attackers generally have an edge over defenders, Beranek said he thought the reverse was true for this kind of fraud detection.

"For the technology today, the defense side is significantly ahead of the attack side," he said.

Shaping the battlefield

Hwang and Giles acknowledged in the NATO video conference that deepfakes will likely proliferate and become cheaper to create, perhaps eventually simple enough to make with a smartphone app.

One prospective response is the creation of more of what Hwang called "radioactive data": material tagged in advance so that any fake derived from it is easier to detect.

If images of a political figure were so tagged beforehand, they could be spotted quickly if they were incorporated by computers into a deceptive video.
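A hedged sketch of what such tagging could look like: the research idea Hwang is echoing perturbs data in more sophisticated ways, in feature space, so this toy version simply adds a faint pseudorandom pattern to an image's pixels before publication, then tests a suspect image for that pattern later. All names and parameters here are illustrative assumptions.

```python
# Toy "radioactive data"-style tagging: embed a faint secret pattern,
# then correlate a suspect image against it to test for reuse.
import numpy as np

rng = np.random.default_rng(seed=42)   # the seed acts as the secret tag key
TAG = rng.standard_normal((64, 64))    # fixed pattern known only to the tagger

def tag_image(img: np.ndarray, strength: float = 0.05) -> np.ndarray:
    """Embed the faint tag before the image is published."""
    return img + strength * TAG

def tag_score(img: np.ndarray) -> float:
    """Correlate a suspect image against the tag; a score well above zero
    suggests tagged source material was reused."""
    centered = img - img.mean()
    return float((centered * TAG).sum()
                 / (np.linalg.norm(centered) * np.linalg.norm(TAG)))

original = rng.random((64, 64))        # stand-in for a real photo
published = tag_image(original)
print(f"tagged:   {tag_score(published):.3f}")  # noticeably above zero
print(f"untagged: {tag_score(original):.3f}")   # near zero
```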

The sheer proliferation of fakes, if that is what happens, might also make them less valuable as a disinformation weapon: more people would become familiar with them, automated systems would get better at detecting them, and they might have no popular medium on which to spread.

Big social media platforms have already declared that they will take down deceptive fakes, Hwang observed. That could make it harder for a politically charged fake video to go viral just before Election Day.

"Although it might get easier and easier to create deepfakes, a lot of the places where they might spread most effectively, your Facebooks and Twitters of the world, are getting a lot more aggressive about taking them down," Hwang said.

That won't stop them, but it might mean they'll be relegated to sites with too few users to have a major effect, he said.

"They'll percolate in these more shady areas."

Copyright 2021 NPR. To see more, visit https://www.npr.org.

Philip Ewing is an election security editor with NPR's Washington Desk. He helps oversee coverage of election security, voting, disinformation, active measures and other issues. Ewing joined the Washington Desk from his previous role as NPR's national security editor, in which he helped direct coverage of the military, intelligence community, counterterrorism, veterans and more. He came to NPR in 2015 from Politico, where he was a Pentagon correspondent and defense editor. Previously, he served as managing editor of Military.com, and before that he covered the U.S. Navy for the Military Times newspapers.