
Microsoft Combats Deepfake Media with Authentication Tools


Microsoft has developed a new tool that analyzes a still photo or video to determine whether it has been artificially manipulated or is authentic.

The vendor has also released two new technologies that detect digitally altered content. Together with the video tool, they are intended to assure viewers that the media they're seeing is genuine and in its original state.

The video tool, called Video Authenticator, analyzes each frame in real time as a video plays and provides a percentage chance, or confidence score, that the media has been artificially manipulated, or what's known as a deepfake. It works by detecting the blending boundary of the deepfake and subtle fading or greyscale elements that might not be detectable by the human eye. Video Authenticator was created using the public FaceForensics++ dataset and was tested on the DeepFake Detection Challenge Dataset, both widely used datasets for training and testing deepfake detection technologies.
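Microsoft has not published Video Authenticator's internals, but the per-frame, real-time scoring loop the company describes could be sketched roughly as follows. This is a hypothetical illustration only: detect_manipulation is a stand-in for a trained classifier, not Microsoft's model.

```python
# Hypothetical sketch of per-frame deepfake scoring, in the spirit of the
# article's description. Not Microsoft's implementation.
import cv2  # pip install opencv-python


def detect_manipulation(frame) -> float:
    """Placeholder for a trained detector that would inspect blending
    boundaries and subtle greyscale/fading artifacts, returning the
    estimated probability that the frame was manipulated."""
    # A real implementation would run a trained model here.
    return 0.0


def score_video(path: str):
    """Yield a (frame_index, confidence) pair for each frame, mirroring
    the real-time, per-frame confidence score described above."""
    capture = cv2.VideoCapture(path)
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        yield index, detect_manipulation(frame)
        index += 1
    capture.release()


for i, confidence in score_video("clip.mp4"):
    print(f"frame {i}: {confidence:.0%} chance of manipulation")
```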

Deepfakes, or synthetic media, can be photos, videos, or audio files manipulated by artificial intelligence (AI) in ways that are difficult to detect. “They could appear to make people say things they didn’t or to be places they weren’t, and the fact that they’re generated by AI that can continue to learn makes it inevitable that they will beat conventional detection technology,” wrote Tom Burt, Microsoft corporate vice president of customer security & trust, and Eric Horvitz, Microsoft chief scientific officer, in a blog post.

Deepfakes: U.S. Election Concerns

Detection of deepfakes is crucial in the lead-up to the U.S. election in November. The timing of Microsoft’s release carries particular weight given the impending 2020 election, a history of video and content manipulation by foreign adversaries looking to influence voters in the 2016 national election and, most recently, manipulation by a member of Congress, ostensibly for political gain. Microsoft has readily acknowledged that these factors influenced its detection technologies.

“In the short run, such as the upcoming U.S. election, advanced detection technologies can be a useful tool to help discerning users identify deepfakes,” wrote Burt and Horvitz. Microsoft pointed to research conducted at Princeton University showing that of 96 separate foreign influence campaigns targeting 30 countries between 2013 and 2019, 86 percent “amplified pre-existing content” and 74 percent “distorted objectively verifiable facts.” Most recently, a slew of disinformation campaigns about the COVID-19 pandemic has promoted bogus cures.

As for the supplementary tools, the first is built into Microsoft Azure and enables a content producer to add digital hashes and certificates that reside with the content as metadata wherever it travels online. The second is a reader that checks the certificates and matches the hashes, confirming with a “high degree of accuracy” that the content hasn’t been changed. It also identifies who produced it.
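Microsoft hasn’t detailed the Azure tooling’s format, but the hash-and-certificate scheme it describes resembles standard detached signing: the producer signs a hash of the content, and the reader recomputes the hash and verifies the signature. The sketch below, using Python’s hashlib and the cryptography package, is a generic illustration under that assumption; the function names are hypothetical, and this is not the actual Azure API.

```python
# Generic detached-signing sketch of the hash-and-certificate idea:
# any alteration of the content changes the hash, so verification fails.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec


def produce(content: bytes, private_key) -> dict:
    """Producer side: hash the content and sign the digest, yielding
    metadata that can travel with the content wherever it goes."""
    digest = hashlib.sha256(content).digest()
    signature = private_key.sign(digest, ec.ECDSA(hashes.SHA256()))
    return {"sha256": digest, "signature": signature}


def verify(content: bytes, metadata: dict, public_key) -> bool:
    """Reader side: recompute the hash, match it against the stored
    hash, and check the signature against the producer's public key."""
    digest = hashlib.sha256(content).digest()
    if digest != metadata["sha256"]:
        return False  # content was changed after signing
    try:
        public_key.verify(
            metadata["signature"], digest, ec.ECDSA(hashes.SHA256())
        )
        return True
    except InvalidSignature:
        return False


key = ec.generate_private_key(ec.SECP256R1())
meta = produce(b"original media bytes", key)
print(verify(b"original media bytes", meta, key.public_key()))  # True
print(verify(b"tampered media bytes", meta, key.public_key()))  # False
```

In a real deployment the public key would arrive inside a certificate identifying the producer, which is how the reader can also attest to who created the content.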

Microsoft, Partners Battle Deepfakes

Microsoft has also joined a blitz of collaborations, partnerships and initiatives with associated organizations to fight deepfakes and disinformation campaigns. For example, a partnership with the San Francisco-based AI Foundation will tie that organization’s Reality Defender 2020 (RD2020) initiative to Video Authenticator and offer the package to news outlets and political campaigns. Video Authenticator will initially be available only through RD2020.

In addition, Microsoft has partnered with a consortium of media companies including the BBC, CBC/Radio-Canada and the New York Times to test its authenticity technology, and with the University of Washington, Sensity and USA Today on media literacy.

Microsoft has additionally posted an interactive quiz for U.S. voters to learn about synthetic media. Ahead of the upcoming election, it has backed a public service announcement that advises people to make sure information comes from a reputable news organization before sharing it on social media.

“No single organization is going to be able to have meaningful impact on combating disinformation and harmful deepfakes,” Burt and Horvitz said. “We will do what we can to help, but the nature of the challenge requires that multiple technologies be widely adopted, that educational efforts reach consumers everywhere consistently and that we keep learning more about the challenge as it evolves.”

D. Howard Kass

D. Howard Kass is a contributing editor to MSSP Alert. He brings a career in journalism and market research to the role. He has served as CRN News Editor, Dataquest Channel Analyst, and West Coast Senior Contributing Editor at Channelnomics. As the CEO of The Viewpoint Group, he led groundbreaking market research.