Could This Technology Put an End to Deepfakes?

A new photo and video authentication system could help combat fake news and propaganda. Will it be ready for the 2020 election?

What Happened?

The Content Authenticity Initiative (CAI) released a white paper on August 3rd outlining a system for tracking edits and alterations to original photos. The CAI brings together researchers and representatives from major media and tech companies, including Adobe, The New York Times, and the BBC.

How Does It Work?

The proposed system would capture a user's original image and embed information within it, such as the location, the user's identity, and equipment details. This encrypted information would be managed and stored by a verified authority. Any edit made to an authenticated photo would be registered and saved as a separate claim with its own encryption, leaving the original image's record intact. Anyone retrieving the content would then have access to its full history of claims, edits, and alterations, from its creation to the displayed version.
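To make the chaining idea concrete, here is a minimal, hypothetical sketch in Python. It is not the CAI's actual schema or API: plain SHA-256 hashes stand in for the signed, authority-managed claims the white paper describes, and the names (`make_claim`, "Camera X", "Editor Y") are placeholders invented for illustration.

```python
import hashlib
import json
from datetime import datetime, timezone
from typing import Optional

def sha256_hex(data: bytes) -> str:
    """Content fingerprint: any change to the bytes changes the hash."""
    return hashlib.sha256(data).hexdigest()

def make_claim(asset_bytes: bytes, action: str, metadata: dict,
               prev_claim: Optional[dict] = None) -> dict:
    """Record a provenance claim for the current state of an asset.

    Each claim stores the asset's hash, details of the action, and a
    hash pointer to the previous claim, forming a verifiable chain.
    """
    return {
        "asset_hash": sha256_hex(asset_bytes),
        "action": action,                     # e.g. "captured", "cropped"
        "metadata": metadata,                 # location, identity, equipment
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prev_claim_hash": (
            sha256_hex(json.dumps(prev_claim, sort_keys=True).encode())
            if prev_claim else None
        ),
    }

# Original capture: metadata is bound to the image's fingerprint.
original = b"<raw image bytes>"
claim_0 = make_claim(original, "captured",
                     {"device": "Camera X", "creator": "alice"})

# An edit yields new bytes and a new claim chained to the previous one.
edited = b"<cropped image bytes>"
claim_1 = make_claim(edited, "cropped", {"tool": "Editor Y"},
                     prev_claim=claim_0)

# A viewer can walk the chain from the displayed version back to capture.
for claim in (claim_1, claim_0):
    print(claim["action"], "->", claim["asset_hash"][:12])
```

In the system the white paper proposes, each claim would additionally be digitally signed and managed by a verified authority, so a viewer could confirm not just the chain's integrity but also who made each edit.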

Why It Matters

Challenges Debunking Fake News

Deepfakes in the form of altered videos and photos have been used against celebrities and political figures for purposes both humorous and malicious. In 2019, Nancy Pelosi was the target of a viral deepfake video in which her speech was slurred to make her appear drunk. Although the original footage offered clear evidence of the manipulation, Donald Trump's retweet of the altered clip further entrenched its believability among his alt-right followers.

Current Deepfake Detection is Limited

Actors who create and circulate deepfakes have had a leg up on detection systems for years. Even the leading model from Facebook's Deepfake Detection Challenge identified manipulated videos with just 65.18% accuracy. The competition drew more than 2,000 participants, who tested their systems against a unique data set of more than 100,000 videos.

