Intel thinks it can kill off deepfakes for good

Tech giant Intel thinks it has a solution for the growing deepfake problem. 

Earlier this week, the company unveiled FakeCatcher, new software that takes a novel approach to deepfake video analysis. Intel claims it can spot deepfake videos with 96% accuracy. 

Like previous deepfake analysis solutions, this one leverages the power of machine learning. However, instead of scanning the video file for manipulation artifacts, FakeCatcher analyzes the subject itself, looking for biological signals that reveal whether the person on screen is an actual human being who was recorded at some point, or a synthetic product. 

(In)visible changes on the face

How does it achieve that? According to Intel Labs senior staff research scientist Ilke Demir, it checks whether the person in the video has a beating heart.

“When our hearts pump blood, our veins change color,” Intel’s report states. “These blood flow signals are collected from all over the face and algorithms translate these signals into spatiotemporal maps. Then, using deep learning, we can instantly detect whether a video is real or fake.” 

The method is known as photoplethysmography (PPG), an established technique for detecting blood volume changes in living tissue by measuring how much light its blood vessels absorb or reflect.
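To make the idea concrete, here is a minimal sketch (not Intel's implementation) of how a raw PPG signal can be pulled from video: average the green channel over a patch of facial skin in each frame, since hemoglobin absorbs green light most strongly and subtle blood-volume changes therefore show up best there. The frame shapes, region coordinates, and synthetic "pulse" below are illustrative assumptions.

```python
import numpy as np

def extract_ppg_signal(frames, roi):
    """Average the green channel over a face region for each frame.

    frames: array of shape (T, H, W, 3), RGB, values in [0, 255]
    roi:    (top, bottom, left, right) bounding box of a skin region
    Returns a zero-mean 1-D signal of length T -- a crude PPG proxy.
    """
    top, bottom, left, right = roi
    # Hemoglobin absorbs green light most strongly, so blood-volume
    # changes are most visible in the green channel.
    green = frames[:, top:bottom, left:right, 1].astype(float)
    signal = green.mean(axis=(1, 2))
    return signal - signal.mean()

# Synthetic demo: 64 noisy frames with a faint ~1.2 Hz brightness
# oscillation added to the "face" region, mimicking a pulse.
T, H, W = 64, 32, 32
rng = np.random.default_rng(0)
frames = rng.integers(90, 110, size=(T, H, W, 3)).astype(float)
pulse = 2.0 * np.sin(2 * np.pi * 1.2 * np.arange(T) / 30.0)  # 30 fps clip
frames[:, 8:24, 8:24, 1] += pulse[:, None, None]

sig = extract_ppg_signal(frames, (8, 24, 8, 24))
print(sig.shape)  # (64,)
```

Averaging over many pixels is what makes the invisible visible: per-pixel noise shrinks with the region size, so even a sub-unit color oscillation emerges clearly from the averaged signal.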

Speaking to VentureBeat, Demir said the color changes are invisible to the human eye, but not to a computer. “PPG signals have been known, but they have not been applied to the deepfake problem before.” 

She also explained that FakeCatcher gathers PPG signals from 32 different places on the face.

“We take those maps and train a convolutional neural network on top of the PPG maps to classify them as fake and real,” Demir said. “Then, thanks to Intel technologies like [the] Deep Learning Boost framework for inference and Advanced Vector Extensions 512, we can run it in real time and up to 72 concurrent detection streams.” 
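As a rough illustration of the pipeline Demir describes (again, not Intel's actual code), the per-region signals can be stacked into a 2-D spatiotemporal map, one row per face region over time; a convolutional network would then classify such maps as real or fake. The grid of regions below is a hypothetical stand-in for the 32 face locations.

```python
import numpy as np

def build_ppg_map(frames, rois):
    """Stack per-region PPG signals into a 2-D spatiotemporal map.

    frames: (T, H, W, 3) RGB video clip
    rois:   list of (top, bottom, left, right) face regions
    Returns an array of shape (len(rois), T): one row per region.
    """
    rows = []
    for top, bottom, left, right in rois:
        green = frames[:, top:bottom, left:right, 1].astype(float)
        sig = green.mean(axis=(1, 2))
        rows.append(sig - sig.mean())
    return np.stack(rows)

# Hypothetical demo: a 4x8 grid of 8x8-pixel patches stands in for
# the 32 face locations; a CNN classifier would consume these maps.
T, H, W = 64, 32, 64
rng = np.random.default_rng(1)
frames = rng.integers(90, 110, size=(T, H, W, 3)).astype(float)
rois = [(r * 8, r * 8 + 8, c * 8, c * 8 + 8)
        for r in range(4) for c in range(8)]
ppg_map = build_ppg_map(frames, rois)
print(ppg_map.shape)  # (32, 64)
```

The appeal of this representation is that a genuine face produces correlated, physiologically plausible rows, while a synthesized face tends not to, which is exactly the pattern a small image classifier can learn.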

Demir built FakeCatcher together with Umur Ciftci from the State University of New York at Binghamton. Deepfakes are a growing concern: as the barrier to entry lowers, creating highly convincing videos becomes ever easier.
