The deepfake detectors

Sophisticated deepfake scams present major challenges for the cybersecurity industry – as well as major opportunities.

Four billion people will vote in 2024, the biggest election year in history. Unfortunately, this milestone in suffrage coincides with the coming of age of artificial intelligence-powered deepfakes that could spread misinformation to sway voters. 

A deepfake is a form of synthetic media that reproduces a person’s likeness using an AI technique called deep learning. Deepfakes are produced by training a machine learning model on a large dataset of images or videos of a person, such as a political candidate or a company leader, so that it learns and can duplicate their unique features. The more data that is available, the more realistic the fake, which is why politicians and celebrities are among the most exposed. As the risks posed by deepfakes grow, they fuel demand for investment in advanced cybersecurity.

In Slovakia last year, a deepfake audio recording of an election candidate discussing ways to rig the vote was released 48 hours before the polls opened, leaving too little time to debunk it. Similarly, on Bangladesh’s election day in January, an AI-generated video circulated in which an independent candidate announced his withdrawal from the race; in the same month, US voters received robocalls imitating the voice of US president Joe Biden telling them not to vote in New Hampshire’s presidential primary. US political figures have recognised the need for investment: when USD400 million was allocated to election security in 2020, many state officials explicitly prioritised cybersecurity in their grant requests.

Deepfakes aren’t just entering the political fray. One survey found that 66 per cent of cybersecurity professionals saw deepfakes used as part of a cyberattack in 2022.[1] In the same year, 26 per cent of small companies and 38 per cent of large companies were targets of AI-generated identity fraud.[2] One business recently lost USD25.6 million due to a scam in which a fake video showed the company’s chief financial officer requesting funds from an employee.

All told, in just one year between 2019 and 2020, the amount of deepfake content online increased by 900 per cent.[3] Looking ahead, the vast majority of online content could be synthetically generated by 2026, making it ever harder to distinguish authentic material from fraudulent AI-generated content.

“The generative space is getting better and better every day, so the quality and realism of deepfakes is also getting better,” says Ilke Demir, a research scientist at Intel Studios. Companies are accelerating investment in cybersecurity technologies in order to get ahead of the bad actors, with a tenth of IT spending, on average, dedicated to cybersecurity in 2022.[4] This demand has created a USD2 trillion market opportunity for cybersecurity providers, according to consultancy McKinsey.

Deepfake defence

Still, the bad actors remain in front. “It’s a cat and mouse game, and at the moment, the amount of resources being poured into developing new forms of generative content is significantly higher than the amount being invested into developing robust detection techniques,” says Henry Ajder, an expert adviser on deepfakes and AI.  

Deepfake detection tools are continuously evolving and adapting to new contexts. Many, for example, have traditionally been trained on Western data, but police in South Korea have developed software trained on Korean data ahead of the country’s elections. The tool uses AI to compare suspect content with existing footage to determine whether it has been manipulated using deepfake techniques, and its developers claim an accuracy rate of 80 per cent.

Many deepfake detection tools take this approach, aiming to spot minor glitches in audio and video to identify them as manipulated. But one of the most effective techniques so far takes the opposite approach, deriving unique qualities of real footage that deepfakes cannot capture. Intel’s “FakeCatcher” is a leading tool that identifies fake footage of humans using photoplethysmography (PPG), a technique that detects changes in blood flow in the face. “When the heart pumps blood into the veins, the veins change colour,” says Ilke Demir, who devised the idea. “This is not visible to the naked eye, but it is computationally visible.”

The software collects PPG signals from everywhere on the face and creates maps from the spatial and temporal properties of those signals. Those maps are used to train a neural network to classify videos as real or fake. The benefit of this software, according to Dr Demir, is that PPG signals cannot be duplicated. “There is no approach that can plug it into a generative model and try to learn it,” she says. “That’s why FakeCatcher is so robust.”
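To illustrate the general idea in code (and only the idea: this is a minimal sketch on synthetic data, not Intel’s FakeCatcher), the Python example below, using NumPy and scikit-learn, extracts a heart-rate-like periodic signal from colour readings across face regions and trains a simple classifier on its spectral signature. The function and variable names are invented for the illustration.

```python
# Minimal illustrative sketch of a PPG-style detector on synthetic data.
# NOT Intel's FakeCatcher: the regions, signals and model are all invented here.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

RNG = np.random.default_rng(0)
N_REGIONS, N_FRAMES, FPS = 16, 150, 30   # 16 face patches, 5 seconds of 30fps video

def synthetic_clip(is_real: bool) -> np.ndarray:
    """Return a (regions x frames) map of green-channel intensities for one clip."""
    t = np.arange(N_FRAMES) / FPS
    noise = RNG.normal(0.0, 1.0, (N_REGIONS, N_FRAMES))
    if is_real:
        pulse = 0.5 * np.sin(2 * np.pi * 1.2 * t)   # ~72 bpm heartbeat
        return noise + pulse                        # real skin: coherent pulse in every region
    return noise                                    # fake: no coherent pulse

def features(clip: np.ndarray) -> np.ndarray:
    """Spectral energy per region in the typical heart-rate band (42-180 bpm)."""
    spectrum = np.abs(np.fft.rfft(clip, axis=1))
    freqs = np.fft.rfftfreq(N_FRAMES, d=1.0 / FPS)
    band = (freqs > 0.7) & (freqs < 3.0)
    return spectrum[:, band].mean(axis=1)           # one value per face region

labels = np.tile([0, 1], 200)                       # 0 = fake, 1 = real
X = np.array([features(synthetic_clip(bool(y))) for y in labels])
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
print(f"toy accuracy: {clf.score(X_test, y_test):.2f}")
```

In a production system the colour readings would be extracted from tracked facial regions in real video, and the classifier would be a neural network trained on large labelled datasets, as Dr Demir describes.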

FakeCatcher has a 96 per cent accuracy rate. The British broadcaster the BBC tested the technology on both fake and real footage and found that it correctly identified all but one of the fake videos. However, it also delivered some false positives, flagging real videos as fakes because of fuzzy pixelation or because they were filmed side-on. “We try to give as many interpretable results as possible,” says Dr Demir. “The system user should make the end decision by looking at the different signals.”

Ajder says that each deepfake detection approach has pros and cons. He sees promise in the initiative of the Coalition for Content Provenance and Authenticity (C2PA), effectively a digital signature for media, which has industry support from the likes of Google, Intel and Microsoft. “There are technologies that can attach, in a cryptographically secure manner, metadata that provides transparency about how a piece of media was created and what tools were used,” he says. “That’s by far and away the best approach for security purposes, but it’s a really ambitious task.”
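To make the concept concrete, the Python sketch below shows how a cryptographic signature can bind a small provenance manifest to a media file, so that any later tampering with either is detectable. It is a simplified illustration only: the manifest fields and tool name are invented, and the real C2PA specification defines its own manifest format, certificate chains and embedding rules. It assumes the third-party cryptography package.

```python
# Illustrative sketch of signed provenance metadata: the general idea behind
# content credentials, NOT the actual C2PA manifest format or trust model.
# Requires the third-party "cryptography" package; manifest fields are invented.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# 1. The creating tool (e.g. a camera or editor) signs a manifest describing the media.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

media_bytes = b"...raw image or video bytes..."          # stand-in for a real file
manifest = {
    "content_hash": hashlib.sha256(media_bytes).hexdigest(),
    "created_with": "ExampleCam 1.0",                    # hypothetical tool name
    "edits": ["crop", "colour-balance"],
}
payload = json.dumps(manifest, sort_keys=True).encode()
signature = private_key.sign(payload)

# 2. A consumer later checks that the manifest is intact and matches the media.
try:
    public_key.verify(signature, payload)                # raises if payload was altered
    if hashlib.sha256(media_bytes).hexdigest() != manifest["content_hash"]:
        raise ValueError("media does not match the signed manifest")
    print("provenance metadata verified")
except (InvalidSignature, ValueError):
    print("credentials missing, altered, or do not match the media")
```

In practice the verifying party would also need to confirm that the signing key belongs to a trusted issuer, which is where the broad industry cooperation Ajder describes becomes essential.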

C2PA is creating a standard for attaching content credentials. “There’s definitely an increasing push for these technologies,” says Ajder. “If there isn’t broad cooperation and alignment, particularly amongst the big companies, we’re going to end up with too many different authentication products and platforms, making consumers’ lives harder.”

Moreover, while this approach has benefits, it could pose ethical issues regarding privacy, with concerns that the standard might reveal too much data about an image’s provenance, such as the location or time it was taken. This could be a risk where, say, a human rights organisation is operating.

Ajder adds that technology alone can never fully solve the deepfake problem and all consumers will need to adopt a more critical outlook towards the media they consume. “We need to change our behaviour as consumers, instead of trusting anything we see, and get to a place where we’re looking for content credentials and an extra layer of authentication,” he says. “That’s no small feat.”

[1] VMware Inc, “Global incident response threat report”, 2022
[2] Regula, “The state of identity verification in 2023”
[3] Sentinel, “Deepfakes 2020: the tipping point”
[4] IANS Research and Artico Search, “Security budget benchmark summary report 2022”