Why deepfake phishing is a disaster waiting to happen
With deepfake phishing on the rise, organizations need to be on the lookout for threat actors who are exploiting AI to impersonate CEOs.
Everything isn't always as it seems. As AI technology has advanced, individuals have exploited it to distort reality. They've created synthetic images and videos of everyone from Tom Cruise and Mark Zuckerberg to President Obama. While many of these use cases are innocuous, other applications, like deepfake phishing, are far more nefarious.
A wave of threat actors is exploiting AI to generate synthetic audio, image and video content that's designed to impersonate trusted individuals, such as CEOs and other executives, and trick employees into handing over information.
Yet most organizations simply aren't prepared to address these types of threats. Back in 2021, Gartner analyst Darin Stewart wrote a blog post warning that "while companies are scrambling to defend against ransomware attacks, they are doing nothing to prepare for an imminent onslaught of synthetic media."
With AI rapidly advancing, and providers like OpenAI democratizing access to AI and machine learning via new tools like ChatGPT, organizations can't afford to ignore the social engineering threat posed by deepfakes. If they do, they will leave themselves vulnerable to data breaches.
The state of deepfake phishing in 2022 and beyond
While deepfake technology remains in its infancy, it's growing in popularity. Cybercriminals are already starting to experiment with it to launch attacks on unsuspecting users and organizations.
According to the World Economic Forum (WEF), the number of deepfake videos online is increasing at an annual rate of 900%. At the same time, VMware finds that two out of three defenders report seeing malicious deepfakes used as part of an attack, a 13% increase from last year.
These attacks can be devastatingly effective. For instance, in 2021, cybercriminals used AI voice cloning to impersonate the CEO of a large company and tricked the organization's bank manager into transferring $35 million to another account to complete an "acquisition."
A similar incident occurred in 2019, when a fraudster used AI voice cloning to impersonate the chief executive of a UK energy firm's German parent company, calling the firm's CEO to request an urgent transfer of $243,000 to a Hungarian supplier.
Many analysts predict that the uptick in deepfake phishing will continue, and that the false content threat actors produce will only become more sophisticated and convincing.
"As deepfake technology matures, [attacks using deepfakes] are expected to become more common and expand into newer scams," said KPMG analyst Akhilesh Tuteja.
"They are increasingly becoming indistinguishable from reality. It was easy to tell deepfake videos two years ago, as they had a clunky [movement] quality and … the faked person never seemed to blink. But it's becoming harder and harder to distinguish it now," Tuteja said.
Tuteja suggests that security leaders need to prepare for fraudsters using synthetic images and video to bypass authentication systems, such as biometric logins.
How deepfakes mimic individuals and may bypass biometric authentication
To execute a deepfake phishing attack, hackers use AI and machine learning to process a range of content, including images, videos and audio clips. With this data, they create a digital imitation of an individual.
"Bad actors can easily make autoencoders, a kind of advanced neural network, to watch videos, study images, and listen to recordings of individuals to mimic that individual's physical attributes," said David Mahdi, a CSO and CISO advisor at Sectigo.
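To make the autoencoder idea concrete, here is a minimal, hypothetical sketch in PyTorch. The architecture, image size and latent dimension are illustrative assumptions for demonstration, not the design of any particular deepfake tool.

```python
# A minimal autoencoder sketch in PyTorch. The architecture, image size
# and training details are illustrative assumptions, not a recipe from
# any specific deepfake tool.
import torch
import torch.nn as nn

class FaceAutoencoder(nn.Module):
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        # Encoder: compress a 64x64 RGB face crop into a small latent vector.
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * 64 * 64, 512), nn.ReLU(),
            nn.Linear(512, latent_dim),
        )
        # Decoder: reconstruct the face from the latent vector.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, 3 * 64 * 64), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.encoder(x)
        out = self.decoder(z)
        return out.view(-1, 3, 64, 64)

model = FaceAutoencoder()
faces = torch.rand(8, 3, 64, 64)  # stand-in for a batch of face crops
loss = nn.functional.mse_loss(model(faces), faces)  # reconstruction objective
```

Classic face-swap pipelines extend this idea by training one shared encoder with a separate decoder per identity, so that person A's expression, once encoded into the latent space, can be decoded as person B's face.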
One of the best examples of this approach occurred earlier this year. Hackers generated a deepfake hologram of Patrick Hillmann, the chief communications officer at Binance, by taking content from past interviews and media appearances.
With this approach, threat actors can not only mimic an individual's physical attributes to fool human users via social engineering, but also defeat biometric authentication solutions.
For this reason, Gartner analyst Avivah Litan recommends organizations "don't rely on biometric certification for user authentication applications unless it uses effective deepfake detection that assures user liveness and legitimacy."
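Liveness checks of the kind Litan describes often take a challenge-response form: the system asks the user to perform an unpredictable sequence of actions and verifies they occur in order, on time. The sketch below is a minimal, hypothetical illustration; the action vocabulary, time window and matching rule are assumptions, and production liveness detection is considerably more involved.

```python
# A minimal challenge-response liveness sketch. The action vocabulary,
# time window and matching rule are illustrative assumptions.
import secrets
import time

ACTIONS = ["blink", "turn_left", "turn_right", "nod", "smile"]

def issue_challenge(length: int = 3) -> list[str]:
    """Server side: pick an unpredictable action sequence the user must perform."""
    return [secrets.choice(ACTIONS) for _ in range(length)]

def verify_liveness(challenge: list[str],
                    observed: list[tuple[str, float]],
                    issued_at: float,
                    max_seconds: float = 10.0) -> bool:
    """Check that the observed (action, timestamp) stream contains the
    challenge actions in order, within the allowed window after issuance."""
    idx = 0
    for action, ts in observed:
        if ts - issued_at > max_seconds:
            return False
        if action == challenge[idx]:
            idx += 1
            if idx == len(challenge):
                return True
    return False

challenge = issue_challenge()
issued = time.time()
# `observed` would come from a video-analysis pipeline (hypothetical here).
observed = [(a, issued + i) for i, a in enumerate(challenge, start=1)]
print(verify_liveness(challenge, observed, issued))  # True
```

The key property is unpredictability: a pre-rendered deepfake clip cannot respond to a challenge it has never seen.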
Litan also notes that detecting these types of attacks is likely to become more difficult over time as the underlying AI advances and learns to create more compelling audio and visual representations.
"Deepfake detection is a losing proposition, because the deepfakes created by the generative network are evaluated by a discriminative network," Litan said. Litan explains that the generator aims to create content that fools the discriminator, while the discriminator continually improves to detect artificial content.
The problem is that as the discriminator's accuracy increases, cybercriminals can apply insights from this to the generator to produce content that's harder to detect.
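For readers who want to see this dynamic concretely, here is a compressed, illustrative training-loop sketch in PyTorch. The network sizes, learning rates and random stand-in data are assumptions for demonstration, not a working deepfake pipeline.

```python
# A compressed GAN training-loop sketch illustrating the generator/
# discriminator arms race described above. Sizes and data are placeholders.
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 3 * 64 * 64
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, data_dim), nn.Tanh())      # generator
D = nn.Sequential(nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())          # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

real = torch.rand(32, data_dim) * 2 - 1  # stand-in for real face images
for step in range(100):
    # Discriminator step: learn to separate real from generated samples.
    fake = G(torch.randn(32, latent_dim)).detach()
    d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: produce samples the improved discriminator calls real.
    fake = G(torch.randn(32, latent_dim))
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

In this loop, every improvement to the discriminator becomes the exact training signal the generator optimizes against, which is why Litan characterizes detection as a losing proposition.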
The role of security awareness training
One of the simplest ways that organizations can address deepfake phishing is through the use of security awareness training. While no amount of training will prevent all employees from ever being taken in by a highly sophisticated phishing attempt, it can decrease the likelihood of security incidents and breaches.
"The best way to address deepfake phishing is to integrate this threat into security awareness training. Just as users are taught to avoid clicking on web links, they should receive similar training about deepfake phishing," said ESG Global analyst John Oltsik.
Part of that training should include a process to report phishing attempts to the security team.
In terms of training content, the FBI suggests that users can learn to identify deepfake spear phishing and social engineering attacks by looking out for visual indicators such as distortion, warping or inconsistencies in images and video.
Teaching users how to identify common red flags, such as multiple images featuring consistent eye spacing and placement, or syncing problems between lip movement and audio, can help prevent them from falling prey to a skilled attacker.
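Some of these cues can even be screened for programmatically. The sketch below shows the eye-aspect-ratio (EAR) blink heuristic often used in liveness and deepfake research; it assumes eye landmarks have already been extracted by a face-landmark library, and the 0.2 threshold is a rule-of-thumb assumption rather than a tuned value.

```python
# Minimal eye-aspect-ratio (EAR) blink heuristic (numpy only). Assumes six
# eye landmarks per frame, ordered as in the common 68-point face-landmark
# scheme; the 0.2 threshold is a rule-of-thumb assumption.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: (6, 2) array of landmark (x, y) points around one eye."""
    v1 = np.linalg.norm(eye[1] - eye[5])  # vertical distance 1
    v2 = np.linalg.norm(eye[2] - eye[4])  # vertical distance 2
    h = np.linalg.norm(eye[0] - eye[3])   # horizontal distance
    return (v1 + v2) / (2.0 * h)

def count_blinks(ear_series: list[float], threshold: float = 0.2) -> int:
    """Count closed-open transitions; an unnaturally low blink count over a
    long clip is one (weak) signal of synthetic video."""
    blinks, closed = 0, False
    for ear in ear_series:
        if ear < threshold and not closed:
            closed = True
        elif ear >= threshold and closed:
            blinks += 1
            closed = False
    return blinks
```

As Tuteja noted above, modern deepfakes blink far more plausibly than early ones, so heuristics like this should be treated as supporting signals rather than proof.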
Fighting adversarial AI with defensive AI
Organizations can also attempt to address deepfake phishing using AI. Generative adversarial networks (GANs), a type of deep learning model, can produce synthetic datasets and generate mock social engineering attacks.
"A strong CISO can rely on AI tools, for example, to detect fakes. Organizations can also use GANs to generate possible types of cyberattacks that criminals have not yet deployed, and devise ways to counteract them before they occur," said Liz Grennan, expert associate partner at McKinsey.
However, organizations that take these paths need to be prepared to put the time in, as cybercriminals can also use these capabilities to innovate new attack types.
"Of course, criminals can use GANs to create new attacks, so it's up to businesses to stay one step ahead," Grennan said.
Above all, enterprises need to be prepared. Organizations that don't take the threat of deepfake phishing seriously will leave themselves vulnerable to a threat vector that has the potential to explode in popularity as AI becomes democratized and more accessible to malicious entities.