Securing the AI frontier: Protecting enterprise systems against AI-driven threats
Weaponized AI attacks targeting identities, unseen and often the most costly to recover from, pose the greatest threat to enterprises.
By 2025, weaponized AI attacks targeting identities, unseen and often the most costly to recover from, will pose the greatest threat to enterprise cybersecurity. Large language models (LLMs) are the new power tool of choice for rogue attackers, cybercrime syndicates and nation-state attack teams.
A recent survey found that 84% of IT and security leaders say phishing and smishing attacks built with AI-powered tradecraft are increasingly complex to identify and stop. As a result, 51% of security leaders are prioritizing AI-driven attacks as the most severe threat facing their organizations. While the vast majority of security leaders, 77%, are confident they know the best practices for AI security, just 35% believe their organizations are prepared today to combat weaponized AI attacks, which are expected to increase significantly in 2025.
In 2025, CISOs and security teams will be more challenged than ever to identify and stop the accelerating pace of adversarial AI-based attacks, which are already outpacing the most advanced forms of AI-based security. 2025 will be the year AI earns its role as the technological table stakes needed to provide real-time threat and endpoint monitoring, reduce alert fatigue for security operations center (SOC) analysts, automate patch management and identify deepfakes with greater accuracy, speed and scale than has been possible before.
Adversarial AI: Deepfakes and synthetic fraud surge
Deepfakes already lead all other forms of adversarial AI attacks. They cost global businesses $12.3 billion in 2023, a figure predicted to soar to $40 billion by 2027, growing at a 32% compound annual growth rate. Attackers across the spectrum, from rogue individuals to well-financed nation-state teams, are relentless in improving their tradecraft, capitalizing on the latest AI apps, video editing and audio techniques. Deepfake incidents are predicted to increase by 50 to 60% in 2024, reaching 140,000-150,000 cases globally.
Deloitte says deepfake attackers prefer to go after banking and financial services targets first, as both industries are known to be soft targets for synthetic identity fraud. Deepfakes were involved in nearly 20% of synthetic identity fraud cases last year. Synthetic identity fraud is among the most difficult attack types to identify and stop, and it is on pace to defraud financial and commerce systems of nearly $5 billion this year alone. Of the many potential approaches to stopping synthetic identity fraud, five are proving the most effective.
With the growing threat of synthetic identity fraud, businesses are increasingly focusing on the onboarding process as a pivotal point in verifying customer identities and preventing fraud. As Telesign CEO Christophe Van de Weyer explained to VentureBeat in a recent interview, "Companies must protect the identities, credentials and personally identifiable information (PII) of their customers, especially during registration." The 2024 Telesign Trust Index highlights how generative AI has supercharged phishing attacks, with data showing a 1265% increase in malicious phishing messages and a 967% rise in credential phishing within 12 months of ChatGPT's launch.
Weaponized AI is the new normal, and organizations aren't ready
"We've been saying for a while that things like the cloud and identity and remote management tools and legitimate credentials are where the adversary has been moving because it's too hard to operate unconstrained on the endpoint," Elia Zaitsev, CTO at CrowdStrike, told VentureBeat in a recent interview.
"The adversary is getting faster, and leveraging AI technology is a part of that. Leveraging automation is also a part of that, but entering these new security domains is another significant factor, and that's made not only modern attackers but also modern attack campaigns much quicker," Zaitsev said.
Generative AI has become rocket fuel for adversarial AI. Within weeks of OpenAI launching ChatGPT in November 2022, rogue attackers and cybercrime gangs launched gen AI-based subscription attack services. FraudGPT is among the most well-known, claiming at one point to have 3,000 subscribers.
While new adversarial AI apps, tools, platforms and tradecraft flourish, most organizations aren't ready.
Today, one in three organizations admits that they don't have a documented strategy to take on gen AI and adversarial AI risks. CISOs and IT leaders admit they're not ready for AI-driven identity attacks. Ivanti's recent 2024 State of Cybersecurity Report finds that 74% of businesses are already seeing the impact of AI-powered threats. Nine in ten executives, 89%, believe that AI-powered threats are just getting started. What's noteworthy about the research is the wide gap it uncovered between most organizations' lack of readiness to protect against adversarial AI attacks and the imminent threat of being targeted by one.
Six in ten security leaders say their organizations aren't ready to withstand AI-powered threats and attacks today. The four most common threats security leaders experienced this year include phishing, software vulnerabilities, ransomware attacks and API-related vulnerabilities. With ChatGPT and other gen AI tools making many of these threats low-cost to produce, adversarial AI attacks show all signs of skyrocketing in 2025.
Defending enterprises from AI-driven threats
Attackers use a combination of gen AI, social engineering and AI-based tools to create ransomware that's difficult to identify. They breach networks and laterally move to core systems, starting with Active Directory.
Attackers gain control of a company by locking its identity access privileges and revoking admin rights after installing malicious ransomware code throughout its network. Gen AI-based code, phishing emails and bots are also used throughout an attack.
Here are a few of the many ways organizations can fight back and defend themselves from AI-driven threats:
- Clean up access privileges immediately and delete former employees, contractors and temporary admin accounts: Start by revoking outdated access for former contractors, sales, service and support partners. Doing this reduces the trust gaps that attackers exploit, and that they increasingly try to discover with AI-automated attacks. Consider it table stakes to have multi-factor authentication (MFA) applied to all valid accounts to reduce credential-based attacks. Be sure to implement regular access reviews and automated de-provisioning processes to maintain a clean access environment (see the first sketch after this list).
- Enforce zero trust on endpoints and attack surfaces, assuming they have already been breached and need to be segmented immediately. One of the most valuable aspects of a zero-trust framework is the assumption that your network has already been breached and needs to be contained. With AI-driven attacks increasing, it's wise to treat every endpoint as a vulnerable attack vector and enforce segmentation to contain any intrusion. For more on zero trust, be sure to check out NIST SP 800-207.
- Get in control of machine identities and governance now. Machine identities (bots, IoT devices and more) are growing faster than human identities, creating unmanaged risks, and automated AI-driven attacks are already being used to find and breach the many forms of machine identities most enterprises have. AI-driven governance for machine identities is crucial to prevent AI-driven breaches. Automating identity management and maintaining strict policies ensures control over this expanding attack surface.
- If your company has an identity and access management (IAM) system, strengthen it across multicloud configurations. AI-driven attacks look to capitalize on disconnects between IAM systems and cloud configurations, because many companies rely on a single IAM for a given cloud platform. That leaves gaps between platforms such as AWS, Google Cloud Platform and Microsoft Azure. Evaluate your cloud IAM configurations to ensure they meet evolving security needs and effectively counter adversarial AI attacks, and implement cloud security posture management (CSPM) tools to continuously assess and remediate misconfigurations (see the second sketch after this list).
- Go all in on real-time infrastructure monitoring: AI-enhanced monitoring is critical for detecting anomalies and breaches in real time, offering insight into security posture and proving effective at identifying new threats, including those that are AI-driven. Continuous monitoring allows for immediate policy adjustment and helps enforce the core concepts of zero trust that, taken together, can contain an AI-driven breach attempt (see the third sketch after this list).
- Make red teaming and risk assessment part of the organization's muscle memory or DNA. Don't settle for doing red teaming on a sporadic schedule, or worse, only when an attack triggers a renewed sense of urgency and vigilance. Red teaming needs to be part of the DNA of any DevSecOps team supporting MLOps from now on. The goal is to preemptively identify system and pipeline weaknesses and to prioritize and harden any attack vectors that surface as part of MLOps' system development lifecycle (SDLC) workflows.
- Stay current and adopt the defensive framework for AI that works best for your organization. Have a member of the DevSecOps team stay current on the many defensive frameworks available today. Knowing which one best fits an organization's goals can help secure MLOps, saving time and protecting the broader SDLC and CI/CD pipeline in the process. Examples include the NIST AI Risk Management Framework and the OWASP AI Security and Privacy Guide.
- Reduce the threat of synthetic data-based attacks by integrating biometric modalities and passwordless authentication techniques into every identity access management system. VentureBeat has learned that attackers increasingly rely on synthetic data to impersonate identities and gain access to source code and model repositories. Consider using a combination of biometric modalities, including facial recognition, fingerprint scanning and voice recognition, combined with passwordless access technologies to secure systems used across MLOps.
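To make the first recommendation concrete, here is a minimal sketch of an automated access review, assuming an AWS environment with boto3 credentials already configured. The 90-day cutoff is illustrative, and a real de-provisioning workflow would also cover access keys, SSO accounts and third-party directories.

```python
# Minimal sketch: flag stale AWS IAM users for de-provisioning review.
# Assumes boto3 credentials are configured; the 90-day cutoff is illustrative.
from datetime import datetime, timedelta, timezone

import boto3

STALE_AFTER = timedelta(days=90)

def find_stale_users():
    iam = boto3.client("iam")
    cutoff = datetime.now(timezone.utc) - STALE_AFTER
    stale = []
    # Paginate through every IAM user in the account
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            # PasswordLastUsed is absent when the user has never signed in,
            # so fall back to the account's creation date
            last_used = user.get("PasswordLastUsed", user["CreateDate"])
            if last_used < cutoff:
                stale.append(user["UserName"])
    return stale

if __name__ == "__main__":
    for name in find_stale_users():
        print(f"Review for de-provisioning: {name}")
```

Running a report like this on a schedule, rather than ad hoc, is what turns access cleanup from a one-time project into the continuous review the recommendation calls for.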
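The multicloud IAM recommendation can be approximated in code as well. The sketch below, again assuming AWS and boto3, flags customer-managed policies whose statements allow every action, one of the basic misconfigurations a CSPM tool checks for continuously; commercial CSPM platforms evaluate far broader rule sets across providers.

```python
# Minimal sketch: flag customer-managed IAM policies that allow every action,
# one basic check a CSPM tool would run continuously.
import json
from urllib.parse import unquote

import boto3

def find_wildcard_policies():
    iam = boto3.client("iam")
    findings = []
    # Scope="Local" restricts the scan to customer-managed policies
    for page in iam.get_paginator("list_policies").paginate(Scope="Local"):
        for policy in page["Policies"]:
            version = iam.get_policy_version(
                PolicyArn=policy["Arn"],
                VersionId=policy["DefaultVersionId"],
            )
            doc = version["PolicyVersion"]["Document"]
            if isinstance(doc, str):  # handle URL-encoded JSON documents
                doc = json.loads(unquote(doc))
            statements = doc.get("Statement", [])
            if isinstance(statements, dict):  # single-statement policies
                statements = [statements]
            for stmt in statements:
                actions = stmt.get("Action", [])
                actions = [actions] if isinstance(actions, str) else actions
                if stmt.get("Effect") == "Allow" and "*" in actions:
                    findings.append(policy["PolicyName"])
    return findings

if __name__ == "__main__":
    for name in find_wildcard_policies():
        print(f"Over-broad policy to review: {name}")
```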
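Finally, the real-time monitoring recommendation rests on a simple idea: baseline normal behavior and flag deviations the moment they happen. The sketch below illustrates it with a rolling statistical baseline over per-minute authentication failures; the event counts, window size and three-sigma threshold are all illustrative, and a production system would feed this from a SIEM or endpoint telemetry.

```python
# Minimal sketch: flag authentication-failure spikes against a rolling baseline.
# Window size, threshold and sample counts are illustrative.
from collections import deque
from statistics import mean, stdev

class FailedLoginMonitor:
    def __init__(self, window=60, sigma=3.0):
        self.counts = deque(maxlen=window)  # failures per minute, rolling window
        self.sigma = sigma

    def observe(self, failures_this_minute):
        """Return True when the new count is anomalous against the baseline."""
        anomalous = False
        if len(self.counts) >= 10:  # wait for a minimal baseline first
            baseline, spread = mean(self.counts), stdev(self.counts)
            anomalous = failures_this_minute > baseline + self.sigma * max(spread, 1.0)
        self.counts.append(failures_this_minute)
        return anomalous

# Illustrative stream: steady background noise, then a credential-stuffing burst
monitor = FailedLoginMonitor()
for minute, count in enumerate([4, 5, 3, 6, 4, 5, 4, 3, 5, 4, 6, 48]):
    if monitor.observe(count):
        print(f"Minute {minute}: {count} failures looks anomalous; alert the SOC")
```

Catching the deviation in the same minute it occurs is what gives a SOC the chance to adjust policy and segment the affected endpoint before an intrusion spreads.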
Acknowledging breach potential is key
By 2025, adversarial AI techniques are expected to advance faster than many organizations' existing approaches to securing endpoints, identities and infrastructure can keep up. The answer isn't necessarily spending more; it's about finding ways to extend and harden existing systems to stretch budgets and boost protection against the anticipated onslaught of AI-driven attacks coming in 2025. Start with zero trust and see how the NIST framework can be tailored to your business. See AI as an accelerator that can help improve continuous monitoring, harden endpoint security, automate patch management at scale and more. AI's ability to strengthen zero-trust frameworks is proven, and its contribution will become even more pronounced in 2025 as its innate strengths, including enforcing least privileged access, delivering microsegmentation and protecting identities, continue to grow.
Going into 2025, every security and IT team needs to treat endpoints as already compromised and focus on new ways to segment them. They also need to minimize vulnerabilities at the identity level, a common entry point for AI-driven attacks. While these threats are increasing, no amount of spending alone will solve them. Practical approaches that acknowledge the ease with which endpoints and perimeters are breached must be at the core of any plan. The threat landscape of 2025 will make clear that cybersecurity is among the most critical business decisions a company has to make.