Zscaler finds enterprise AI adoption soars 600% in less than a year, putting data at risk




Enterprises’ reliance on AI/machine learning (ML) tools is surging nearly 600%, climbing from 521 million transactions in April 2023 to 3.1 billion monthly by January 2024. Heightened concerns about security have led enterprises to block 18.5% of all AI/ML transactions, a 577% increase in just nine months.

CISOs and the enterprises they protect have good reason to be cautious and block a record number of AI/ML transactions. Attackers have fine-tuned their tradecraft and are now weaponizing LLMs to attack organizations without their knowledge. Adversarial AI is also a growing threat, one that few organizations see coming.

Zscaler’s ThreatLabz 2024 AI Security Report, published today, quantifies why enterprises need a scalable cybersecurity strategy to protect the many AI/ML tools they are onboarding. Data protection, managing the quality of AI data and privacy concerns dominate the survey’s results. ThreatLabz analyzed how enterprises are using AI and ML tools today based on more than 18 billion transactions across the Zscaler Zero Trust Exchange between April 2023 and January 2024.

The adoption of AI/ML tools across healthcare, finance and insurance, services, technology and manufacturing, set against those industries’ exposure to cyberattacks, provides a sobering look at how unprepared they are for AI-based attacks. Manufacturing generates the most AI traffic, accounting for 20.9% of all AI/ML transactions, followed by finance and insurance (19.9%) and services (16.8%).


Blocking transactions is a quick, temporary win 

CISOs and their security teams are choosing to block a record number of AI/ML tool transactions to protect against potential cyberattacks. It’s a brute-force move that protects the most vulnerable industries from an onslaught of cyberattacks.

ChatGPT is the most used and blocked AI tool today, followed by OpenAI, Fraud.net, Forethought, and Hugging Face. The most blocked domains are Bing.com, Divo.ai, Drift.com, and Quillbot.com.
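As a rough illustration of what this enforcement looks like at the proxy or secure web gateway layer, the Python sketch below checks outbound requests against an AI-domain blocklist. The domain list echoes the report’s examples; the function and policy structure are hypothetical, not any vendor’s actual API.

```python
# Minimal sketch of AI/ML domain blocking at a forward proxy.
# The blocklist entries mirror domains named in the report; the
# policy logic itself is illustrative, not a vendor implementation.

BLOCKED_AI_DOMAINS = {
    "bing.com",
    "divo.ai",
    "drift.com",
    "quillbot.com",
}

def is_blocked(host: str) -> bool:
    """Return True if the host or any parent domain is on the blocklist."""
    host = host.lower().rstrip(".")
    parts = host.split(".")
    # Check "a.b.c.com", then "b.c.com", then "c.com", so that
    # subdomains of a blocked domain are blocked too.
    for i in range(len(parts) - 1):
        if ".".join(parts[i:]) in BLOCKED_AI_DOMAINS:
            return True
    return False

if __name__ == "__main__":
    for host in ("chat.quillbot.com", "drift.com", "example.com"):
        verdict = "BLOCK" if is_blocked(host) else "ALLOW"
        print(f"{verdict}\t{host}")
```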

Between April 2023 and January 2024, enterprises blocked more than 2.6 billion AI/ML transactions. Credit: Zscaler

Manufacturing blocks only 15.65% of AI transactions, a low figure given how exposed the industry is to cyberattacks, especially ransomware. The finance and insurance sector blocks the largest proportion of AI transactions at 37.16%, indicating heightened concerns about data security and privacy risks. It’s concerning that healthcare blocks a below-average 17.23% of AI transactions despite processing sensitive health data and personally identifiable information (PII), suggesting the industry may be lagging in efforts to protect data flowing into AI tools.

Causing chaos in time- and life-sensitive businesses like healthcare and manufacturing leads to ransomware payouts at multiples far above those of other industries. The recent ransomware attack on UnitedHealth Group’s Change Healthcare unit is an example of how an orchestrated attack can take down an entire supply chain.

Blocking is a short-term solution to a much larger problem  

Making better use of all available telemetry and deciphering the massive amount of data cybersecurity platforms are capable of capturing is a first step beyond blocking. CrowdStrike, Palo Alto Networks and Zscaler promote their ability to gain new insights from telemetry. 

CrowdStrike co-founder and CEO George Kurtz told the keynote audience at the company’s annual Fal.Con event last year: “One of the areas that we’ve really pioneered is that we can take weak signals from across different endpoints. And we can link these together to find novel detections. We’re now extending that to our third-party partners so that we can look at other weak signals across not only endpoints but across domains and come up with a novel detection.”
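Kurtz is describing signal correlation: individually low-confidence events that, combined across sources, cross a detection threshold. The Python sketch below shows the shape of that idea under simple assumptions (per-signal weights, an additive score, a fixed threshold). It is a toy model, not CrowdStrike’s detection logic.

```python
# Toy model of weak-signal correlation: individually ignorable events
# combine into a detection once their aggregate score crosses a threshold.
# Weights, signal kinds and the threshold are illustrative assumptions.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Signal:
    host: str      # endpoint the signal came from
    kind: str      # e.g. "odd_parent_process", "rare_domain_lookup"
    weight: float  # low individual confidence

DETECTION_THRESHOLD = 0.8  # assumed tuning value

def correlate(signals: list[Signal]) -> list[str]:
    """Return hosts whose combined weak-signal score crosses the threshold."""
    scores: dict[str, float] = defaultdict(float)
    for s in signals:
        scores[s.host] += s.weight
    return [host for host, score in scores.items() if score >= DETECTION_THRESHOLD]

if __name__ == "__main__":
    events = [
        Signal("host-17", "odd_parent_process", 0.3),
        Signal("host-17", "rare_domain_lookup", 0.3),
        Signal("host-17", "new_scheduled_task", 0.3),  # none alone is alarming
        Signal("host-42", "rare_domain_lookup", 0.3),  # a single weak signal
    ]
    print("novel detections:", correlate(events))  # -> ['host-17']
```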

Leading cybersecurity vendors with deep expertise in AI, and in many cases decades of experience in ML, include BlackBerry Persona, Broadcom, Cisco Security, CrowdStrike, CyberArk, Cybereason, Ivanti, SentinelOne, Microsoft, McAfee, Sophos and VMware Carbon Black. Look for these vendors to train their LLMs on AI-driven attack data in an attempt to stay at parity with attackers’ accelerating use of adversarial AI.

A new, more lethal AI threatscape is here  

“For enterprises, AI-driven risks and threats fall into two broad categories: the data protection and security risks involved with enabling enterprise AI tools and the risks of a new cyber threat landscape driven by generative AI tools and automation,” says Zscaler in the report.

CISOs and their teams face a formidable challenge defending their organizations against the onslaught of AI attack techniques briefly profiled in the report. Protecting against employee negligence when using ChatGPT, and ensuring confidential data isn’t accidentally shared, should be a topic for the board of directors. Boards should prioritize risk management as core to their cybersecurity strategies.

Protecting intellectual property from leaking out of an organization through ChatGPT, containing shadow AI, and getting data privacy and security right are core to an effective AI/ML tools strategy. 
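One common control here is a lightweight data loss prevention (DLP) check that screens prompts before they ever reach an external LLM. The Python sketch below gives the flavor of such a pre-filter; the regex patterns and the block-versus-redact policy are illustrative assumptions, and a production DLP engine would go much further.

```python
# Minimal sketch of a DLP pre-filter for outbound LLM prompts.
# The patterns below are illustrative; real DLP policies are far broader.
import re

SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                # US Social Security number
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),  # generic secret-key shape
    "internal_doc": re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
}

def screen_prompt(prompt: str) -> tuple[bool, str]:
    """Return (allowed, sanitized_prompt); block or redact sensitive matches."""
    findings = [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]
    if "internal_doc" in findings:
        return False, ""  # hard block: marked-confidential text never leaves
    sanitized = prompt
    for name in findings:
        sanitized = SENSITIVE_PATTERNS[name].sub(f"[REDACTED-{name.upper()}]", sanitized)
    return True, sanitized

if __name__ == "__main__":
    ok, safe = screen_prompt("Summarize: customer SSN 123-45-6789, key sk-abcdef1234567890")
    print(ok, safe)  # True, with the SSN and key redacted
```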

Last year, VentureBeat spoke with Alex Philips, CIO at National Oilwell Varco (NOV), about his company’s approach to generative AI. Philips told VentureBeat he was tasked with educating his board on the advantages and risks of ChatGPT and generative AI in general, and he periodically updates the board on the current state of GenAI technologies. This ongoing education is helping to set expectations about the technology and how NOV can put guardrails in place to ensure Samsung-like leaks never happen. He alluded to how powerful ChatGPT is as a productivity tool and how critical it is to get security right while keeping shadow AI under control.

Balancing productivity and security is essential to meeting the challenges of the new, uncharted AI threatscape. Zscaler’s own CEO was targeted: in a vishing and smishing scheme, threat actors impersonated the voice of Zscaler CEO Jay Chaudhry in WhatsApp messages that attempted to deceive an employee into purchasing gift cards and divulging more information. Zscaler thwarted the attack using its own systems. VentureBeat has learned this is a familiar attack pattern aimed at CEOs and tech leaders across the cybersecurity industry.

Attackers are relying on AI to launch ransomware attacks at scale and faster than they have in the past. Zscaler notes that AI-driven ransomware attacks are part of nation-state attackersā€™ arsenals today, and the incidence of their use is growing. Attackers now use generative AI prompts to create tables of known vulnerabilities for all firewalls and VPNs in an organization they are targeting. Next, attackers use the LLM to generate or optimize code exploits for those vulnerabilities with customized payloads for the target environment.
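The defensive counterpart to that reconnaissance is knowing your own exposure first. As a hedged illustration (the appliance inventory, the vulnerability feed shape and the helper name are all hypothetical), the Python sketch below cross-references a firewall and VPN inventory against known-vulnerable versions:

```python
# Sketch of the defensive counterpart to AI-assisted recon: cross-reference
# your own firewall/VPN inventory against known-vulnerable versions so the
# attacker's target list is empty by the time they build it.
# Inventory entries, product names and the CVE feed shape are examples.

INVENTORY = [
    {"host": "fw-edge-01", "product": "AcmeFW", "version": "9.1.2"},
    {"host": "vpn-gw-02", "product": "AcmeVPN", "version": "4.0.5"},
]

# (product, vulnerable_version) -> CVE id; a real feed would use version ranges.
KNOWN_VULNERABLE = {
    ("AcmeFW", "9.1.2"): "CVE-2024-0001",
    ("AcmeVPN", "3.9.9"): "CVE-2023-0002",
}

def exposed_devices(inventory, vuln_feed):
    """Yield (host, cve) pairs for devices running a known-vulnerable version."""
    for device in inventory:
        cve = vuln_feed.get((device["product"], device["version"]))
        if cve:
            yield device["host"], cve

if __name__ == "__main__":
    for host, cve in exposed_devices(INVENTORY, KNOWN_VULNERABLE):
        print(f"PATCH NOW: {host} is exposed to {cve}")
```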

Zscaler notes that generative AI can also be used to identify weaknesses in enterprise supply chain partners while highlighting optimal routes to connect to the core enterprise network. Even if enterprises maintain a strong security posture, downstream vulnerabilities often pose the greatest risks. Attackers are continuously experimenting with generative AI, creating feedback loops that refine results into more sophisticated, targeted attacks that are even harder to detect.

An attacker aims to leverage generative AI across the ransomware attack chain, from automating reconnaissance and code exploitation for specific vulnerabilities to generating polymorphic malware and ransomware. By automating critical portions of the attack chain, threat actors can generate faster, more sophisticated and more targeted attacks against enterprises.

Attackers are using AI to streamline their attack strategies and gain larger payouts by inflicting more chaos on target organizations and their supply chains. Credit: Zscaler