AI Weekly: Machine learning could lead cybersecurity into uncharted territory

Security firms are coming to rely more on AI, but we don’t know how things will escalate once the perpetrators of cyberattacks start using machine learning.

Once a quarter, VentureBeat publishes a special issue to take an in-depth look at trends of great importance. This week, we launched issue two, examining AI and security. Across a spectrum of stories, the VentureBeat editorial team took a close look at some of the most important ways AI and security are colliding today. It’s a shift with high costs for individuals, businesses, cities, and critical infrastructure targets — data breaches alone are expected to cost more than $5 trillion by 2024 — and high stakes.

A theme emerges throughout the stories: AI does not appear to be used much in cyberattacks today. Cybersecurity companies, however, increasingly rely on AI to identify threats and sift through data in order to defend targets.

Security threats are evolving to include adversarial attacks against AI systems; more expensive ransomware targeting cities, hospitals, and public-facing institutions; misinformation and spear phishing attacks that can be spread by bots on social media; and deepfakes and synthetic media that have the potential to become security vulnerabilities.

In the cover story, European correspondent Chris O’Brien dove into how the spread of AI in security can lead to less human agency in the decision-making process, with malware evolving to adapt and adjust to security firm defense tactics in real time. Should costs and consequences of security vulnerabilities increase, ceding autonomy to intelligent machines could begin to seem like the only right choice.

We also heard from security experts like McAfee CTO Steve Grobman, F-Secure’s Mikko Hypponen, and Malwarebytes Labs director Adam Kujawa, who talked about the difference between phishing and spear phishing, addressed an anticipated rise in personalized spear phishing attacks, and spoke generally to the fears — unfounded and not — around AI in cybersecurity.

VentureBeat staff writer Paul Sawers took a look at how AI could be used to reduce the massive talent shortage in the cybersecurity sector, while Jeremy Horwitz explored how AI-equipped cameras in cars and home security systems will impact the future of surveillance and privacy.

AI editor Seth Colaner examined how security and AI, though they can seem heartless and inhuman, still rely heavily on people, who remain a critical factor both as defenders and as targets. Human susceptibility is a big part of why organizations become soft targets, and education on how to properly guard against attacks can lead to better protection.

We don’t know yet the extent to which those carrying out attacks will come to rely on AI systems. And we don’t know yet if open source AI opened Pandora’s box, or to what extent AI might increase threat levels. One thing we do know is that cybercriminals don’t appear to need AI to be successful today.

I’ll leave it to you to read the special issue and draw your own conclusions, but one quote worth remembering comes from Shuman Ghosemajumder, formerly known as the “click fraud czar” at Google and now CTO at Shape Security. “[Good actors and bad actors] are both automating as much as they can, building up DevOps infrastructure and utilizing AI techniques to try to outsmart the other,” he said. “It’s an endless cat-and-mouse game, and it’s only going to incorporate more AI approaches on both sides over time.”

For AI coverage, send news tips to Khari Johnson and Kyle Wiggers and AI editor Seth Colaner — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI Channel.

Thanks for reading,

Khari Johnson

Senior AI Staff Writer
