What Does ‘AI in Security’ Actually Mean?
For leaders in the security space, there’s a lot of pressure to incorporate generative AI into security systems. If we don’t use AI, the thinking goes, the attackers eventually will.
Large language models are being used to increase productivity in many domains, and, unfortunately, they can also be used by bad actors to generate increasingly sophisticated attacks. This point is a major concern among security leaders like Guy Podjarny, founder of the cybersecurity company Snyk.
As a member of Snyk’s board, I’ve had a front-row seat to the company’s conversations introducing AI into security. When it comes to adopting AI, Snyk has been well ahead of the curve. Only a few years ago, “we’d say the word ‘AI’ and people would roll their eyes,” said Guy. “So we started downplaying the role of AI in our security analysis when we talked about it publicly.” (It probably goes without saying that Snyk no longer feels the need to downplay its use of AI.)
Earlier this month, Guy gave an expansive interview with Boldstart founder Ed Sim at a dinner hosted by IVP and Boldstart Ventures. The conversation was so insightful that I wanted to share it with my readers. Both Guy and Ed were kind enough to agree.
Guy kicked off the conversation with how Snyk thinks about incorporating generative AI into its system.
How Snyk uses AI
Snyk first began thinking about incorporating AI in security more than three years ago, when it acquired a company called DeepCode in 2020. DeepCode, founded by machine learning researchers at the Swiss university ETH Zurich, uses AI to learn from open-source software in order to assist developers in writing better code.
A hybrid approach
Acquiring DeepCode led Snyk to embrace what Guy calls a “hybrid approach” to security: an AI-assisted system that’s fortified by additional security measures and human insight. DeepCode's AI engine uses its symbolic AI to find vulnerabilities, surgically uses LLMs, with exact guidance, to create fixed code, and then uses its symbolic AI engine again to verify the code is correct and secure.
This second part–checking the code produced by LLMs–is critical to Snyk’s security measures.
“On the one hand LLMs are very powerful and allow you to do a lot of things,” Guy said. “But on the other hand, they’re unpredictable and shallow. Right now, It’s hard for me to imagine security programs heavily relying on LLMs in any aspect without incorporating an additional layer of safety.”
LLMs work best when it comes to solving problems where it’s easy to verify results or, in the case of writing security code, ensuring that the code produced will protect a given system’s vulnerabilities. The problem is that “security, almost by definition, is difficult to test, especially when it comes to testing for false negatives,” said Guy. “Often, you don’t know what problems you’re not being told about. You never know about the alerts you’re not getting.”
LLMs are likely to detect fewer security flaws “because it doesn’t know which issues it missed,” Guy continued. “Then you won’t know which issues it was supposed to find, and sometimes these are glaring issues.”
What does the phrase ‘AI in Security’ actually mean?
Guy has found (and I can attest to this personally) that the phrase “AI in security” often means very different things, depending on who’s talking. “‘AI in security’ today is one of those topics where the meaning varies drastically from person to person,” he said.
Guy categorizes “AI in Security” into three distinct buckets:
Securing AI-assisted development: This has to do with securing code that is automatically generated using AI tools like GitHub Copilot. There’s still a lingering (and warranted) distrust when it comes to AI-generated code–it’s something that security systems should regularly review to ensure that it’s not introducing vulnerabilities. “These programs often produce vulnerable code, which is hard for devs to spot, resulting in over-trust,” said Guy. “We see a lot of opportunity in this area.”
AI-assisted security: This bucket involves powering security systems with AI and thinking strategically about where to best incorporate AI in order to prevent vulnerabilities. “If you don’t use AI, the attackers and your competitors will,” said Guy. “So, fundamentally, you’ve got to use it yourself.” It’s just this sort of thinking that led Snyk to acquire DeepCode in the first place.
Securing AI applications: This is an ongoing task which ensures that new AI interfaces (AI-assisted chat, for instance) are secure. “There’s a whole set of security mistakes associated with these, from new attacks like prompt injection to variants of classic AppSec risks, like vulnerable libraries or insecure code,” said Guy.
Securing AI applications is a primary concern among CISOs, especially as the capabilities of large language models grow increasingly sophisticated, a point brought up by Ian Swanson, founder of the security company Protect AI. “In the future, the biggest risk will involve machines telling other machines to do things on the behalf of attackers, enabling them to execute increasingly sophisticated attacks like prompt injection or money transfers between bank accounts,” Ian said. “In terms of outcome, there's going to be a whole new brand of attack vector.”
Advice for founders building in the security space
Guy is an active angel investor who often shares the same piece of advice to founders:
“One of my favorite things to tell founders that I work with is ‘Nobody cares about your product. They care about the problem you’re solving for them.’ ”
Building great products in the security space requires a certain number of “leaps of faith,” he said. “Too many leaps of faith–where the solution doesn’t seem possible–are clearly bad. But too few leaps is also not good. If it’s too obvious, it means that there’s a dozen others doing exactly what you’re doing. For a solution to be original and effective, it needs to be a bold move, or a slightly discontinuous line of thought.”
It’s likely that AI will create an entirely new set of challenges and opportunities for security professionals, upping the stakes in the cat-and-mouse game of security. On one hand, it can be leveraged to build better detection tools to prevent security breaches. On the other hand, it can be used by bad actors to infiltrate a company’s most sensitive data.
In the years to come, we will need even bolder solutions from ambitious founders creating new security tools. I look forward to the wave of innovation unfolding in the space.
Thanks for reading,
Tamar