State of Security Automation

SAST tools overlook more than 85% of CVEs in real-world scenarios. Outdated security automation can't keep pace with rapid code development. And there is a hidden cost of security automation - validating false positives.


If I told you that I created a scanner capable of detecting only 1 out of 10 security issues, you would call it useless and call me crazy. Yet that is the actual success rate of the top SAST (static application security testing) tools currently on the market.

Sadly, it is not a secret - it is a known limitation. At Vidoc we believe you deserve better tools.

Developers write code twice as fast with copilots, yet security tools stay the same.

TL;DR of what we are working on

SAST tools overlook more than 85% of CVEs in real-world scenarios. Outdated security automation can't keep pace with rapid code development. And there is a hidden cost of security automation - validating false positives. Vidoc is stepping up to address these challenges by building AI-based security tools designed to perform at a level comparable to human security experts.

Why are SASTs so limited?

SASTs are glorified grep: they look for insecure patterns in your codebase. They search for strings to find uses of unsafe functions in your code, like eval or os.execute.
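
To make the "glorified grep" point concrete, here is a minimal sketch in Python of what a pattern-based rule engine boils down to. It is purely illustrative - the rule names and regexes are made up for this post, and no real SAST product is implemented exactly this way:

    import re
    from pathlib import Path

    # Each "rule" is just a regex that flags a potentially unsafe call by name.
    RULES = {
        "use of eval": re.compile(r"\beval\s*\("),
        "use of os.system": re.compile(r"\bos\.system\s*\("),
        "use of pickle.loads": re.compile(r"\bpickle\.loads\s*\("),
    }

    def scan_file(path: Path) -> list[tuple[int, str]]:
        """Return (line_number, rule_name) for every line matching a rule."""
        findings = []
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), start=1):
            for name, pattern in RULES.items():
                if pattern.search(line):
                    findings.append((lineno, name))
        return findings

    if __name__ == "__main__":
        # Scan every Python file under the current directory.
        for repo_file in Path(".").rglob("*.py"):
            for lineno, rule in scan_file(repo_file):
                print(f"{repo_file}:{lineno}: {rule}")

Every match gets reported, whether or not the flagged call is actually reachable with attacker-controlled input.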

State-of-the-art SASTs (like CodeQL) will even try to analyze the data flow of the code to reduce the number of false positives.

There is a key component that all of the SASTs are missing - context.

SASTs can't understand the context

They can't understand the code, and they can't reason.

The systems and technologies made by humans are too complex to be captured by a simple set of rules. Each application and program has its own unique threat model, yet security automation does not take it into account.
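
Here is a toy Python example (hypothetical code, not taken from any real project) of why context matters: a string-matching rule sees the same os.system call in both functions and flags both, but only one of them is actually dangerous.

    import os

    ALLOWED_COMMANDS = {"status": "git status", "log": "git log --oneline -5"}

    def run_allowlisted(action: str) -> None:
        # The same os.system pattern a rule engine flags, but the command
        # can only be one of a few hard-coded constants - no injection here.
        os.system(ALLOWED_COMMANDS.get(action, "git status"))

    def run_user_command(author: str) -> None:
        # Identical pattern to the scanner, but attacker-controlled input
        # flows straight into the shell - a real command injection.
        os.system("git log --author=" + author)

Telling these two apart requires knowing where the data comes from and what the application's threat model is - exactly the context that rule-based tools lack.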

It is a race that SASTs can't win. There are hundreds of technologies and languages, and new ones are being created every week. New vulnerability types are discovered just as fast. For each new vulnerability in every single technology or language, there needs to be a human security engineer who will write a rule to detect it.

The problem can't be solved with regular code.

Hidden cost of security automation

The main selling point of current SASTs is that they will increase the efficiency of your team, but nothing could be further from the truth.

Let me walk you through the hidden cost of security automation - a cost that scales with the size of your company.

Assumptions

For the sake of simplicity, all calculations assume a medium-sized company with 100 employees, 70 of whom are developers.

Organizations of this size can have at least 150 code repositories.

In each repository, we can safely assume an average of 50 security issues.

The math

For each security issue detected by an automated tool, you need on average 15 minutes to validate it.

Your organization will spend 150 * 50 * 15 min = 112,500 min (1,875 hours!) on the validation of these security issues.

Do not forget that your developers write code twice as fast with copilots, so every week there will be hundreds of new security issues to verify.

The average hourly rate for a security engineer in California is $50.

Your company will spend $93,750 (1,875 hours * $50/hour) on the validation of security issues alone. This does not include fixing the security issues or validating the new issues introduced by developers.
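
For transparency, here is the same back-of-the-envelope calculation as a few lines of Python. The inputs are the assumptions stated above, not measured data - adjust them for your own organization:

    # Assumptions from the example above.
    repos = 150                # code repositories
    issues_per_repo = 50       # findings reported per repository
    minutes_per_issue = 15     # average time to validate one finding
    hourly_rate_usd = 50       # security engineer hourly rate

    total_minutes = repos * issues_per_repo * minutes_per_issue   # 112,500 minutes
    total_hours = total_minutes / 60                              # 1,875 hours
    validation_cost = total_hours * hourly_rate_usd               # $93,750

    print(f"{total_minutes:,} minutes = {total_hours:,.0f} hours")
    print(f"Validation cost: ${validation_cost:,.0f}")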

Snake oil - proving effectiveness is hard

How do you prove that one solution is better than another? You can't (yet).

Because of how hard the problem of detecting security issues in code is to solve, many companies offer the same capabilities with just a small twist. There is no way to objectively measure the effectiveness of security tools for many different languages and technologies. You have to trust their marketing.

Synthetic benchmarks like the OWASP Benchmark exist, but they do not reflect efficacy in real-world scenarios. Like most security benchmarks, the OWASP Benchmark is built from examples that rely on outdated technologies and are deliberately written to be vulnerable.

[...] (SAST) tools overlook more than 85% of CVEs (false negatives), although performing well against synthetic benchmarks.

[...] Meanwhile, over 70% of vulnerabilities still remain undetected when combining the results of SAST tools [...] we observed that these tools generally overstate their detection capabilities, even with 90.5% overstatement on our real dataset.

Source: https://sen-chen.github.io/img_cs/pdf/fse2023-sast.pdf

We need diverse datasets with real-world cases of vulnerable code - for all major languages.

Closing the gap between humans and security automation

We are building a new generation of security tools - tools not powered by simple rules but by specialized AI models.

Sneak peek of our platform:

https://youtu.be/mfcAJJIJW0o

We will introduce:

  1. A new dataset to benchmark security tools and understand their effectiveness in real-world scenarios
  2. VIDOC - a new generation of tools capable of Automated Code Review, aiming to match human performance in detecting unknown security issues in code

Sign up for the beta

We have a closed beta for new features on the VIDOC platform. It is not another scanning tool; it is an AI Security Engineer. Join the waiting list.

________________________________________________________________________


Check out our other social media platforms to stay connected:

Website | www.vidocsecurity.com
Linkedin | www.linkedin.com/company/vidoc-security-lab
X (formerly Twitter) | twitter.com/vidocsecurity
YouTube | www.youtube.com/@vidocsecuritylab
Facebook | www.facebook.com/vidocsec
Instagram | www.instagram.com/vidocsecurity