December 3rd, 2025 | OWASP Berlin Meetup
Last night I had the opportunity to speak at the OWASP Berlin meetup about a topic that’s been causing massive pain for security teams everywhere: vulnerability alert fatigue and how we can fix it with smarter prioritization.

The Problem We’re All Facing
Let’s be honest—vulnerability management is broken. The average enterprise gets hit with 10,000+ vulnerability alerts every month. SAST tools have false positive rates of 40-70%. SCA tools flag thousands of CVEs in transitive dependencies. Container scans return 100+ vulnerabilities per base image.
And what do teams actually fix? Less than 5%.
The root cause? We’re still prioritizing based purely on CVSS scores, which completely ignore whether a vulnerability is actually exploitable in your specific context.
A Better Way: Risk-Based Prioritization
I presented a framework that combines three critical data points (a small scoring sketch follows the list):
1. EPSS (Exploit Prediction Scoring System)
- ML-based prediction of exploitation probability in the next 30 days
- Tells you what attackers are actually targeting
2. CISA KEV Catalog
- Known Exploited Vulnerabilities
- Confirmed active exploitation in the wild
3. Reachability Analysis
- Is the vulnerable code actually reachable in YOUR application?
- Call graph analysis for SCA
- Dataflow analysis for SAST
The CVE-2023-44487 Story
The most powerful example I shared was CVE-2023-44487 (HTTP/2 Rapid Reset Attack) found in the nginx:stable-bookworm-perl image:
- CVSS Score: 5.3 (MEDIUM)
- EPSS Score: 94.42% (99.98th percentile)
- KEV Status: TRUE (actively exploited)
A traditional CVSS-based approach would mark this as P3 or ignore it entirely. But with EPSS + KEV, it's clearly a P0, despite the modest CVSS score. This vulnerability was used to take down major services, with attacks peaking at 201M requests per second.
Out of 145 total vulnerabilities in that image, only 5 were actually P0/P1 after proper risk-based analysis.
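Plugging the numbers from the slide into the sketch above shows the reclassification (reachability is assumed true here, since nginx itself terminates HTTP/2):

```python
rapid_reset = Finding(
    cve_id="CVE-2023-44487",
    cvss=5.3,        # on its own this lands in a mid-priority bucket
    epss=0.9442,     # 99.98th percentile
    in_kev=True,     # confirmed active exploitation
    reachable=True,  # nginx terminates HTTP/2 itself, so the code path is live
)
print(prioritize(rapid_reset))  # -> "P0", despite the modest CVSS score
```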
Technical Deep Dives
SAST: Context is Everything
I walked through how modern SAST tools use:
- Dataflow analysis: Tracking taint from source → sanitizer → sink (a toy sketch follows this subsection)
- Call graph analysis: Understanding security controls across function boundaries
- Tree-sitter AST parsing: Recognizing framework-specific patterns (Django decorators, Spring Security, etc.)
Result: False positive reduction from 70% → 20%
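As a toy illustration of the dataflow idea, the sketch below tracks taint through flat Python statements. The source, sanitizer, and sink names are invented for the example; real engines ship large framework-aware rule packs and track taint across function boundaries:

```python
import ast

# Illustrative rule set -- real SAST engines recognise framework-specific
# sources, sanitizers, and sinks (Django, Spring Security, etc.).
SOURCES = {"request_param"}      # where untrusted input enters
SANITIZERS = {"escape_sql"}      # calls that neutralise the taint
SINKS = {"execute_query"}        # dangerous operations

def find_tainted_sinks(code: str) -> list[int]:
    """Report line numbers where tainted data reaches a sink unsanitised.

    A toy, flow-sensitive sketch over flat module-level statements.
    """
    tainted: set[str] = set()
    findings: list[int] = []
    for stmt in ast.parse(code).body:
        # x = request_param(...)  -> x becomes tainted
        # x = escape_sql(...)     -> x is considered clean
        if isinstance(stmt, ast.Assign) and isinstance(stmt.value, ast.Call):
            fn = stmt.value.func
            name = fn.id if isinstance(fn, ast.Name) else None
            for target in stmt.targets:
                if not isinstance(target, ast.Name):
                    continue
                if name in SOURCES:
                    tainted.add(target.id)
                elif name in SANITIZERS:
                    tainted.discard(target.id)
        # sink(x) with x tainted  -> report it
        elif isinstance(stmt, ast.Expr) and isinstance(stmt.value, ast.Call):
            call = stmt.value
            if isinstance(call.func, ast.Name) and call.func.id in SINKS:
                if any(isinstance(a, ast.Name) and a.id in tainted for a in call.args):
                    findings.append(call.lineno)
    return findings

snippet = (
    'user = request_param("name")\n'
    "safe = escape_sql(user)\n"
    "execute_query(safe)\n"   # clean: sanitised before reaching the sink
    "execute_query(user)\n"   # finding: taint reaches the sink directly
)
print(find_tainted_sinks(snippet))  # -> [4]
```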
SCA: Reachability Changes Everything
Used Log4Shell (CVE-2021-44228) as the prime example:
- You have log4j-core 2.14.1 (CVSS 10.0, EPSS 97.4%, KEV=true)
- But your code only uses logger.info(), not the vulnerable JndiLookup class
- Call graph analysis proves it's not reachable
- Priority: P2 (plan upgrade) instead of P0 (emergency)
Also covered a Node.js axios example showing how roughly 80% of dependency vulnerabilities sit in code paths you never execute; a minimal call-graph sketch follows.
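The reachability check itself can be pictured as a breadth-first search from the application's entry points through the call graph, looking for any path to a vulnerable symbol. The graph below is deliberately simplified and its node names are illustrative, not the real log4j internals:

```python
from collections import deque

# Toy call graph: caller -> callees. In a real SCA pipeline this is extracted
# from the application's source or bytecode together with its dependencies.
CALL_GRAPH = {
    "app.handle_request": ["log4j.Logger.info"],
    "log4j.Logger.info": ["log4j.PatternLayout.format"],
    "log4j.Logger.error": ["log4j.Interpolator.lookup"],       # the app never calls this
    "log4j.Interpolator.lookup": ["log4j.JndiLookup.lookup"],  # the CVE-2021-44228 sink
}
VULNERABLE_SYMBOLS = {"log4j.JndiLookup.lookup"}

def is_reachable(entry_points: list[str]) -> bool:
    """Breadth-first search from the app's entry points to any vulnerable symbol."""
    seen, queue = set(entry_points), deque(entry_points)
    while queue:
        fn = queue.popleft()
        if fn in VULNERABLE_SYMBOLS:
            return True
        for callee in CALL_GRAPH.get(fn, []):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return False

# The app only calls logger.info(), so no path reaches JndiLookup in this toy graph.
print(is_reachable(["app.handle_request"]))  # -> False  => plan the upgrade as P2
```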
Container: Runtime Context Matters
Showed how layer analysis and runtime security context change priorities (a small sketch follows the list):
- 145 vulnerabilities in base image
- Only 5 require action after considering: exposed ports, running processes, security policies (runAsNonRoot, readOnlyRootFilesystem)
- Most “HIGH/CRITICAL” CVEs are in packages that aren’t even running
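A rough sketch of that runtime-context demotion logic, with an invented workload context and a deliberately simple demotion ladder (the exploit preconditions and thresholds are assumptions for illustration):

```python
# Toy runtime context for one workload; in practice this comes from the deployed
# pod spec plus process and port data from a runtime sensor.
runtime_ctx = {
    "running_packages": {"nginx", "openssl"},   # actually loaded at runtime
    "exposed_ports": {80, 443},
    "security_context": {"runAsNonRoot": True, "readOnlyRootFilesystem": True},
}

def adjust_priority(base: str, package: str, needs_root: bool, needs_network: bool) -> str:
    """Demote findings that the runtime context makes hard or impossible to exploit."""
    ladder = ["P0", "P1", "P2", "P3"]
    rank = ladder.index(base)
    if package not in runtime_ctx["running_packages"]:
        return "P3"  # the vulnerable binary is in the image but never executed
    if needs_root and runtime_ctx["security_context"].get("runAsNonRoot"):
        rank = min(rank + 1, 3)  # privilege-escalation prerequisite is blocked
    if needs_network and not runtime_ctx["exposed_ports"]:
        rank = min(rank + 1, 3)  # no listening port to attack remotely
    return ladder[rank]

# A "CRITICAL" CVE in a package that never runs drops straight to backlog priority.
print(adjust_priority("P0", package="perl", needs_root=False, needs_network=True))  # -> "P3"
```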
Real-World Impact
Teams implementing this approach see:
- 91% reduction in alert noise (57 alerts → 5 actionable findings)
- 80% time savings for security teams
- <48 hour MTTR for genuine P0 issues (was 45+ days)
- False positive rate: 20% (was 70%)
Our Journey at ScanDog
This isn’t just theoretical—at ScanDog, we’ve been building these concepts into our vulnerability management platform. We implemented EPSS/KEV enrichment and reachability analysis across our SAST, SCA, and container scanning pipelines.
The results have been transformative for our customers:
- Reduced alert fatigue by automatically filtering non-reachable vulnerabilities
- Integrated EPSS scoring directly into priority calculations
- Built automated CI/CD gates that only block on genuinely exploitable issues (a minimal gate sketch follows)
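As an illustration of the idea (not our production gate), such a gate can be a short script that fails the build only when a finding is reachable and either KEV-listed or above an EPSS threshold; the report format shown is an assumption:

```python
import json
import sys

EPSS_BLOCK_THRESHOLD = 0.10  # illustrative cut-off, tuned to the team's risk appetite

def should_block(findings: list[dict]) -> bool:
    """Fail the build only for findings that are reachable AND either KEV-listed
    or above the EPSS threshold; everything else is reported without blocking."""
    return any(
        f["reachable"] and (f["in_kev"] or f["epss"] >= EPSS_BLOCK_THRESHOLD)
        for f in findings
    )

if __name__ == "__main__":
    # Assumes an enriched scanner report produced earlier in the pipeline, e.g.:
    # [{"cve": "CVE-2023-44487", "epss": 0.9442, "in_kev": true, "reachable": true}]
    with open(sys.argv[1]) as fh:
        findings = json.load(fh)
    sys.exit(1 if should_block(findings) else 0)
```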
Our experience developing and deploying these techniques at scale informed much of this talk—what works in practice versus theory, where teams struggle with adoption, and how to make risk-based prioritization actionable rather than just conceptual.
The Community Threat Model Project
I also briefly introduced our Community Threat Model Repository project—a collaborative effort to build service-specific threat models for AWS, Kubernetes, databases, and more. Instead of generic frameworks, teams get ready-made security scenarios, attack vectors, and mitigations that match how modern infrastructure actually works.
Key Takeaways
- CVSS alone is insufficient - CVE-2023-44487 proves this dramatically
- Context is everything - Dataflow, call graphs, runtime environment all matter
- EPSS + KEV + Reachability - This combination cuts through the noise
- Automate prioritization - Don’t make humans review 10,000 findings manually
- Measure what matters - MTTR for P0/P1, not total vulnerability count
Thanks OWASP Berlin!
Huge thanks to the OWASP Berlin organizers and everyone who attended. The questions and discussions afterward were fantastic—especially around implementing reachability analysis at scale and handling legacy code with minimal sanitizer coverage.
If you’re interested in the slides or want to discuss vulnerability prioritization challenges, feel free to reach out!
Resources: