September 16, 2020

Vulnerability Management Metrics

I’m being tasked with documenting our current process for vulnerability management, but I want to (and have received permission to) create a more fleshed-out policy that includes metrics. What are some metrics that you think are useful with regard to vulnerability management?

Comments

tweedge

The absolute biggest one you **must** have an **incredible** understanding of is time to resolution (once we knew about this issue, how long did it take us to fix it?), binned by CVSS severity (critical, high, medium, and low). Establish a timeline by which each class of vulnerability should be *completely* remediated in production – say 2 days for critical vulns, 1 week for high, etc.
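For concreteness, here’s a minimal sketch (Python, with made-up field names and SLA values – tune them to whatever policy you actually define) of computing time to resolution per finding and flagging anything that blows past its class deadline:

```python
from datetime import datetime, timedelta

# Hypothetical SLAs per CVSS severity class -- adjust to your own policy.
SLA = {
    "critical": timedelta(days=2),
    "high": timedelta(days=7),
    "medium": timedelta(days=30),
    "low": timedelta(days=90),
}

# Each finding records when we learned about it and when it was fully
# remediated in production (None if still open).
findings = [
    {"id": "VULN-101", "severity": "critical",
     "discovered": datetime(2020, 9, 1), "resolved": datetime(2020, 9, 2)},
    {"id": "VULN-102", "severity": "high",
     "discovered": datetime(2020, 9, 3), "resolved": None},
]

def time_to_resolution(finding, now=None):
    """Elapsed time from discovery to resolution (or to now if still open)."""
    end = finding["resolved"] or (now or datetime.utcnow())
    return end - finding["discovered"]

for f in findings:
    ttr = time_to_resolution(f)
    breached = ttr > SLA[f["severity"]]
    print(f"{f['id']}: {f['severity']:8s} ttr={ttr} "
          f"{'MISSED SLA -> postmortem' if breached else 'within SLA'}")
```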

Graph the everloving hell out of that.
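If it helps, a rough sketch of one such graph, assuming matplotlib and reusing the hypothetical `findings` / `time_to_resolution` pieces from the sketch above:

```python
import matplotlib.pyplot as plt

# Average days to resolution for resolved findings, per severity class.
severities = ["critical", "high", "medium", "low"]
mean_days = []
for s in severities:
    days = [time_to_resolution(f).days for f in findings
            if f["severity"] == s and f["resolved"] is not None]
    mean_days.append(sum(days) / len(days) if days else 0)

plt.bar(severities, mean_days)
plt.ylabel("Mean days to resolution")
plt.title("Time to resolution by CVSS severity")
plt.show()
```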

When you do have a vuln which misses the deadline for its class, do a postmortem. What contributed to this missing the deadline? Was this miscategorized and nobody took it seriously? What could we as a security team have done better? What could engineering teams have done better? Were there any blockers we didn’t think about until too late? Use these to improve your internal processes, and drive change around the organization.

With postmortems and graphs, you have enough evidence to show why broader engineering changes need to happen should you need to push for them. “One subject matter expert was out and we totally whiffed this deadline, now we have a vulnerability that has added risk for way too long! We need to make sure that people are cross-training in x, y, and z areas! Security leadership, can you drive these changes?” And wham – all your demands are met. Or, at least you have a much better chance of having those demands met ;)

Your target should be to resolve **all** new vulnerabilities according to the standard you define, and gradually pay down existing security debt on a separate timeline. Build a plan for that as well, and graph your successes. Reward teams for driving down your security debt and reinforce the race to zero outstanding vulns. You could track debt with the same metric (instead of “when did we find out about it,” try “when did we tell a team to start working on it”). That’s a fine hardline stance to take if you know it’s reducing risk, but it might be a mixed bag for your reputation and would definitely mess with your data a bit.
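One way to keep both timelines without mangling the main dataset is just to record both clocks and let the report choose which one it measures from (a sketch only; the field names are made up):

```python
from datetime import datetime

# Sketch: keep "when we learned about it" and "when a team was told to start
# on it" as separate timestamps, so new findings and legacy security debt can
# be reported on their own timelines without distorting the same metric.
debt_item = {
    "id": "LEGACY-007",
    "severity": "medium",
    "discovered": datetime(2019, 3, 1),  # original discovery (security debt)
    "assigned": datetime(2020, 9, 10),   # when remediation work was kicked off
    "resolved": None,
}

def age(item, clock="discovered", now=None):
    """Age of a finding, measured from whichever clock the report cares about."""
    end = item["resolved"] or (now or datetime.utcnow())
    return end - item[clock]

print("age since discovery: ", age(debt_item, "discovered"))
print("age since assignment:", age(debt_item, "assigned"))
```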

There are other metrics I’ve tracked before, but none that were even half as important to get right.

KStieers

So raw numbers are poor because you can’t control how many new vulns show up.

But once you find them, you need to look at them the same way you look at accounts receivable… how many are outstanding, and for how long, split up into buckets based on risk and criticality.

E.g. 15 critical vulns found, 5 high risk, 7 medium risk, 3 low risk… and then track how long it takes to address them, either actually fixed or otherwise addressed.
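If it’s useful, here’s a rough sketch of what that accounts-receivable-style aging report could look like in code (bucket edges and field names are arbitrary placeholders):

```python
from collections import Counter
from datetime import datetime

# Accounts-receivable-style aging buckets (edges in days are arbitrary).
BUCKETS = [(0, 30), (31, 60), (61, 90), (91, None)]

def bucket_label(days):
    """Map an age in days onto a human-readable aging bucket."""
    for low, high in BUCKETS:
        if high is None:
            return f"{low}+"
        if days <= high:
            return f"{low}-{high}"

def aging_report(open_findings, now=None):
    """Count open findings per (severity, age bucket) pair."""
    now = now or datetime.utcnow()
    report = Counter()
    for f in open_findings:
        age_days = (now - f["discovered"]).days
        report[(f["severity"], bucket_label(age_days))] += 1
    return report

open_findings = [
    {"id": "VULN-201", "severity": "high",   "discovered": datetime(2020, 8, 1)},
    {"id": "VULN-202", "severity": "medium", "discovered": datetime(2020, 6, 15)},
    {"id": "VULN-203", "severity": "low",    "discovered": datetime(2020, 9, 10)},
]

for (severity, bucket), count in sorted(aging_report(open_findings).items()):
    print(f"{severity:8s} {bucket:>6s} days outstanding: {count}")
```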
