Measuring information security is an exercise in total frustration. Well, maybe not total frustration, but it can add wrinkles to the face, thin the hair, and turn what is left a lighter shade of gray. Eventually, everyone taken with this passion will sport the Einstein look.
So what is the big deal anyway? How is measuring security programs any different from measuring other IT or production programs? The heart of the problem is in trying to measure what does not occur. Security initiatives strive to prevent loss. In effect, they try to make something not happen, or to lessen its outcome. And if something does not occur, how can you measure it?
The security drums. Every company should have a set:
I walked into the office to find our security operations analyst beating on a drum, working hard to keep a rhythm.
I asked him what he was doing, and he replied, "I am beating the new security drum to ward off the computer viruses. Management just bought it from the vendor, and they say it adds another level of protection."
"Is it working?" I asked.
"I'm sure it is; we have not had an infection all morning!"
Just then the security manager walked by, reported that two new viruses had been detected on the network, and offered this advice: "Beat faster!"
Many falsehoods exist. In my days I have seen many wildly inaccurate, bordering on purely fictional, value assessments for security programs. Every security vendor has something to show, but none can answer the simple question: how much loss will this prevent? Because the threat environment is so chaotic, is a reduction in losses due to security programs, or just a simple drop in attacks? Does management understand the challenges, or are they reinforcing illogical behaviors while still expecting miracles? And what should a security program achieve?
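One common, if imperfect, attempt to put a number on "loss prevented" is the classic annualized loss expectancy (ALE) calculation from traditional risk analysis. A minimal sketch, using entirely hypothetical figures (the incident cost and occurrence rates below are made up for illustration):

```python
# Annualized Loss Expectancy: a standard risk-analysis formula,
# sketched here with invented numbers purely for illustration.

def ale(single_loss_expectancy: float, annual_rate_of_occurrence: float) -> float:
    """ALE = SLE x ARO: expected yearly loss from one threat scenario."""
    return single_loss_expectancy * annual_rate_of_occurrence

# Hypothetical scenario: a malware outbreak costing $50,000 per incident,
# expected twice a year without a control, half a time per year with it.
loss_without_control = ale(50_000, 2.0)
loss_with_control = ale(50_000, 0.5)

print(f"Estimated annual loss avoided: ${loss_without_control - loss_with_control:,.0f}")
```

Of course, this is exactly where the frustration lives: the formula is only as good as the occurrence rates fed into it, and those rates are precisely the "things that did not happen" we cannot observe directly.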
I intend to delve into these and many more questions by theorizing, discussing, tempering, and ultimately shedding light on the frustrating topic of measuring information security. Anyone want to come along for the ride?