The current field of information security is largely one of arcana, vagueness, arbitrary views, philosophy, mountaintop sages, a general lack of reliable data, and legions of vendors selling “best practices.”
It was my hope that I could help out a little by giving a talk on my take on how our industry can best navigate these turbulent and weird times and move toward relevance and transparency.
That’s enough of a preface. Here’s the talk I gave at the Seattle NAISG meeting this month.
I started this talk with a brief clip of the last two minutes of Bruce Potter’s opening to Shmoocon 2010.
Do you find this kind of talk discouraging? Do you take it personally? I do.
This is what I said when I was watching the streams.
Is this really the state of data in the information security industry?
So, Bruce says that we’re “Failing at our jobs every day.” Does he have a point? I think that he does.
Information security, and at times IT in general, has largely not been relevant to or contributed to IT goals. We’ve been thought of mostly as a cost center: a hole you have to dump money into to stay in compliance and avoid possibly going to jail.
Since we have been unable to communicate effectively with business and the larger industry, we have spawned the technology compliance industry as a byproduct of our failure to have a meaningful dialogue and back up our assertions with data.
Half of the problem here, I believe, is one of basic terms. Some people I have met think these things mean the same thing. Some other people are attached to some kind of security warrior monk philosophy where they are honor bound to defeat all insecurity and risk no matter the cost.
This is not how the world works. This is not how a successful risk management program works either.
It is about what is in the best interest and practical means of the organization.
There has been some unnecessary drama in PCI lately. The crux of the disagreement on conference panel discussions and elsewhere really boils down to this:
Compliance is not a governance model.
PCI exists because without it, some environments would not take the minimum steps to secure their data. PCI is a whip for those lagging behind the industry’s bell-curve average.
I have been informed recently, by a quality source, that a PCI assessment costs between $250k and $500k on average. These funds come straight out of IT and/or security budgets; they could be used to improve a security program instead of spinning the hamster wheel of pain.
So in effect, PCI punishes security programs that are already at the minimum standard, and it causes even greater problems for leaders in the information security space who are being proactive and doing the prudent thing: depending on the QSA firm or consultant retained, there may be disagreement about the compensating controls in place or the visionary risk management decisions made.
But compliance has improved things, right?
A paper by Forrester Research, commissioned by Microsoft and RSA, the security division of EMC, found that even though corporate intellectual property comprises 62 percent of a given company’s data assets, most of the focus of their security programs is on compliance with various regulations. The study found that enterprise security managers know what their companies’ true data assets are, but find that their security programs are driven mainly by compliance, rather than protection. — Threatpost
In short: we are protecting the wrong things and we know it. Why are we doing it this way? Because we as an industry have failed to have a relevant conversation about risk for too long, and now others are having it for us.
Additionally, PCI is now considered a stream of revenue not only to auditors, but to the card issuing industry in general.
Think about what this means.
QSAs pick the fruit from the money tree, but the roots are the card issuers. The tree is going to get bigger and its fruit heavier.
So let us review the hamster wheel and its many problems.
We should be proactive, not reactive. We should lead the discussion with the rest of IT in what the data means and what to do about it instead of who is to blame for a gap in an audit map.
I boldly contend that no hamster wheel effort is a governance program as it is detached from the other processes at work. If an auditor is finding systematic flaws in your governance program, something is very wrong.
Please post your disagreements in the comments.
Process should yield something. The result of an information security program should be an increasingly favorable risk position, not a new process to keep everyone busy as a cost center.
A risk management program should not enforce the status quo. It should produce data, and discussions should happen while that data is new, not at the next quarterly, yearly, or root-cause-failure meeting.
If you are not being proactive by designing tests for development, finding configuration and application errors, and assessing your threat and architecture landscape, you are not running a governance program. You are likely only compliant.
Focus on what is possible, not what is allowed. Do not rely too heavily on any one mechanism or technology to protect you. Test or evaluate each piece of your architecture on its own (defense in depth is a good plug here). Better yet, find a way for it to prove that it is working. Collect this data for your compliance people and for those whose work product generated what it is measuring.
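The “find a way for it to prove that it is working” idea can be as simple as a scheduled check that reads a control’s live configuration and verifies it against what you expect. A minimal sketch, assuming a hypothetical hardening policy against an sshd_config-style file (the specific settings checked are illustrative, not a recommendation):

```python
# A minimal sketch of a per-control check: rather than trusting that a
# hardening setting is in place, read the live configuration and verify it.
# The expected settings below are illustrative assumptions.

def check_sshd_config(config_text):
    """Return a list of (setting, expected, actual) tuples for any
    hardening settings that do not match the expected value."""
    expected = {
        "PermitRootLogin": "no",
        "PasswordAuthentication": "no",
    }
    actual = {}
    for line in config_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        parts = line.split(None, 1)
        if len(parts) == 2 and parts[0] in expected:
            actual[parts[0]] = parts[1]
    failures = []
    for key, want in expected.items():
        got = actual.get(key, "(unset)")
        if got != want:
            failures.append((key, want, got))
    return failures
```

Run something like this on a schedule and record the result each time; every run is one more data point showing the control is (or is not) doing its job.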
No one should have a problem with data in itself. If rewards are given to those trying to figure out who to blame for problems instead of correcting the problems themselves, something isn’t working right.
Other toothless and unsupported maturity models and governance frameworks are not much better off than just relying on arbitrary standards and compliance efforts. They need someone to have their back and have real consequences baked in.
Risk management is the yin to the yang of quick-deploy-and-fix-later-maybe philosophy.
This is the same fight that quality assurance had twenty years ago and won. We have the same battles to fight on the very same ground. All of the statistics about security flaws in software and systems are out there and undisputed; bugs are inexpensive to fix inline with development and orders of magnitude more expensive to fix later on. Choosing a fundamentally insecure architecture to base your business on and then using piecemeal efforts to mitigate risk after the launch is also a pretty bad, but common, idea.
The business decision is weighing the opportunity of getting to market first against the viability of the business given flaws after launch. To feed this decision, we need to give the business straightforward information, not snake oil, fear, doubt, or frantic hand waving.
Frameworks at least put leadership for security issues at the table instead of a project footnote, but is it enough?
We need more data, to be credible based on this data, and we need to be backed by executive leadership based on our credibility and data.
We need to stop being the philosopher sages of IT and start having actual justifications for the methods and solutions we, as an industry, advocate.
If we don’t do these things, how do we know if we’re doing a good job?
We need to collect and share data.
Part of the big compliance discussion has been the argument of “they were breached, so they must not have been compliant at time of incident.”
What do you say to that if you don’t have a lot of data backing up your risk management decisions?
Some schools of risk management dismiss all measurement as arbitrary and worthless. I don’t see how they can call themselves risk managers at all unless they base their decisions on at least an attempt at a proactive stance through measurement and estimation, rather than on the minimum standard of not being provably negligent.
Not surprisingly, there is a variety of opinion even on this.
Mike’s argument in favor of Donn and mountaintop sages.
Alex’s argument against mountaintop sages.
There is a lot to win by being in a leadership position in reducing the number of flaws and inefficiency in an environment.
Here are some more wins.
..and a few more.
We had better figure this out soon before our environments get too complex for us to manage or assess. If we’re not there already, we’ll be there soon.
I contend that part of risk management is the ability to simplify and optimize. Do things for a reason and have some data to justify it. Don’t just do things because some other people you talked to at a conference once said it was a good idea, or because it was in a magic quadrant in a leadership document you bought from someone else.
I thought that this was a good quote. Here’s a lot of Kabay’s work.
We, as an industry, have really talked about this for a very long time without much achievement. Most of the commercial product space hasn’t been interested and we haven’t made them be interested.
Most talks I hear stop there. So what do we actually do about it?
Not only should we transparently collect and base our decisions on data, but we should do it in a way that doesn’t make us look like a bunch of egotistical babies.
Work with people to improve things instead of taking the Conan the Barbarian approach to program management; use the carrot instead of the stick. Help fix problems instead of just complaining that everything is trash and broken. Make some friends instead of beating them over the head with the compliance hammer.
Make things better. We can do it.
Here are some new sources of traditional metrics:
You should be aware of them because people talk about them a lot. They might not be very useful for you, but at least you’ll have something to talk about.
Things that are generic to the entire IT world may not be interesting where you work. If metrics aren’t interesting to the people in your realm, they are likely useless.
I think I totally lifted these two slides from a Metricon talk as I completely don’t talk this way. You should read all of the Metricon talks. They are all interesting and we don’t hear enough of this kind of talk.
Instead we have people wondering where they can click for regulatory compliance or if they can buy Compliance as a Service. [CaaS]
This is straight out of the NIST document. It’s what they’re working on. It’s worth knowing where your tax dollars are being spent.
..and I’m back with basics on what makes a good metric.
Metrics should be inexpensive. This means automated generation and gathering. This removes collection as a major source of errors and puts the “it magically happens” sense of wonder into the system.
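As a toy illustration of cheap, automated metric generation: a few lines of code can turn raw log output into a trend without anyone doing manual counting. The log format matched here is an assumption; adapt the pattern to whatever your systems actually emit.

```python
import re
from collections import Counter

# Count failed-login events per day from log lines. The ISO-dated log
# format below is an illustrative assumption, not a real system's output.
FAILED_LOGIN = re.compile(r"^(?P<date>\d{4}-\d{2}-\d{2})\S* .*Failed password")

def failed_logins_per_day(lines):
    """Return a Counter mapping date string -> number of failed-login lines."""
    counts = Counter()
    for line in lines:
        m = FAILED_LOGIN.match(line)
        if m:
            counts[m.group("date")] += 1
    return counts
```

Schedule it, dump the counts somewhere, and the metric “magically happens” with no hands on the data between collection and report.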
Metrics should be interesting. If they’re not relevant, why did you bother collecting them?
Awesome. Please everyone, do more things like this.
So you want to have a relevant metric program and not only show what empirically needs to be improved, but to show people why they should keep you around and continue paying you money?
Glad to hear it! It’ll be useful, I promise.
Some things are hard to measure, but people have found ways of finding indicators of symptoms anyway. A great example is of public health metrics.
Another tricky example is financial risk management, because everyone finds money interesting. Those models are usually an entire talk in themselves, and it’s been done many, many times.
If you have tools, working reporting methods in your organization, and/or a framework to make use of, make things easier on yourself and use what you have available. Don’t make perfect the enemy of good.
Any tricks you can use to make information intuitive and digestible should be used.
Infographics are a good way to do it. Scorecards might work too.
Report only what is of interest and present solutions, not huge lists of problems. Keep the data that derived these interesting bits around in case someone wants more information. Use data to make your case for why you have come to these recommendations, conclusions, policy decisions, or staffing levels.
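The “report the interesting bits, keep the raw data” approach can be sketched very simply: roll findings up into a short summary of the most common issues while holding on to the untouched records for anyone who asks for detail. The finding fields used here are illustrative assumptions.

```python
from collections import Counter

def summarize_findings(findings, top_n=3):
    """findings: list of dicts with an 'issue' key (other keys preserved).

    Returns (summary, raw): summary is the top_n most common issues with
    counts, suitable for a scorecard; raw is the untouched underlying data,
    kept around in case someone wants more information."""
    counts = Counter(f["issue"] for f in findings)
    summary = counts.most_common(top_n)
    return summary, findings
```

The summary goes in the report; the raw list backs up the recommendation when someone pushes back on it.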
Data is the answer. It is the way.
So you want a metrics program but don’t know where to start? I have a couple of things for you to try.
First, think about what data sources you have around. Read this talk. Do you have application data or logfiles? What about a SIEM? Chances are you have loads and loads of data sources from which to glean metrics.
Ok. So how do you do it?
Look at some business intelligence software. There was one talked about at Metricon, but I suspect that I may like this one more. This may be just because they have a cool demo and can grab data from a variety of sources.
Don’t have a SIEM? Try playing around with the free license of Splunk.
Can’t figure any of this out? I used to work with a guy who started a company to help you out called Bitwork. They can give you metrics gleaned from your internal data and delivered in a SaaS model. Tell them what you need and let them figure it out for you.
Did you read this far? Cool!
I was a little unhappy with how it turned out as I thought that it was a bit vague and confusing, much like the current state of the industry, but I was told that many in attendance enjoyed it. Good enough.
Here are some resources and additional reading.
securitymetrics.org and Metricon.
A big thanks to the many people who were kind enough to discuss this topic with me for untold hours. I appreciate it!