The Art of Keeping Things Done

The current field of information security is largely one of arcana, vagueness, arbitrary views, philosophy, mountaintop sages, a general lack of reliable data, and legions of vendors selling “best practices.”

It was my hope that I could help out a little by giving a talk on my take on how our industry can best navigate these turbulent and weird times and move toward relevance and transparency.

That’s enough of a preface. Here’s the talk I gave at the Seattle NAISG meeting this month.

The Art of Keeping Things Done.001.jpg

Opening slide

The Art of Keeping Things Done.002.jpg

I started this talk with a brief clip of the last two minutes of Bruce Potter’s opening to Shmoocon 2010.

The whole video is available here. Other Shmoocon media is available here. My poor-quality two-minute clip can be found here.

The Art of Keeping Things Done.002.jpg

Do you find this kind of talk discouraging? Do you take it personally? I do.

This is what I said when I was watching the streams.

The Art of Keeping Things Done.003.jpg

Is this really the state of data in the information security industry?

The Art of Keeping Things Done.004.jpg

So, Bruce says that we’re “Failing at our jobs every day.” Does he have a point? I think that he does.

Information security, and at times IT in general, has largely not been relevant to or contributed toward the organization’s goals. We have mostly been thought of as a cost center, a hole you have to dump money into to avoid falling out of compliance and possibly going to jail.

Because we have been unable to communicate effectively with the business and the larger industry, and to back up our assertions with data, we have spawned the technology compliance industry as a byproduct of those failures.

The Art of Keeping Things Done.005.jpg

Half of the problem here, I believe, is one of basic terms. Some people I have met think these things mean the same thing. Some other people are attached to some kind of security warrior monk philosophy where they are honor bound to defeat all insecurity and risk no matter the cost.

This is not how the world works. This is not how a successful risk management program works either.

It is about what is in the organization’s best interest and within its practical means.

The Art of Keeping Things Done.006.jpg

There has been some unnecessary drama in PCI lately. The crux of the disagreement on conference panel discussions and elsewhere really boils down to this:

Compliance is not a governance model.

PCI exists because without it, some environments would not take even the minimum steps to secure their data. PCI is a whip for those lagging behind the industry average on the bell curve.

I have been informed recently, from a quality source, that a PCI assessment costs between $250k and $500k on average. These funds come straight out of IT and/or security budgets; they could be used to improve a security program instead of spinning the hamster wheel of pain.

So, in effect, PCI punishes security programs that are already at the minimum standard, and it causes even greater problems for leaders in the information security space who are being proactive and doing the prudent thing: depending on the QSA firm or consultant retained, there may be disagreement over the compensating controls in place or the visionary risk management decisions made.

But compliance has improved things, right?

Sadly, no.

A paper by Forrester Research, commissioned by Microsoft and RSA, the security division of EMC, found that even though corporate intellectual property comprises 62 percent of a given company’s data assets, most of the focus of their security programs is on compliance with various regulations. The study found that enterprise security managers know what their companies’ true data assets are, but find that their security programs are driven mainly by compliance, rather than protection. — Threatpost

In short: we are protecting the wrong things and we know it. Why are we doing it this way? Because we as an industry have failed to have a relevant conversation on risk for too long, and now others are having it for us.

Additionally, PCI is now considered a revenue stream not only for auditors, but for the card-issuing industry in general.

Think about what this means.

QSAs pick the fruit from the money tree, but the roots are the card issuers. The tree is going to get bigger and its fruit heavier.

The Art of Keeping Things Done.008.jpg

So let us review the hamster wheel and its many problems.

We should be proactive, not reactive. We should lead the discussion with the rest of IT in what the data means and what to do about it instead of who is to blame for a gap in an audit map.

I boldly contend that no hamster wheel effort is a governance program, as it is detached from the other processes at work. If an auditor is finding systematic flaws in your governance program, something is very wrong.

Please post your disagreements in the comments.

The Art of Keeping Things Done.009.jpg

Process should yield something. The result of an information security program should be an increasingly favorable risk position, not a new process to keep everyone busy as a cost center.

A risk management program should not enforce the status quo. It should produce data, and discussions should be based on that data while it is fresh, not at the next quarterly, yearly, or root-cause-failure meeting.

The Art of Keeping Things Done.010.jpg

If you are not being proactive by designing tests for development, finding configuration and application errors, and assessing your threat and architecture landscape, you are not running a governance program. You are likely only compliant.

Focus on what is possible, not just on what is allowed. Do not rely too heavily on any one mechanism or technology to protect you. Test or evaluate each piece of your architecture (defense in depth is a good plug here) on its own. Better yet, find a way for it to prove that it is working. Collect this data for your compliance people and for those whose work product generated what is being measured.
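
To make that concrete, here is a minimal sketch, not from the talk, of a control self-check that gathers its own evidence: it verifies that things which should be closed actually are, and appends a timestamped pass/fail record you can hand to your compliance people. The hosts, ports, and file name are hypothetical.

```python
#!/usr/bin/env python3
"""Tiny control self-check: prove a control is working and keep the evidence.

Hypothetical example: verify that a host which should be firewalled off
actually refuses connections on a given port, and append the result to a
CSV file that compliance (or anyone else) can read later.
"""
import csv
import socket
from datetime import datetime, timezone

# Hypothetical controls to verify: (description, host, port, expected_open)
CHECKS = [
    ("db server not reachable from this segment", "10.0.5.20", 3306, False),
    ("web tier listening as designed", "10.0.1.10", 443, True),
]

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def main() -> None:
    now = datetime.now(timezone.utc).isoformat()
    with open("control_checks.csv", "a", newline="") as fh:
        writer = csv.writer(fh)
        for description, host, port, expected_open in CHECKS:
            observed = port_open(host, port)
            result = "pass" if observed == expected_open else "FAIL"
            writer.writerow([now, description, host, port, result])
            print(f"{result}: {description} ({host}:{port})")

if __name__ == "__main__":
    main()
```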

No one should have a problem with data in itself. If rewards are given to those trying to figure out who to blame for problems instead of correcting the problems themselves, something isn’t working right.

The Art of Keeping Things Done.007.jpg

Other toothless and unsupported maturity models and governance frameworks are not much better off than just relying on arbitrary standards and compliance efforts.  They need someone to have their back and have real consequences baked in.

Risk management is the yin to the yang of quick-deploy-and-fix-later-maybe philosophy.

This is the same fight that quality assurance had twenty years ago and won. We have the same battles to fight on the very same ground. All of the statistics about security flaws in software and systems are out there and undisputed; bugs are inexpensive to fix inline with development and orders of magnitude more expensive to fix later on.  Choosing a fundamentally insecure architecture to base your business on and then using piecemeal efforts to mitigate risk after launch is also a pretty bad, but common, idea.

The business decision is a weighing of the opportunity in getting to market first against the viability of the business given flaws after launch. To feed this decision, we need to give the business straightforward information, not snake oil, fear, doubt, or frantic hand-waving.
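
As one illustration of the kind of straightforward information that feeds such a decision, here is a back-of-the-envelope annualized loss expectancy calculation. This is not a method from the talk, and every number in it is invented.

```python
# Illustrative only: every number here is invented, and annualized loss
# expectancy (ALE) is just one rough, much-debated way to put the trade-off
# in front of the business.
single_loss_expectancy = 400_000      # estimated cost of one breach, in dollars
annual_rate_of_occurrence = 0.25      # estimated breaches per year
annual_loss_expectancy = single_loss_expectancy * annual_rate_of_occurrence  # 100,000

value_of_launching_early = 150_000    # estimated value of shipping a quarter early

print(f"Estimated annualized loss: ${annual_loss_expectancy:,.0f}")
print(f"Value of launching early:  ${value_of_launching_early:,.0f}")
```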

The Art of Keeping Things Done.011.jpg

Frameworks at least put leadership for security issues at the table instead of in a project footnote, but is that enough?

We need more data, we need to be credible based on that data, and we need to be backed by executive leadership based on that credibility and data.

We need to stop being the philosopher sages of IT and start having actual justifications for the methods and solutions we, as an industry, are advocating.

The Art of Keeping Things Done.012.jpg

If we don’t do these things, how do we know if we’re doing a good job?

We need to collect and share data.

Part of the big compliance discussion has been the argument of “they were breached, so they must not have been compliant at the time of the incident.”

What do you say to that if you don’t have a lot of data backing up your risk management decisions?

Some schools of risk management dismiss all measurements as arbitrary and worthless. I don’t see how they can call themselves risk managers at all unless they base their decisions on at least an attempt at a proactive stance through measurement and estimation, rather than on the baseline minimum standard of not being provably negligent.

Not surprisingly, there is a variety of opinion even on this.

Mike’s argument in favor of Donn and mountaintop sages.

Alex’s argument against mountaintop sages.

Also Alex’s talk about why we’re hosed having to pick between the two (and more).

The Art of Keeping Things Done.013.jpg

There is a lot to win by being in a leadership position in reducing the number of flaws and inefficiency in an environment.

The Art of Keeping Things Done.014.jpg

Here are some more wins.

The Art of Keeping Things Done.015.jpg

..and a few more.

Much of this is based on an ITIL model and the IT Process Institute’s findings which they would like to sell you.

The Art of Keeping Things Done.016.jpg

We had better figure this out soon before our environments get too complex for us to manage or assess.  If we’re not there already, we’ll be there soon.

I contend that part of risk management is the ability to simplify and optimize. Do things for a reason and have some data to justify them.  Don’t just do things because some people you talked to at a conference once said it was a good idea, or because it was in a magic quadrant in a leadership document you bought from someone else.

The Art of Keeping Things Done.017.jpg

I thought that this was a good quote.  Here’s a lot of Kabay’s work.

We, as an industry, have talked about this for a very long time without much achievement.  Most of the commercial product space hasn’t been interested, and we haven’t made them interested.

The Art of Keeping Things Done.018.jpg

Most talks I hear stop there. So what do we actually do about it?

The Art of Keeping Things Done.019.jpg

Not only should we transparently collect and base our decisions on data, but we should do it in a way that doesn’t make us look like a bunch of egotistical babies.

Work with people to improve things instead of taking the Conan the Barbarian approach to program management; use the carrot instead of the stick.  Help fix problems instead of just complaining about how everything is trash and broken.  Make some friends instead of beating them over the head with the compliance hammer.

Make things better.  We can do it.

The Art of Keeping Things Done.020.jpg

Here are some new sources of traditional metrics:

You should be aware of them because people talk about them a lot. They might not be very useful for you, but at least you’ll have something to talk about.

The Art of Keeping Things Done.021.jpg

Metrics that are generic to the entire IT world may not be interesting to the place where you are working. If they are not interesting to people in your realm, they are likely useless.

The Art of Keeping Things Done.021.jpg

I think I totally lifted these two slides from a Metricon talk as I completely don’t talk this way. You should read all of the Metricon talks. They are all interesting and we don’t hear enough of this kind of talk.

Instead we have people wondering where they can click for regulatory compliance or if they can buy Compliance as a Service. [CaaS]

The Art of Keeping Things Done.022.jpg

This is straight out of the NIST document. It’s what they’re working on.  It’s worth knowing where your tax dollars are being spent.

The Art of Keeping Things Done.023.jpg

..and I’m back with basics on what makes a good metric.

Metrics should be inexpensive. This means automated generation and gathering. This removes collection as a major source of errors and puts the “it magically happens” sense of wonder into the system.

Metrics should be interesting. If they’re not relevant, why did you bother collecting them?
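
As a sketch of what inexpensive, automated gathering can look like (my example, not the slide’s; the log path and pattern are assumptions you would adjust to your own environment), here are a few lines of Python that turn an auth log into a failed-logins-per-day metric.

```python
#!/usr/bin/env python3
"""Cheap, automated metric: failed SSH logins per day from an auth log.

Assumes a syslog-style log at a hypothetical path; adjust the pattern and
path for whatever data source you actually have.
"""
import re
from collections import Counter

LOG_PATH = "/var/log/auth.log"          # hypothetical location
PATTERN = re.compile(r"^(\w{3}\s+\d+)\s.*Failed password")

def failed_logins_per_day(path: str) -> Counter:
    counts: Counter = Counter()
    with open(path, errors="replace") as fh:
        for line in fh:
            match = PATTERN.match(line)
            if match:
                counts[match.group(1)] += 1
    return counts

if __name__ == "__main__":
    for day, count in sorted(failed_logins_per_day(LOG_PATH).items()):
        print(f"{day}\t{count}")
```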

The Art of Keeping Things Done.024.jpg

Maverick metrics!

Verizon Business is pretty cool for releasing not only the data, but also a framework so that you too can release similar data.

Awesome. Please everyone, do more things like this.
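
For a rough idea of what releasing similar data involves, here is a sketch of recording an incident in a VERIS-style structure so it can be aggregated and shared. The real framework and schema come from Verizon Business; the field names below are illustrative only and are not the official schema.

```python
# A sketch of a VERIS-style incident record (field names illustrative only,
# not the published schema) serialized so it can be aggregated and shared.
import json

incident = {
    "timeline": {"discovered": "2010-02-15"},
    "actor": {"external": {"variety": ["organized crime"]}},
    "action": {"hacking": {"variety": ["SQL injection"]}},
    "asset": {"variety": ["web application server"]},
    "attribute": {"confidentiality": {"data": ["payment cards"]}},
}

print(json.dumps(incident, indent=2))
```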

The Art of Keeping Things Done.025.jpg

So you want to have a relevant metrics program that not only shows what empirically needs to be improved, but also shows people why they should keep you around and continue paying you money?

Glad to hear it!  It’ll be useful, I promise.

The Art of Keeping Things Done.026.jpg

Some things are hard to measure, but people have found ways of finding indicators of symptoms anyway.  A great example is public health metrics.

Another tricky example is financial risk management, because everyone finds money interesting.  Those models are usually an entire talk in themselves, and it has been done many, many times.

If you have tools, working reporting methods in your organization, and/or a framework to make use of, make things easier on yourself and use what you have available.  Don’t make perfect the enemy of good.

The Art of Keeping Things Done.027.jpg

Any tricks you can use to make information intuitive and digestible should be used.

Infographics are a good way to do it. Scorecards might work too.

Report only what is of interest and present solutions, not huge lists of problems. Keep the data behind these interesting bits around in case someone wants more detail.  Use data to make your case for why you have come to these recommendations, conclusions, policy decisions, or staffing levels.

Data is the answer.  It is the way.
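
If scorecards are the route you take, a minimal sketch might look like the following; the metrics and thresholds are invented and would come from whatever you have agreed on with the business.

```python
# Minimal scorecard sketch: roll raw metric values up into red/yellow/green
# against agreed thresholds (all values here are invented).
METRICS = {
    "unpatched critical vulns":     {"value": 3,   "yellow": 5,   "red": 15},
    "failed logins per day":        {"value": 420, "yellow": 300, "red": 1000},
    "days since last restore test": {"value": 12,  "yellow": 30,  "red": 90},
}

def rating(value: float, yellow: float, red: float) -> str:
    if value >= red:
        return "RED"
    if value >= yellow:
        return "YELLOW"
    return "GREEN"

for name, m in METRICS.items():
    print(f"{rating(m['value'], m['yellow'], m['red']):6}  {name}: {m['value']}")
```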

The Art of Keeping Things Done.028.jpg

So you want a metrics program but don’t know where to start? I have a couple of things for you to try.

First, think about what data sources you have around. Read this talk. Do you have application data or logfiles? What about a SIEM? Chances are you have loads and loads of data sources from which to glean metrics.

Ok. So how do you do it?

Look at some business intelligence software. There was one talked about at Metricon, but I suspect that I may like this one more. This may be just because they have a cool demo and can grab data from a variety of sources.

Don’t have a SIEM? Try playing around with the free license of Splunk.

Can’t figure any of this out? I used to work with a guy who started a company called Bitwork to help you out. They can give you metrics gleaned from your internal data, delivered in a SaaS model. Tell them what you need and let them figure it out for you.

The Art of Keeping Things Done.029.jpg

Did you read this far? Cool!

I was a little unhappy with how it turned out as I thought that it was a bit vague and confusing, much like the current state of the industry, but I was told that many in attendance enjoyed it. Good enough.

Here are some resources and additional reading.

CIO Mag: The Metrics Trap

Security Metrics: Replacing Fear, Uncertainty, and Doubt

Information Security Management Metrics: A Definitive Guide to Effective Security Monitoring and Measurement

securitymetrics.org and Metricon.

Truth to Power

A big thanks to the many people who were kind enough to discuss this topic with me for untold hours.  I appreciate it!

securitybullshit-cartoon022.png

11 responses to “The Art of Keeping Things Done”


  1. While I’m happy to steal credit for it, that’s Alex Hutton’s argument against mountaintop sages. Mine is the New School (the book, not the blog).

    (Still reading the post as a whole.)

    • I cleaned up a couple points and credited Alex. I kept thinking that the blog and the book were of the same origin. I think it has finally sunk in now.

      Thanks for reading, and I’ll look forward to any comments you care to share.

  2. Overall, a good presentation to get people thinking in the right direction about implementing actual information security practices, instead of security theater.

    However, I think the points about PCI are a bit off. I’ve seen the reports about PCI compliance costing $250-$500K. The basis for the information is a bit sketchy (at best) and, shall we say, misleading. An organization that has gone far down the path of implementing an actual information security program will have minimal impact to actually complying with PCI. In those instances, there is little to no draw on the infosec budget which would divert money towards useless or ineffective tools/technologies/processes. If you look at it, PCI provides a set of controls which are, at bottom, reflective of industry best practices for information security. There are more than a few things about PCI which I would consider onerous to an organization (100% compliance only, scoping language, just to name a couple).

    When I’ve approached clients about PCI, my main point for them to understand is that there is a world of difference between “security” and “compliance.” PCI is “compliance,” only.

    • Here is the CSO article about the average cost of PCI.

      I do like much that is in the PCI DSS requirements, of course. As you say, PCI DSS is largely recommended industry practices.

      Some of them are great:
      – Using the OWASP development guide
      – Having a clear delineation between development and production
      – Least privileged concepts (7.1.1, 7.1.2, 7.2.2 for example)
      – Having a retention policy that is actually employed and enforced (9.10)

      Some that I question a mandate to implement in all environments include:
      – Having Antivirus on all production hardware, specifically servers (5.1, 5.1.1)
      – Acceptable use of WEP for the last several years (4.1.1)
      – Hamster wheel activities (1.1.6, 12.1.2, 12.1.3 for example)
      – Many aspects of 8.5 are hit or miss to me.
      – 9.5 is debatable because of things like this and this.
      – I have a lot of problems with the thinking behind most of 10 and 11.5. I have seen first hand millions spent on extremely bad and ineffective practices in attempts to reach goals similar to these.
      – Has awareness training (12.6) been proven to be effective?
      – Why is 12.9 at milestone 6? Is it because it is hard and many don’t do it?
      – Since A.1.4 may not be feasible in cloud environments, does that kill the entire cloud industry for cardholder data?

      These are not new criticisms by any means, but I figured that I should make my own list in the comments here.

      AV has proven to be a poor control. Installing a trusted computing base would be, in my opinion, better. Insert whitelisting vs blacklisting argument or host vs network layer mitigation here.

      My opinion is that hamster wheel activities would be better served to be recurring and audited individually. Having a huge batch of meetings to discuss all of the network controls on a periodic basis never seems to work out well. This encourages a large top-heavy management process which, in my experience, draws a lot of heat in organizations. Periodic short interactions and focused discussions work way better.

      My point here is that if we’re spending all of this time talking about how many angels can dance on the head of a compliance mandate, PCI or otherwise, it is wasting time that could be spent in a more useful way.

      • With regard to AV, I only tentatively agree. AV tends to be fairly superfluous, and forcing it onto critical platforms (ATMs, application servers, primary data repositories) can be more than a bit absurd. However, there are creative ways to solve the problem–such as what you’ve suggested with a trusted computing base. While many see PCI as being a dead-dumb set of requirements, they overlook the overall intent and some of the ways the requirements can be addressed within the context of a serious security program.

        I could (and have) applied the same approach to validating WEP implementations as compliant. As I tell clients, “You don’t have to do stupid things, but you do have to address the requirements in some fashion.”

        I disagree on the hamster-wheel things. I’ve come across multiple instances where things like review of firewall rules and review/republication of infosec policies serve a more useful purpose than ticking a box come audit time. If nothing else, technology and security are constantly evolving, and internal processes must be in place to ensure that the organization is able to keep up.

        Your points are good for addressing organizations that take security seriously and actually try to accomplish intelligent, effective security. For those kinds of organizations, I think PCI is a very poorly written standard and the compliance approach shoved down the throats of processors, merchants, and service providers forces people into security theater and away from more intelligent choices. However, keep in mind that the PCI DSS is written for the lowest common denominator. Having seen how some organizations have tried to split semantic hairs and avoid doing *anything* for compliance–and seeing the horrific state of their security program/infrastructure–PCI is an effective tool to make intractable organizations move the security bar a little bit higher.

      • Most of the issue that ninja types have with PCI is that it becomes a checklist instead of reinforcing sound practices.

        You are, of course, right about regular-interval review being useful some of the time. To me, it tends to be a huge mess if left for a quarterly or annual review, so I’d advise some sort of automation.

        If you leave things to become hamster wheels, most times (in my experience) they become a process that does not serve a goal, but a process done for no reason; “because it’s the rule.” I really wish it wasn’t so, but as long as the concept of business sufficiency and the bare minimum quality that won’t kill the business is in favor, I don’t see this changing.

        Now if there was an advanced measurement for PCI, or that you could submit aggregated metrics instead of a parade of random QSAs, or some kind of escape for people/orgs who actually do their jobs well, I think most of the complaints would go away.

        Something like an à la carte or combo meal selection for different areas would be pretty excellent. Perhaps something like this:

        App/System Integrity:

        – Signed app code only allowed to execute
        – Trusted computing base (or verified equivalent)
        – PCI Certified App Whatever (I thought this was supposed to be here already)
        – Mature SDL development
        – SELinux / BSD securelevels / Solaris Trusted Extensions
        – WAF whitelisted implementation (don’t yell at me, Andre)

        Pick at least three and provide supporting evidence.

        That would be better than “Do blah or argue with random QSA about mitigation while you pay by the hour.”

        Hopefully the compliance industry will not always be in the business of “You didn’t do your job, so we must mandate that you accomplish the bare minimum standards.” I’m pretty sure that almost no one likes doing it.

      • For those organizations that are doing things right, I’ve always wanted to be able to have some kind of weighted metrics, like a CMM scoring which would allow the QSA to say, “these guys are doing items A, B, and C REALLY well, so the fact that they don’t do item D to perfection isn’t really an issue.” However, there would have to be concrete measurement metrics or it would be equally abused as any other relaxed validation approach. That being said, for those organizations that are doing a lot of things really well, it would give the QSA some flexibility about how to approach compliance issues.

        On the other hand, some of the crappy organizations I’ve visited who approach security as some sort of haphazard, scattershot, band of monkeys activity would take such an approach and abuse the living daylights out of it.

        In the end, as a security professional, I take a step back and think that so long as the standard is moving security forward, it’s a good thing. In my experience, for organizations that are doing things well, compliance issues tend to be more about QSAs who can’t think creatively or intelligently (or see the bigger picture), than it is about not meeting the letter of the standard.

      • I’d also add that being able to see the bigger picture and apply rational analysis is what separates true information security professionals from self-proclaimed, box-ticking, security nazis. I’ve occasionally come across a few of the former, and far too many of the latter.

        If I was an organization burning hours with the QSA to argue about compliance issues that shouldn’t be issues, then I’d escalate to the QSA’s management and ask for someone more experienced, or request the QSA’s management to intervene.


  3. It’s very important to force your QSA into buying into YOUR religion where “Your Religion” == custom, perfect, compensating and alternate controls.

    For PCI DSS Requirement 6.6, the “perfect balance of controls” is to implement a security controls system based on OWASP ASVS, and instead of verifying them with “SAST, DAST, or WAF” — verify the implementation in the application components e.g. OWASP ESAPI or force-dot-com-esapi. Further verification of components outside of the pre-verified state (or another unverifiable state) should use Burp Suite Professional and Fortify PTA with integration testing done by dev-test, not “Staging/Regression old-school style QA” or “Penetration-testing”. It’s not just the users (i.e. HTTP/TLS data and execution flows) that need to be tested, but also the data and execution flows that come from Web Services, Ajax Proxies, EAI/SOA components, et al.

    A WAF cannot see outside of the HTTP/TLS data and execution flows. Therefore, it is not a full control to meet the principles inherent in PCI DSS Requirement 6.6, even if the standard and your QSA currently say that it does.

    You said, “WAF whitelisted implementation (don’t yell at me, Andre)”. I’m not going to yell at you. I agree that whitelist-based web application firewalls can add value under certain circumstances, although you are ADDING additional points of failure and sources of breaches/compromise when you ADD a WAF pair. If this is a trade-off that the organization is willing to make — WAFs can provide security controls such as whitelisting input fields that the app does not currently whitelist or CSRF protection. WAFs can also provide missing logging/monitoring controls (however, it is now the source of these dangerous kinds of insertion points, at a much more risky and critical layer). WAFs always affect the performance of the app they are monitoring/blocking — yes, indeed when only in monitoring mode (note: this has been proven time and time again — go test it for yourself).

    There are other major issues with deployment of them. The primary issue that I see is that they break the app architecture. Network firewalls break the app architecture in better ways, because they provide boundary partitioning. App firewalls can only provide boundary partitioning through whitelist data validation — although they are very immature at doing so and most security control problems are not data validation related. The other features that WAFs provide, such as appsec monitoring, virtual patch, et al — these are signature based exactly like Intrusion Prevention Systems. IPS and WAF in this way are not boundary partitions, but instead are defeatable signature or anomaly technologies.
