ITCi 2007

This is the presentation that I gave earlier this week at the ITCi Conference in San Diego, California. It was well received and fostered a lot of interesting discussion.

My recording of the event on my laptop had enough problems to be distracting, so I gave up on using it to export a real-time presentation. Instead I will give my speaking points inline with my individual slides. If a good audio recording becomes available, I will put out a video version of this presentation synced with the discussion audio. I was hoping to make use of some of the new Keynote functionality, but the audio and speaking-position setup was a little questionable and I was unable to see my speaking notes, so I winged it freestyle. Everything seemed to go well in a free-form way.

Anyway. On to my presentation.


Everyone loves a title slide.

Fifteen seconds about who I am. I wanted to make it clear that I am a technologist and can actually discuss solutions to these problems as I am a huge geek; I live, eat, and breathe this stuff.

A couple of good quotes to get the audience into the mindset I’m going for here.

An overview of how I’m going to address the topic at hand.

Speaking points:

  • Anything that is measured in “Low, Medium, and High” belongs with the “bad metrics.”
  • If metrics are too challenging to understand, they will not be readily relevant to the business.


A great graphic from one of the definitive works in this subject area, to which I make frequent reference in this presentation:

“Security Metrics: Replacing Fear, Uncertainty, and Doubt” (Andrew Jaquith)

Though I disagreed with some of his advice toward the latter half of the book, I found him right on through much of it. I gave away a copy of this book at the end of the discussion to the person in the audience who contributed the most. I let the audience decide who that was, and it was pretty fun.
Speaking points:

  • Mention: Continuous Audit & Risk Assessments – Paul Reymann, Norbert Kuiper (a previous talk at the conference)
  • The hamster-wheel methodology of periodic identification, freak-out, remediation, and new-tool identification lacks valuation and prioritization and, Andrew Jaquith suggests, is only the easy part of risk management.
  • The problems are symptomatic, not systematic. Root causes remain elusive.

Speaking points:

  • Assets on servers, workstations and mobile devices? In aggregate?
  • So really what you want from metrics and data is to make your organization versatile, flexible, and quick to adjust to change and adversity.
  • I think of metrics as being like vectors; a metric needs not only a value, but a direction.
  • Through measuring, your organization will be able to react more quickly to change. You will know whether your projects/controls/implementations are successful, how much they cost in real dollars, and you will be able to track operational efficiency.
  • What use is deploying these costly frameworks, technical implementations, and policies if you cannot track their effectiveness and make effective changes to improve them?
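The vector analogy above can be sketched in a few lines. This is a minimal illustration, not anything from the talk, and all names and numbers are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class MetricReading:
    """One sample of a metric: a value plus the direction it is moving."""
    name: str
    value: float
    previous: float

    @property
    def delta(self) -> float:
        # Magnitude and direction of change since the last reading
        return self.value - self.previous

    @property
    def trend(self) -> str:
        # For a "fewer is better" metric, a negative delta is an improvement
        if self.delta < 0:
            return "improving"
        return "worsening" if self.delta > 0 else "flat"

# Example: open critical vulnerabilities, where fewer is better
reading = MetricReading("open_critical_vulns", value=42.0, previous=57.0)
print(reading.delta, reading.trend)  # -15.0 improving
```

The point is only that a bare value (42 open vulnerabilities) says much less than the value plus its direction (down 15 since last month).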

I pick on the vagueness of ‘threat’ metrics.

So I covered what the bad data problem is all about. What do we want instead?


What is key here is to identify the vital essence of your organization. What is key to the success of your business? There should be at least one, and not more than a few, key metrics.

Also worth thinking about are key metrics for your business unit or department. What demonstrates your effectiveness and success most clearly?

The granularity of Amazon's data, collected in greater volume than at any other company I have been able to find, allows them to find errors based on the behavior of this metric.

If traffic spikes, someone may have listed an iPod for $5 and word is getting around. If traffic drops off, there may be an outage or a performance hit somewhere.

Activity outside the standard delta or standard deviation can be quickly detected and analyzed to the benefit of the agile organization.
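That kind of detection can be sketched with a simple z-score check. This is my own illustration, not Amazon's implementation; the baseline window, threshold, and numbers are all hypothetical:

```python
import statistics

def flag_anomaly(baseline, current, threshold=3.0):
    """Flag a reading that falls outside `threshold` standard deviations
    of the recent baseline window (catches spikes and drop-offs alike)."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    if stdev == 0:
        return current != mean
    z = (current - mean) / stdev
    return abs(z) > threshold

# Recent hourly traffic counts forming the baseline
baseline = [1000, 1040, 980, 1010, 995, 1025]

print(flag_anomaly(baseline, 1015))  # False: within normal variation
print(flag_anomaly(baseline, 3500))  # True: spike worth investigating
```

An agile organization would route the `True` case to an analyst (or an automated check of listings and system health) within minutes rather than waiting for the monthly report.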

The aggregate score from vulnerability scanning of their production network. Focus on the monthly delta to determine the handling of risk and effort allowances. This was sufficient for his board in measuring exposure and change in risk in their critical environment.
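A minimal sketch of that monthly-delta approach, assuming aggregate scanner scores are archived per month (the structure and numbers are hypothetical):

```python
# Aggregate vulnerability score per month, as reported by the scanner
monthly_scores = {
    "2007-01": 812.0,
    "2007-02": 794.0,
    "2007-03": 861.0,
}

def monthly_deltas(scores):
    """Month-over-month change in the aggregate score; negative is improvement."""
    months = sorted(scores)
    return {m2: scores[m2] - scores[m1] for m1, m2 in zip(months, months[1:])}

print(monthly_deltas(monthly_scores))  # {'2007-02': -18.0, '2007-03': 67.0}
```

The board-level question then becomes simply "why did the score jump 67 points in March?", which is far more actionable than the raw scan output.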

Amazon’s SOX and PCI compliance requirements followed their financial systems. Other systems were not business critical and therefore out of scope for compliance. Since they already had lots of evidence and effective diligence in these matters, compliance was a fairly simple matter.

Speaking points (some optional notes seen in my post-it style note in the slide which would not be visible to the audience):

  • ISO 27004, but it will very likely be more of the same that is already available in NIST SP 800-55
  • Frameworks offer no practical recommendations on managing or monitoring and are highly open to interpretation. That would be why we are here at this conference and compliance is a billion dollar industry full of hand-waving
  • ALE may show that valuations of A > B, but that’s about it. A long rant about ALE can be found in the Security Metrics book. [ALE = Single Loss Expectancy × Annual Rate of Occurrence]
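The ALE calculation the bullet refers to is just SLE times ARO, which is exactly why it says so little beyond A > B. A quick illustration with hypothetical numbers:

```python
def ale(sle: float, aro: float) -> float:
    """Annualized Loss Expectancy = Single Loss Expectancy x Annual Rate of Occurrence."""
    return sle * aro

# Hypothetical numbers, for illustration only
laptop_theft = ale(sle=5_000.0, aro=2.0)           # 10000.0 per year
datacenter_fire = ale(sle=2_000_000.0, aro=0.005)  # 10000.0 per year

print(laptop_theft, datacenter_fire)
```

Two risks with wildly different loss profiles collapse to the identical annual figure, so ALE alone gives no guidance on which deserves attention first.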

Not a huge fan of this template. I noticed that hardly any of the other presenters used it.

I went through this slowly step by step to illustrate how this is a conventional, unclear, and possibly meaningless process.

Speaking points:

  • This is popular in government. As seen on C-SPAN. It is measurable, but it can be unclear what exactly it is measuring.
  • Assessment vs. audit is a perennial topic with me. The differing goals are the important differentiation.

Speaking points:

  • Security risks are especially variable
  • What unified platform is available? This seems to be where most talks leave off. But not today.

Dun dun dunnn.

This is the centralized platform strategy I had alluded to many times earlier in my talk.

  • Data sources are usually ready to integrate out of the box, assuming the application uses standard conventions
  • I would recommend having at least one metric to record and track the progress of every major deployment. Automated generation should make this a minimal cost, and your organization will be able to track the deployment and prove its success, or correct its failure, in real time instead of waiting for the next self-assessment, audit, or tangential operational indicator (where many organizations with lacking systems actually detect anomalous behaviors: when they impact production systems because of capacity or instability)

Speaking points:

  • Increased cost may come from managing multiple platforms for the same data-generating tasks
  • Lacking enterprise vision can be fine, but not if there is the possibility of duplicated effort yielding inconsistent results. This validates the benefits of a single architecture.
  • Though SIM solutions tend to already have a very robust offering of that functionality

Forrester has some interesting market projections here speaking about how the growth of the SIM market will start to capture the attention of moderately-sized businesses. I think that, if this happens, it will be because of the smaller and more affordable options in this market that do not require a capital investment in appliances or dedicated infrastructure.

My presentation selecting an enterprise SIM is available here.

If SIM deployments become the industry standard and you do not have a comparably performing system deployed, your organization may be at risk of appearing out of step with industry norms if an event occurs. Your legal counsel, and possibly compliance teams, should be on point in this (to me) vague area.

The Jaquith book pushes the Balanced Scorecard pretty hard, which is sound advice. My point of disagreement is that I do not like to advise clients, or anyone, to attempt to revolutionize all behaviors in one fell swoop. Additional reporting frameworks are risky to implement, because people (and therefore organizations) already have ways they feel comfortable doing things.

SIM reporting can be incorporated into any existing reporting structure and, through a series of mockups and pilot reporting methods, you can warm your executives into desiring this information instead of ramming it down their throats by c-level mandate.

In much the same way, I advocate using whatever compliance framework is the best fit for the organization, instead of whichever framework your advisor is the biggest fan of, to have the least adoption risk and the easiest transition into routine use.

By this point I had covered all of these points repeatedly in the discussion, but it’s always good to restate the key points I’m endeavoring to express at close.

Empowering insiders to surmount these challenges and goals is the best way to bring them to a successful result.

This may be an unpopular opinion, but I believe it to be an important one. I’ve seen too many resources applied to these challenges without sufficient leadership and internal knowledge in the past. It leads only to ineffective situations, an inefficient workflow, and a large bill.

Here I made time for anything more to discuss. Like the location of this presentation, for instance. (Here it is!)

Some of the references mentioned without which this presentation would have been considerably more difficult to put together.

..and thanks for coming. Please feel free to contact me regarding any lingering questions or advice. I’m happy to help.
