Trustwave’s Spiderlabs issued their annual report covering their work over the last year: 220 data breach investigations and more than 2,300 penetration tests. They were kind enough to share their findings with the rest of us, which I for one really appreciate. It has been a recurring theme of mine that we as professionals and organizations need to share data in order to build better understanding and awareness of the challenges we face, and of how we and our industry are performing compared to others in similar fields.
Naturally, some of the problems with sanitized datasets are that you must estimate their relevance to the sectors they represent, ponder which causes were most common in each client slice of the pie, and then daydream about which breach source was most common to each segment of industry. It’s worth looking at how other people have published their own versions of sanitized data. The best example is the VERIS framework and its resulting data publication and analysis. The best part about Verizon’s work is that they’ve been transparent and open about how they’re using the data, and everyone can contribute to it via their community web application in conjunction with ICSA Labs. I recommend that everyone perform their own due diligence before pushing data, sanitized or otherwise, to any third party. That being said, if the US Secret Service finds it to be a worthy exercise, it’s likely a good enough scenario for your dataset as well.
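For the curious, what makes VERIS shareable at all is that incidents get encoded as structured records rather than prose. The sketch below is only illustrative of that idea, loosely shaped around VERIS’s actor/action/asset/attribute framing; the field names and values here are my own simplifications, not the real schema enumerations.

```python
import json

# A minimal, illustrative incident record in the spirit of the VERIS
# actor/action/asset/attribute model. Field names and values are
# simplified assumptions for illustration; consult the actual VERIS
# schema for the real structure and enumerations.
incident = {
    "actor": {"external": {"motive": ["financial"]}},
    "action": {"malware": {"vector": ["web application"]}},
    "asset": {"variety": ["POS server"]},
    "attribute": {"confidentiality": {"data_variety": ["payment"]}},
}

# Serializing records as JSON is what makes them easy to sanitize,
# aggregate, and compare across contributors.
print(json.dumps(incident, indent=2, sort_keys=True))
```

The point of a common encoding like this is exactly the guesswork-removal argued for above: two firms describing the same kind of breach end up with comparable records instead of incompatible narratives.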
Some of the clear comparisons between the Spiderlabs and Verizon data and their respective conclusions have already been made, so I’ll try not to go there again, since Verizon’s Security Blog gave their own basic comparison of the numbers. The difference in the Spiderlabs and Verizon customer bases seems quite distinct based on the variance in their findings. Alex Hutton gave a hint on an insider threat panel today as to why this may be: it comes down to the context in which external agencies are engaged, which provides insight into why the Secret Service insider threat numbers were so much higher than the other sources’. Beyond the vendor disparity, the USSS is often engaged precisely when it is already known that an insider was involved in a breach incident. Clearly, depending on circumstances, people call the pen testers, the business technology incident response types, or the feds. It shouldn’t be any surprise that their numbers look different with this in mind.
If all this wasn’t enough to account for the disparity in data and resulting findings, it also seems that Spiderlabs has based their report on a 10:1 ratio of pen tests to incident investigations as mentioned in their BlackHat briefing last year.
Hopefully there will be more collaboration in data sharing and reporting efforts in the future, using a single standard for publishing this data, to remove at least some of the guesswork from the analysis of these genuinely interesting findings.
A few points which merit further discussion (or posthumous equine flagellation):
- Segmented networks are flat again thanks to MPLS! In 16% of Spiderlabs findings, they concluded that attackers were able to gain greater access to penetrated networks because of this transparency. Your systematic precautions are still only as strong as your lamest third party solution or implementation that you don’t centrally manage and monitor. The old segmented and controlled network zone philosophy is making way for the new flat landscapes of anything-goes (again), and it isn’t helping. The “industry visionaries” forget that the rest of us still live on current networks, without unlimited funds or the ability to retain top talent and re-deploy our entire infrastructures. So yes, perimeter controls alone are not sufficient. Yes, endpoint security is key. However, this whole “perimeter is dead” movement is overplayed. There is still a lot of value in ingress/egress filtering and netflow analysis. Perimeter controls are still useful for basic prevention and detection in the general case.
- Malware as a generalized vector and datapath is the most successful means of gaining access, harvesting valuable data, and sending it back to Russia, the largest destination for expropriated data. We know how most malware infects clients and what to do to prevent it, so why is it so overwhelmingly successful in these reports? It’s because organizations have settled for the minimum standard of functionality. Closing these security failings will eventually mean closing off general-purpose processing ability, and we’ll be there as soon as the market or regulators demand it. Likely not soon at all.
- One point of commentary in the Spiderlabs report is not readily apparent to many: the in-flight data capture that is common to most breaches encountered in present-day incidents. Archive data and back-end infrastructure appear to be better protected, if only because malware is doing a good enough job harvesting value from the low-hanging fruit of the world. Why work harder, spend more on processing power, and open up avenues for detection and removal if you’re getting what you need without it? Additionally, it may be that PCI-DSS and other efforts have reduced the utility of archive data to attackers. That may be the silver lining to what compliance efforts have done to information security management programs: a cleaner back-end.
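To make the ingress/egress and netflow point above concrete: even crude volume baselining over flow records catches the kind of bulk exfiltration these reports describe. This is a deliberately simplified sketch; real netflow analysis consumes exported v5/v9/IPFIX records, and the hostnames, addresses, and threshold below are invented for illustration.

```python
from collections import defaultdict

def egress_outliers(flows, byte_threshold=10_000_000):
    """Sum outbound bytes per (host, destination) pair and flag any
    pair whose total exceeds a simple volume threshold. A stand-in
    for the cheap egress baselining that perimeter monitoring buys."""
    totals = defaultdict(int)
    for src, dst, nbytes in flows:
        totals[(src, dst)] += nbytes
    return [pair for pair, total in totals.items() if total > byte_threshold]

# Hypothetical simplified flow records: (src_host, dst_ip, bytes_out).
flows = [
    ("pos-01", "203.0.113.9", 1_200),
    ("pos-01", "203.0.113.9", 950),
    ("pos-02", "198.51.100.4", 48_000_000),  # suspiciously large upload
    ("hr-desktop", "192.0.2.10", 3_000),
]

print(egress_outliers(flows))  # [('pos-02', '198.51.100.4')]
```

A fixed byte threshold is the bluntest possible instrument; per-host historical baselines or destination reputation would be the obvious next steps, but even this blunt version illustrates why declaring the perimeter dead throws away easy detection wins.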
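As an aside on why in-flight card data is such attractive low-hanging fruit: memory-scraping malware and DLP scanners alike identify payment card numbers the same cheap way, a digit-pattern match filtered through the Luhn mod-10 checksum. A minimal sketch (the passing value below is a well-known published test number, not a real card):

```python
def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn mod-10 checksum,
    the check used to weed out false positives when scanning captured
    data for payment card numbers (PANs)."""
    digits = [int(d) for d in number if d.isdigit()]
    if len(digits) < 13:  # PANs are 13-19 digits long
        return False
    checksum = 0
    # Double every second digit from the right; subtract 9 if the
    # doubled digit exceeds 9, then sum everything.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

print(luhn_valid("4111 1111 1111 1111"))  # True  (standard test PAN)
print(luhn_valid("4111 1111 1111 1112"))  # False (checksum fails)
```

That this check is equally useful to the attacker harvesting memory and the defender scanning egress traffic says something about how symmetric the in-flight capture game is.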
I mean, it’s interesting to hear about assorted malware methods, but if you work defense and response, is this really useful information for the audience it’s being presented to? Shouldn’t it really be more about detection and suspicious activity? Who besides malware writers, forensic/incident response people, ninjas, and crack CISOs is going to effectively utilize this information? It’s great to have reference material explaining why, along with some emerging guidance on how implementations should be run, but leaving off the course corrections, or leaving them up to the local neighborhood QSA superhero[ine], leaves too much up to chance.
So it’s cool and all to hear about the latest weapons of data theft in general terms, but who should be consuming this information?
- Most net/system admins just don’t have the comprehensive cluebase to deal with this kind of data. They end up with wild theories about what they should be doing on their network and, invariably, consultants will appear to try to clean up the mess at some point in the future. (Been there. Seen that. Have the t-shirt.)
- Non-technical business stakeholders will only be overly frightened by this information and act rashly, devoting large amounts of funds and effort to areas where they may not be best served in acting. This can reduce attention and funding for areas more in keeping with the real risk/threat model the organization should be tracking. This may be great for vendors selling magical compliance boxes, but it will likely hurt the organization’s ability to effectively execute its security program.
- Auditors and even reasonably hardcore specialized technologists don’t have the overall clue to develop standards in these areas. Hopefully they will not take it upon themselves to develop their own ad-hoc standards and checklists.
It’s kind of like undereducated parents who, between daytime soap operas, skim raw study data, random blog conjecture, and Wikipedia, and then conclude that vaccines cause autism. I would advise most to check with your favorite ninja before proceeding too far down an expensive course of action.
Let me tell you what all of these data programs and publications are, though. They’re what real practitioners have needed: data, and lots of it. Lots of people (myself included) have done a lot of complaining about how the industry has been driving itself instead of being meaningful and relevant to protecting data and communication systems. Sadly this hasn’t changed much yet, but it does appear we’re getting somewhere with our herd-immunity compliance and business/technology metrics programs, even at the amazing expense and astonishing amounts of time we have invested in them.
Finally we’re starting to see some data showing how the markets are moving, instead of random self-appointed experts pontificating and stroking their sagely beards about what should happen next. With data, we as an industry can make a case for the most important areas of concern, rank which problems should be tackled and funded before others, and track improvement and better ourselves in provable ways instead of taking the word of the self-appointed-expert-of-the-month club.
In short, we as information security practitioners can begin to prove that we’re doing a good job, or not, as the case may be. We can show by dataset that we’re making a difference. We can show why we deserve to be the expensive alpha-nerds we think ourselves to be. Why? Because our data and custom metrics say so. If you disagree, show your work.
- People in business are still mostly concerned with their ability to do business. There is still no compelling need to be secure in business operations as long as functionality is present. Quality of operational security is still very much secondary to the ability to execute commerce. No shocker here, but many people have lost sight of it. A McAuditor isn’t going to save you. Neither is a vendor selling you a product, though many would argue that their DLP/SIEM/logging product will. It will not, as malware and botnets will simply adopt slightly more expensive encryption and obfuscation to circumvent them. Throw in common encryption and behaviors that mimic authorized activity, and most turnkey solutions will be ineffective. Even smaller customers should enlist the skills of a seasoned professional, if only to review their enterprise plans or assess their environments for flaws.
- The threat of fines and business impact is still insufficient to garner enough interest in doing the right thing to protect customer data. The motives are clear: the cost of getting caught is still perceived to be less than the cost of an effectively planned and executed security operation. Credit card companies need to find the will to move to a more secure technology, but since no one seems to be directly financially responsible at present, the exponential fraud trend will continue until something gives. We’ll have to wait to see what that something is over the next few years.
- Antivirus remains nearly completely ineffective in combating modern malware. Malware authors remain ahead of the game in that they test their work product in labs before releasing it to targets. Most malware is commoditized, while a minority are advanced creations with multiple levels of anti-forensics, encryption, and/or specialized purpose. This analysis shows that common malware and botnets are cobbled-together Frankenstein-monsterish creations or purchased off the shelf of a crime how-to marketplace.
- Specific illicit data acquisition tools are still fairly conventional: memory dumpers, keyloggers, and sniffers, along with purpose-built specialized software targeting POS systems.