Application Penetration and Code Analysis for Non-Developers

Application security competence found at bottom

Like most techsec professionals, I’ve been asked to do more and more in application security matters, an area that I’ve usually seen specialist ninjas dominate due to the extreme technical depth the realm often requires.

I’ve written previously about how [in]competence seems to be very hard to self-identify in most humans, even when not dealing with areas of high complexity, so why not get some quality guidance so that one does not end up sounding like so many of our kind: talking out of their depth and giving bad advice.

So, okay. How far should someone who wants to remain a generalist, but wants to be able to offer some kind of limited, competent analysis and dialogue on application security, go without going off the rails into overreaching bad advice? Appsec is pretty much the biggest deal in the information security field since everyone adopted garbage technologies as lame development platforms a few years back. Writing secure code has always been important, but things used to be a little different. The appstack is now wide and deep, and if you own the appstream, you can pretty much access/subvert/rob/blind everything that matters in most environments.

What I should really do, I thought, is ask for advice on how much confidence someone who doesn’t specialize in this area should reasonably expect to have when putting together solutions in a realm where the best experts spend most of their waking lives knee deep in it while I’m all over the place. Who better to comment on the reach of a generalist in this area than a ninja who specializes in appsec and fights with the other experts on public mailing lists? To this end, I asked a friend of mine who is one of those specialized appsec ninjas what he thought a generalist should know about application security, how far they should go when they talk about subjects therein, and what were the most important areas they should be familiar with without making themselves part of the problem.

Not surprisingly, he had a lot to say on the subject.

The most senior, accomplished people you'll encounter

With my background as a sysadmin, network engineer, integrator, enterprise implementer, general technology consultant, and process/workflow/control/whateverist, people have come to me to solve complicated problems on a routine basis. The most interesting of these scenarios tend to be when I’m presented with a half-solution from very competent people based on how they would approach a problem, leaving the details to me to pull together and fit those square pegs into their matching round holes.

Big developers tend to think that their skillset is the pinnacle of all technology competence and nerd achievement, and that, perhaps secretly, all other technology people aspire to be them. If you have visited any big dev shop, but especially Microsoft, Amazon, or Google (and I have), you will have seen this first hand.

Additionally, you might discover the same god-of-all-the-lesser-nerds mindset in many talented high-end network architects and nanog-type network engineers, SAGE-type big-bearded UNIX admins, and other specialized people. They perhaps under-appreciate that there is a lot more art and craft to managing complicated systems than just writing code, routing packets, or chasing server uptime statistics.

To review:

  • As a generalist, how far can I go in being effective in this space?
  • What are some good authored sources of methodologies that people can read in order to begin thinking about how best to approach this iceberg situationally?
  • How is it best to employ experts effectively: how to pivot from analysis of unit test results and from running automated processes using assorted codebase analysis tools, and how far should a generalist go in performing the analysis themselves before punting to a specialist?
  • What would you say about the balancing act between developer time and secure coding practices; making the business case for quality and developer maturity instead of an absolute ship schedule?

This principal from a major application security SaaS vendor had this to say:

Network and sysadmin types with those backgrounds who have limited experience with any programming language (including Unix shell, BASIC, LOGO, or hardware robotics/electronics) and who can’t understand concepts like recursion or loops simply need not waste their time touching anything related to appsec or pen-testing. This is inclusive of pen-testing, social engineering, appsec, and even physical pen-testing. Social engineering newbies have Metasploit and/or SET as the #1 tool in their toolbox, even if it means paying $15k to Rapid7 for the Pro version of Nexpose and some mid-to-senior level pen-tester a few grand to automate SET with a custom GUI.

THEY STILL NEED TO KNOW THE BASICS!

That being said, I know very few network or sysadmin or even audit people that want to do pen-testing or appsec that DO NOT have at least a little scripting experience. A little scripting experience is going to get someone reasonably far in pen-testing and appsec.

A word on the tools: they suck.

Even when you run all of the commercial ones, they still suck in the wrong hands. They say a fool with a tool is still a fool, and it is 100 BILLION PERCENT TRUE. I do not want to cite specific people in our industry, but they exist. They have social-engineered companies who want appsec, and they run every commercial appsec tool (all app scanners, fuzzers, network vulnerability scanners, and static analysis tools) on everything they can get a URL for, a build button to drop companyname.com on, or a box to “make all” with. They are bunk, but people keep buying their garbage, which in turn gives them more ability to buy more tools, which makes everyone in the appsec industry look bad.

Then you’ve got other bunkos with a SaaS offering who haven’t updated their product in 5 years. It’s a Perl script that fault-injects HTTP parameters and pumps the HTTP responses through a massive regex. Then they have 19-23 year old kids with no degrees and no infosec experience, but perhaps three months of QA experience at A Major Game Company, verify the true positives by running through checklists. The concept has been tried in various other organizations, and it is well known as a “scan factory,” as if that’s somehow a good thing.

Clearly a dig at people who know who they are, but no one else does. I love it when people do that.

Again, the tools blow. You have to use your brain, whether you are writing all of the tools yourself or not. It’s the way you use the tools, or your own tools, that matters. Training kinda helps, but the problem is that SANS and Ethical Hacker programs just tell people the names of the tools and give the slightest introduction to them. Really, they tell you nothing about how to use them. Nobody tells anyone how to use the tools. It’s simply not written down or available on YouTube. Offensive Security has some cool training for network pen-testing, but they don’t really get into appsec proper.

The best training on appsec comes from Aspect Security and from reading the beejesus out of the OWASP website (and going deep into the OWASP community, both online and IRL), and perhaps memorizing the MITRE CWE/CAPEC. Combined with a good pen-test mentality (from Offensive Security, and from learning how to use the tools on your own by playing with them on test grounds or real-world environments, i.e. YEARS OF EXPERIENCE), this can make a ship leave the dockyards.

If you want to set sail with appsec, you are in for quite a disappointment.

NOBODY gets it. NOBODY.

You can memorize TAOSSA and WAHH but this isn’t going to do anything for you. You can write these books and still be totally clueless when it comes to fixing root causes and root issues in our industry. This is where even the best and brightest fail.

Anyone who has done their tenure at Aspect Security is very rooted in awesomesauce, and certainly there are other people who kick all of Aspect Security’s butts, but they are one-offs and usually esoteric specialists in some arcane appsec art form.

I’ve heard examples made of people everyone knows who specialize in XSS or SQLi, or who know everything there is to know about code audit in a particular favorite language, etc.

There are no appsec generalists. There are some super rockstars, but they are too busy at kicking ass and taking names to talk to anyone. So you have to totally learn this on your own.

YOU ARE ON YOUR OWN.

I suggest just becoming a rockstar. A search for “application security” on books24x7.com lists over 1500 books. I suggest reading all of those, and then doing the same on Safari Books Online. It should only take about two years of straight reading and doing nothing else to accomplish this, so it’s not that hard; it’s just hard on your eyes, and you can get a Kindle if they strain too much. You can glue it to your stairmaster controls and stay in shape while reading all of them!

After reading and knowing where to find the info you’ll need when you get stuck performing real world bid’ness, download every testing ground and every tool and use everything on everything until you get results that you like. This is likely going to take at least five years, and you’ll probably want to be testing real world things during this time, too. Think of it like earning five different CCIEs while being the only guy running BGP traffic engineering at Level 3. You have to work sick hard and play around with things a lot until you get them right. Also, you’ll want to use things like statistics and math (at least a little bit) in order to judge whether you are going in a good direction and being successful.

Using metrics and statistics to assess success? Hooray!

Finally, you’ll need to write your own tools, tool wrappers, and books. It’s like studying for law school finals and having to outline every chapter of sixteen four-thousand-page reporters, which basically means you’re writing a four-thousand-page book that summarizes the sixteen reporters plus the lectures and work you’ve done; but really it’s all about getting into the psychology of the court cases and who was involved, and trying to pretend you were in that courtroom. Legal metaphors aside, capturing the mindset is key.

But there are some shortcuts and things I’ve learned that help others succeed. It actually helps quite a lot to know something about a person’s background and what they are capable of doing. Becoming an appsec (or, IMO, any infosec) expert means being a trusted adviser to an organization. Fulfilling this need requires multiple people in multiple roles on multiple teams.

Yes. Teams.

Building your dream team is basically what this is all about; you can’t be a lone hacker cowboy. It’s not like that. You get a tiger team like in the movie Sneakers. Robert Redford did not do it without that crazy blind guy and Dan Aykroyd as the conspiracy nutball.

Moreover, in reality, the leaders are never as good as Hannibal or Redford. Everyone has to pull equal weight. When you work for a CISO, you need to have the same title as everyone else: “Assistant to the CISO”. Flat hierarchy. Everybody can probably do some of everything, but people count on each other to get their bit done, and done right the first time, and all that jazz.

I usually see a random title and suffix of working for “the office of the CISO” when I run into members of these teams.

So when I say to a team guy/gal, “I am going to help you with appsec and pen-test,” it means a ton of loading on them is going to happen. I am going to assume it’s someone like me 7 years ago, or maybe someone like Ian Gorrie, or somewhere in between, but also representative of other skillsets somewhat like ours, and probably as cool, but probably never going to be a ninja or super rockstar, even though Ian and I might be that cool someday.

I think I’m alright at some things.

On fundamentals:

Basically, if you really want to get good at appsec, get good at security first. Everyone else keeps saying the other way around, but I disagree. There are certain things that need to be learned in the right order. Starting out with tons of network security experience is good, as long as network security is not the end-goal of this person’s learning pursuits.

That said, a person needs some serious investment in the instructional capital necessary to get up to speed. The Secure Programming with Static Analysis book is not enough by a long shot. The Web Application Hacker’s Handbook, 2nd Edition, merely scratches the surface; however, the accompanying labs at mdsec.net are basically a must-have experience for getting started with app pen-testing.

Here are the five primary investments necessary to be a good appsec dood in 2012:

  1. Start in Appdev.com Java land with Java EE Programming: Servlets and JSP Fundamentals. Build all of the projects and code in the course. Supplement learning servlets, JSP, JavaBeans, and JSTL with other online or book resources. Move on to the servlets, JSP, and JSF content, which goes further past the MVC model and more into framework specifics like JSF and Struts. You’ll probably want to take all of the additional courses in the Java Enterprise section, but be sure to take XML Development in Java before hitting the EJB or Spring stuff too hard. Supplement it with the Java Brains free online courses. Do not pass up the Hibernate courses, and before moving on from the Appdev.com courses on Java Enterprise, make sure you have a solid foundation of knowledge of the JAX-WS stuff. You are now nearly ready to assess the OWASP WebGoat and GoatDroid codebases.
  2. Play with the OWASP WebGoat and GoatDroid implementations to understand them fully. Note that you’ll want to import these into many build tools and environments: not just Eclipse and Jenkins, but also IntelliJ IDEA, NetBeans, and many other CI servers. Find other code to build. Build it in every build tool. Being a build and release engineer is a primary skill set of any appsec professional worth their salt. You’re going to get 100 questions about this kind of stuff on the first day at any client site. You have to know your stuff when it comes to building and releasing Java Enterprise code.
  3. Rock on with the PluralSight learning offerings. I suggest starting with the Android courses, in order, and then moving on to the iOS learnings before anything else.
  4. Check out raywenderlich.com and other sites like it.
  5. Build your own projects and then tear them apart. Mobile apps and the web services they consume are really paramount in 2012. They are our new low-hanging fruit and they are important. Expect to see more and more use of self-modifying/self-checking code everywhere (mobile/cloud/web-service apps). You’ll need to know how to deal with this stuff at some point, so you might as well start learning now. Get yourself a copy of Surreptitious Software and Practical Malware Analysis after dusting off your IDA Pro Book kid gloves.

A special note here: Javascript and HTML5 are insanely good investments. haXe will continue to be a niche language that is “nice-to-have” because of its future-looking capabilities. John Resig recently stated that Khan Academy will have most of its programming courses in JavaScript. This does not necessarily mean that Scratch (really Squeak, which is really Smalltalk) isn’t a useful, or better, first programming language. People need to learn the basics of OO, especially once they reach a certain point. Someone who has followed my 5-step approach above will be at this point somewhere near the middle or end (depending on how much previous experience they have).

I highly recommend Object-Oriented Software Construction as well as Core Security Patterns at this point in the career. Almost everything you build should be in Java (or a JSR-complete language such as JRuby or Jython). Yes, you will also want to know enough Obj-C to strangle yourself, as well as glue frameworks such as PhoneGap. Don’t bother with MonoTouch, at least not yet. We’ll get to that.

There will be a new book on Web Security Patterns around June 2012. This book or one like it will probably be pivotal in the way that System Assurance: Beyond Detecting Vulnerabilities should have been. Eventually, some developers will start to “get it” when it comes to appsec problems. They’ll put in the work ahead of time. Most won’t. Most will try to hide behind seemingly secure frameworks such as .NET.

.NET will remain a revolutionary, secure approach to handling the appsec issue. This is why you must learn it last. Not only is it very secure out of the box, but almost all of the issues that need looking at during a pen-test and/or secure code review involve intricacies that can only be learned after Java Enterprise, the Android SDK, and the iOS SDK are learned. I also highly suggest learning .NET with Mono instead of without. There are many benefits to doing it the way I described, but to put it in language that you won’t understand, let me give this silly analogy: you learn to play the clarinet before the flute. You just do, because that’s how it’s done. You wouldn’t want to learn the flute first, because that would mess it up.

So, there you have it. Learn Java before .NET. It will also be easier to break things when you use a system that is easily breakable. Using this logic, it may be best to learn Perl before Java, but really Java is the most important because anyone who does any static analysis will be focused primarily on Java Enterprise.

Most other things to learn are generally a waste of time and should be considered optional components in the learning ladder. For example, there’s not a lot of reason to learn Struts2 if you learned Struts, or to learn iBATIS if you learned Hibernate. You can deal with those problems when you run across them in the real world (if you run across them at all). This also brings me to the things that developers don’t even know are there; but these things really are there, and you do have to learn them. Things like memcached. Good developers do learn these things, and as a security professional, it makes sense to stay at least one step ahead of those geeks. So learning the external components that connect apps is paramount. You’ll need to read a bunch of Oracle, TIBCO, TeraData, and similar books, as well as get some hands-on time with these components, at least in a lab or wherever your build environment is for learning purposes. Try to get as many of these unique solutions into your head as you can: cache, cluster, and grid technologies.

He had more to say later about appsec in general, and about what, specifically, someone willing to make the extreme effort to become a practitioner should do and learn.

Dood. It’s easy.

#1 : Don’t be a douchebag.

If you really can’t code at ALL, especially on ANY/EVERY platform in nearly EVERY language, don’t bother doing appsec. I don’t mean you have to know the differences between assemblers, but it would be nice to know who or where to go to if you need immediate answers to programming questions. Try SafariBooksOnline, Books24x7, Ebrary, a digital library, etc. Build a meta search engine and your own dorking libraries.

DON’T sell yourself on being an appsec guy like a douche who doesn’t know that he can accomplish the same from a Python script as with a Unix or NT shell. (See also most QSAs)

Complaining about the general badness of auditors has been done before, but since so many of the people I run into view assessment = audit = pen test = appsec (a claim that should pretty much deprecate any other opinion they might deliver afterwards), it bears repeating.

#2 : Learn HTML, XML, Flash, Flex, AIR, Web Services, SQL clauses, SQLi, Javascript, XSS, CSRF, Ajax, RIA frameworks, Node.js, MongoDB, CouchDB, Project Voldemort, et cetera

Best to know these before you know every detail of HTTP and SSL/TLS; too much network turns you into a network guy. At some point, try to learn about HTTP header injection and SSL vulnerabilities (especially if you get stuck or bored on any of these other suggestions), especially since header injections lead to HTTP response splitting and file download injection vulnerabilities.
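To make the header-injection point concrete, here's a minimal sketch (the `check_redirect` helper is hypothetical, not from any framework) of why a stray CR/LF in user input headed for a response header is the whole ballgame:

```python
# Hypothetical check_redirect() helper: if user input reaches a response
# header unfiltered, CR/LF lets an attacker terminate the header block and
# split the response (injecting their own headers, or a whole fake response).

def check_redirect(user_supplied_url):
    """Reject values that would allow HTTP response splitting."""
    if "\r" in user_supplied_url or "\n" in user_supplied_url:
        raise ValueError("CR/LF in header value: possible response splitting")
    return user_supplied_url

# Classic splitting payload: end the real response early, start a fake one.
payload = "/home\r\nContent-Length: 0\r\n\r\nHTTP/1.1 200 OK\r\n"

try:
    check_redirect(payload)
except ValueError as err:
    print("blocked:", err)

print(check_redirect("/home"))  # clean value passes through
```

Real frameworks do (or should do) exactly this check before emitting any header built from user input.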

I suggest you bust out the xmllint shell on HTML and XML files, including live sites via HTTP. Kill two birds with one stone. Libxml2 comes with xmllint(1) and is installed under Mac OS X and probably your Unix of choice (e.g. Linux, Cygwin, etc.), or it can be within seconds or minutes, unless you are on a soft keyboard or your keyboard and operating system are in a different language. If there are real-life distractions, then you need to get rid of them.
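In the same spirit as xmllint, a well-formedness check is a few lines in any language; here's a Python sketch of the equivalent (the sample documents are made up):

```python
import xml.etree.ElementTree as ET

good = "<a><b attr='1'>text</b></a>"
bad = "<a><b>unclosed</a>"          # mismatched tag: not well-formed

def well_formed(doc):
    """Roughly what `xmllint --noout` tells you: does the document parse?"""
    try:
        ET.fromstring(doc)
        return True
    except ET.ParseError as err:
        print("parse error:", err)
        return False

print(well_formed(good))  # True
print(well_formed(bad))   # False
```

xmllint remains the better tool for live sites (it has an HTML parser mode and an interactive shell), but the principle is the same.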

Download SWFs into SWFScan. It’s free. You like free. You might also want to automate with Nemo 440 or check out the commercial Flash disassemblers/debuggers. You’ll be right on your way to FlexBug in no time! There are OWASP videos on Vimeo. They talk about stuff that you should listen to.

Learn how HTTP communicates inside an enterprise. It’s usually cleartext REST or just plain XML-serialized data (DEFLATE-compressed) that is sessionless; it’s so easy that you don’t even need Firesheep. At this point you might want to learn everything about Base64, compression, and light forms of encryption and session management (besides HTTP session management).
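A quick sketch of what decoding that kind of traffic looks like. The XML payload here is made up, but the Base64-plus-DEFLATE round trip is the real mechanic, and Python's zlib handles the DEFLATE part:

```python
import base64
import zlib

# Made-up payload standing in for sessionless XML-serialized app traffic.
xml = b"<order><item id='42'/><sessionless>true</sessionless></order>"

# What often goes over the wire: DEFLATE-compress, then Base64-encode.
wire = base64.b64encode(zlib.compress(xml))

# What you do in your proxy: Base64-decode, then decompress.
recovered = zlib.decompress(base64.b64decode(wire))

print(recovered.decode())
```

Once you recognize the `eJw...`-style prefix of Base64'd zlib data in a request body, this two-liner is how you peek inside it.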

SQL has clauses. Know which ones you can inject into and why. Know it for every popular language and RDBMS. Read a few hundred SQLi cheatsheets. SVN down the latest sqlmap, run it every day, and read the code. Know how to save an HTTP request to a file and automate it. At the very least, run Havij Free Edition through Burp or Fiddler and reverse engineer what it is doing so that you can learn how to do that stuff yourself. I want to reverse engineer the commercial copy of Havij because it claims to support SQLi through ModSecurity!
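The WHERE clause is the canonical injectable one. Here's a self-contained sketch using SQLite (any RDBMS behaves the same way) contrasting string concatenation with a parameterized query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", 0), ("root", 1)])

evil = "nobody' OR '1'='1"

# Injectable: user input concatenated straight into the WHERE clause.
injected = conn.execute(
    "SELECT name FROM users WHERE name = '" + evil + "'").fetchall()
print("injected:", injected)        # every row comes back

# Parameterized: the driver treats the input as data, never as SQL.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (evil,)).fetchall()
print("parameterized:", safe)       # nothing matches the literal string
```

The same trick works against ORDER BY, LIMIT, and the rest, each with its own quirks (ORDER BY, for instance, can't be parameterized in most drivers, which is why those cheatsheets matter).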

Learn Hackvertor and study Javascript obfuscation/compression/minimization. Good. Now learn those concepts without Hackvertor, by implementing your own language that’s exactly like Javascript and ActionScript. Study the OSSP js C-based interpreter source code and think about what it would take to build a Javascript fuzzer. Keep learning more Hackvertor this whole time. XSS will be easy to understand by this point. I suggest you also check out Casaba x5s and all of the work by thornmaker, .mario, and sirdarkcat. There are also cool tools such as DOMScan worth checking out.
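As a sketch of the fuzzer idea (seed corpus plus dumb mutations; the snippets and mutators here are invented for illustration), the skeleton is tiny. Everything hard lives in the harness that feeds cases to a real JS engine and watches for crashes:

```python
import random

# Invented seed corpus and mutators; a real fuzzer would use a big corpus
# and pipe each case into an actual JS engine, watching for crashes.
seeds = ["var a = [1,2,3];", "function f(x){return x+1;}", "a[f(0)]++;"]
mutators = [
    lambda s: s.replace("1", "0x7fffffff", 1),  # swap in boundary numbers
    lambda s: s[: len(s) // 2] + s,             # duplicate a prefix
    lambda s: s.replace("(", "((", 1),          # unbalance delimiters
]

def mutate(rng, count=5):
    """Yield `count` mutated test cases from random seeds."""
    for _ in range(count):
        yield rng.choice(mutators)(rng.choice(seeds))

for case in mutate(random.Random(1)):  # fixed seed: repeatable cases
    print(case)
```

Grammar-aware fuzzers generate from a JS grammar instead of mutating strings, but this mutation skeleton is where most people start.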

The rest will be easy by this point. If you can write a Javascript fuzzer, you should be able to also write one that targets unmanaged code. Now you will want to learn how to put backdoors in managed code, including their frameworks. Learn the callbacks and interactions between all of the major VMs with various kinds of code e.g. Java Enterprise, Dalvik, C# ASP.NET, PHP, etc. Learn how to “fuzz” VMs with various code such as running/compiling anything with Parrot VM, as well as the weird stuff you can do with JaCIL, a CLI to JVM Compiler (or IKVM.NET). Learn Reframeworker.

At some point, should you be looking for other ideas: learn PHP security. PHP is very popular with people who don’t have time to learn things right. It’s also fortunate (or not fortunate depending on your perspective) that PHP is highly configured and lots of application security mistakes can be made in the configuration of PHP in addition to the code. This is great for those with a LAMP background.

#3 : Learn App Pen-testing

They love it

If you want to do pen-testing, please try not to get suckered into the whole ego trip of it. It is cool. Your mom even thinks it’s cool. Hot members of your sexual preference think it’s cool. My girlfriend thinks it is cool but ignores me. There’s nothing wrong with that. I kind of like it that way.

Pen-testing is a joke. It’s cake. You steal all of the ideas that laresconsulting.com has presented and combine them with your SQLi, XSS, and read/write inclusion knowledge. I would probably use Core Impact if I oversold non-appsec pen-tests to clients.

Appsec is the bottom of the vomit bucket. Every target has an XSS or a SQLi, or SOMETHING ELSE. Load up XSSF or sqlmap-dev and get to work. But first you need to do what laresconsulting.com cannot do: find an XSS or SQLi in the first place (let alone that SOMETHING ELSE, which is usually a read/write include attack, file upload attack, business logic flaw, etc). For this, you only need to do a few things. I’ll run through them here as fast as possible, but it will be confusing. I don’t want to make it too easy for you.

Run Netsparker CE in Fast (No JS, max speed) mode through Fiddler. Configure Watcher to process sessions offline when you are done. Look for High issues labelled User Controllable HTML Attribute. Double-click them. Configure x5s (you have to learn these tools a bit first, AND you have to know XSS and SQLi). Replay the requests in Fiddler a few at a time until you hit XSS jackpot in x5s or XSSRays. Start emailing targets with URL-shortened and HTTP/JS-obfuscated links that go to XSSF. Exciting.

Next you need to find SQLi, which Netsparker CE can do against MySQL or MS SQL (sometimes). Burp Pro Scanner will find SQLi in lots more things than Netsparker CE (focus on the user-controllable suggestions from Watcher first, because you’re sooner to hit jackpot than with zero knowledge), but the cool thing about Burp is the ability to play around, view things (it’s a good idea to know EVERYTHING about the data going into Burp so that you can carefully play out decisions on what you are going to send in HTTP requests; Burp Pro has a Search feature, btw), tweak the settings, and automate.
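The replay-until-jackpot step boils down to something like this sketch: push a unique marker through each parameter and grep the response for it coming back unencoded. The `render` function here is a stand-in for a real HTTP round trip through your proxy, with a deliberately planted bug:

```python
import html
import uuid

def render(params):
    # Stand-in for the target app: echoes 'q' unencoded (the bug)
    # and properly escapes 'lang'.
    return "<p>{}</p><i>{}</i>".format(params.get("q", ""),
                                       html.escape(params.get("lang", "")))

def probe(params):
    """Return the names of parameters reflected without encoding."""
    findings = []
    for name in params:
        marker = "x5s" + uuid.uuid4().hex[:8]          # unique, greppable
        test = dict(params, **{name: "<" + marker + ">"})
        if "<" + marker + ">" in render(test):         # reflected raw?
            findings.append(name)
    return findings

print(probe({"q": "hello", "lang": "en"}))  # only 'q' reflects unencoded
```

That raw-reflection check is roughly what x5s and the User Controllable HTML Attribute findings in Watcher are flagging for you; a real payload still has to be crafted for the specific reflection context.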

Browser tools:

I suggest the Multi Links and All-in-One Gestures add-ons for Firefox, which are especially good when combined with the Web Developer Populate Form Fields drop-down or Firebug. I also like Fireforce with the skullsecurity lists (rockyou-75.txt is my current favorite, but I tend to mix it up with L517 and interesting related data). Fireforce is useful for when you don’t have a login and haven’t yet stolen credentials with XSS or wherever else.

You tried to log in with SQLi, right? If you use any other Firefox add-ons, then you’re an idiot. Learn how to do it with a bookmarklet. Also see rule #1: don’t write a 40-line Python script to do what you can do on the fly with a 14-character Unix or NT shell-fu command line. Burp will be able to handle all of that. Repeater is your new browser. You use Proxy History and the Comparer tool, too. Or Intruder. I use all of these tools. I hate Target and Spider, and you will, too. Better to automate stuff faster using APIs and fast things you find around. I am guessing that O2 probably does this ok, but I’d probably just figure out how to do it at a Unix or NT shell (wherever I happen to be at the time; as a side note, it’s nice to have some statically compiled tools for various platforms, especially something like that previously mentioned Parrot VM). I really prefer to have Ruby, PHP, Python, bash, Perl, OSSP js, OpenSSL, and a few other things around whenever I’m on a command line, especially calculators like bc(1) and dc(1).

It’s nice to know more than a few interpreters by heart. You might want to learn some statistics and encryption along with your shell-fu. If you can’t figure out a potentially vulnerable URI with potentially vulnerable insertion points (parameter names, parameter values, URI/REST/arbitrary parameters, known or arbitrary cookies, known or arbitrary headers, etc) using Burp Pro, then you may want to go back to x5s. I prefer attacking insertion points that I know affect the app itself, which may require a lot of workflow using your browser. Work for it! Use QA techniques such as equivalence classification and pairwise testing to provide some focus and attention to what you are doing. Get some books on exploratory testing and apply the techniques to security testing. Make sure you understand your target environment as best you can with the resources you have (also see laresconsulting.com techniques or similar).

Yes, RDP hijacking is cool, especially when you’ve got Forest Admin and recreate what the current NT admins think is their Domain infrastructure. What would be better would be to have system access to all of the target’s public-facing RDBMS servers, and an XSS Proxy or XSS Tunnel inside a few key employees’ browsers, so that you can constantly spit DriveSploit / MSF browser_autopwn / Fast-Track fu all over their target OS environment. You probably got RDBMS server access through a UDF injection or a silly stored procedure, didn’t you? UDF is a cooler acronym than RDP. XSS is a lot cooler an acronym than most people are willing to credit it.
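On equivalence classification: the point is one representative per class of input instead of every possible value. A sketch (the classes here are invented) of how quickly that focuses the case count; full pairwise coverage would then prune this even further:

```python
from itertools import product

# Invented equivalence classes: one representative per class of input.
classes = {
    "id":     ["1", "-1", "9" * 40, "abc"],  # normal, negative, huge, non-numeric
    "role":   ["user", "admin", ""],         # known, privileged, empty
    "format": ["json", "xml"],
}

# Full cross product of representatives: 4 * 3 * 2 = 24 focused cases,
# instead of hammering the effectively infinite raw value space.
cases = list(product(*classes.values()))
print(len(cases), "cases over", list(classes))
```

Pairwise testing would shrink those 24 down further by only guaranteeing that every *pair* of class values appears together at least once, which is usually where the bugs hide anyway.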

Remember, laresconsulting.com has to do a lot of pen-tests locally (e.g. in the target’s Porsche while sitting in their parking space). You can do this stuff from space.

–update–: Ok, as for the above, I barely do any of it anymore. I just don’t have the time while on assessments. I look for higher-value AuthN/AuthZ targets most of the time. For the basic bug checks, I just slam the host with the RAFT and SVNDigger lists (directory configs and lists first, followed by files) using a properly configured DirBuster. While that’s running, I prop up SearchDiggity on the site if it’s an Internet one (or do similar digging myself if it’s something I have to Intranet or VPN into). The best-case scenario is that I have root access and can grab the source code, throw it into Fortify SCA (or run SecurityScope), and/or run audit tools, especially cvechecker.

I also tend to look for high-value vulnerabilities first and try to fit parameters to them. For example, if I see a “/” in any parameter values, I hit them with path traversals. If I see any “orderby/desc/asc” parameter names, I hit their values with SQLi. Anything that includes user-controllable HTML gets some custom XSS checks by hand.

One final point (since I get source code a lot): Fortify SCA can easily be made to do useful things when you specify custom analyzer views with taint:web and diagrams showing.
–end update–

A PHP example:

When you hack your would-be PHP-dev friend, jump on their box and install modsec and ossec, and set them up together; probably get them talking to OSSIM. Whitelist parameter key names and their reasonable values, as well as any other insertion points. Whitelist URIs and REST parameters if applicable. Set up ossec to monitor the integrity of the frameworks, containers, content, et al. Show your PHP buddies how to run fimap. If they are total webnerds and can’t do command line, get them going with Arachni’s web portal and RIPS’s web-friendly thing. If you have money, probably get them HoneyApps, Metasploit Pro, Netsparker Pro, and some WhiteHat Security Sentinel API action (for things like HTTP response splitting testing). Go back to your days of installing rpcapd on everything. Import Netsparker Pro and pcap data into Metasploit Pro.

As either attacker or defender, set up some nice Fierce-v2, nmap, Medusa/etc, Nikto, DirBuster, skipfish, and OpenVAS automation with any of your other existing tools. Sometimes these tools can be wicked fast at spotting something that you want to find. Same with stuff like GoogleDiggity/BingDiggity, SHODAN, et al.

Lots of this type of automation.

#4 : Learn how to go from a bunch of ownable code to a bunch of less ownable code.

The key here is to be a total badass with every static analysis tool. You won’t be able to do this, because you probably didn’t spend enough time learning the language callbacks to their frameworks. Without callback knowledge, you’re kind of stuck, so I’d recommend learning Ounce [AppScan] first if you are in this rut. Unfortunately, getting Fortify means you’ll have access to tons of languages you probably won’t use. Ounce is cool because you will get information about lost sinks. Other tools are less common, but people are badasses at all 4 of these: Ounce, Fortify, Armorize, and Checkmarx. You could probably run Klocwork, Coverity, GrammaTech, and others on Java Enterprise code, but probably not C# or PHP. You’ll learn fast techniques to scale and learn these, but it takes time. Nobody writes about it, but Cigital, Gotham Digital Science, iSecPartners, and Stach & Liu seem to know way more about these tools than others, including Denim Group and AsTech Consulting, or ISS for that matter. Heck, I’d say those 4 know more about the top 4 static analysis tools than the vendors themselves. It’s similar with web application security scanners, but in that case the free one works better than most of the best commercial ones. Remember that this technology isn’t rocket science: it’s a satisfiability solver, or perhaps an automated theorem prover at best, but usually just mostly (lots of adverbs here, sorry, but it’s true) an AST parser.
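The "just an AST parser" crack is easy to demonstrate. Here's a miniature static checker over Python's own AST that flags calls to taint-style danger sinks (the sink list and sample code are invented); commercial tools layer data-flow analysis on top of exactly this skeleton:

```python
import ast

# Toy sink list; real tools ship thousands of sink/source/passthrough rules.
DANGER_SINKS = {"eval", "exec", "system"}

source = '''
import os
user = input()
os.system("ping " + user)
print(len(user))
'''

findings = []
for node in ast.walk(ast.parse(source)):
    if isinstance(node, ast.Call):
        func = node.func
        # Handle plain names (eval(...)) and attributes (os.system(...)).
        name = func.attr if isinstance(func, ast.Attribute) else getattr(func, "id", "")
        if name in DANGER_SINKS:
            findings.append((node.lineno, name))

print(findings)  # [(4, 'system')] -- user input flowing into a shell command
```

What separates the serious tools from this toy is exactly the part the quote is pointing at: tracking whether tainted data actually reaches the sink, across functions, frameworks, and those language callbacks.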

This gets really annoying, so the first thing to do is go to ohloh.net and find all of the largest open-source projects, especially ones with lots of developers and lots of churn, and build them in every available build tool for that project or the language(s) the project components can be built with. Then, try your own free AST parsers on them, such as all of the ones found in Yasca (but try tons of static analysis tools). NCSS is one nice thing to know for scalability purposes here. If you can get your hands on very powerful continuous integration tools, I think that will help your cause, but the important part is to keep building code, especially code under constant churn. Every directory on your OS should be linked to a Git or SVN repo. Maven should be automated like mad.

I’ve gone a few different directions on how I view security testing for developers. I have always felt there would be value to adding fault-injection checks to HtmlUnit or a Javascript or Flex/Actionscript component testing framework. However, there are a lot of subtle flaws in web application security that I think are best demonstrated in Burp Pro by saving a session file of just the Repeater in order to share it with others. The SAZ file in Fiddler is another option — it’s cool that Fiddler can also output XML and Microsoft Test Professional test cases. I am inclined to think that these are great tools for pen-testers, quality testers, and development testers — they will have much more lasting value than custom integration scripts.

# 5 : Start combining everything.

Mix it up. You’re an AppSec DJ. Girl Talk will say your mashups are a bunch of crackups.

I decided to update this again. I am installing RIPS for PHP today. I saw it in action earlier today and compared the results to the most recent versions of HP Fortify SCA and Armorize that were tuned by Vinnie Liu and his team.

I am in shock that an open-source web portal security-focused static analysis tool would BEAT OUT the pro commercial tools categorically. It hands-down took the first prize by a million points.

–update-- Fortify has since gotten a little better, but RIPS is still quite good. It has failed to run for me a few times lately when Fortify worked.

Another tool that works primarily at runtime, but also scans PHP (and CFM, JSP, JSPX, ASP, ASPX) files, is inspathx. It is in development and I am working with its developers. It’s open-source, command line, and written in Ruby.

Arachni, another open-source project written in Ruby, will have a web portal that rivals the HP Assessment Management Platform (AMP).

Between RIPS and Arachni, it appears that sysadmins, network engineers, network security types, and SOC people can now help out with and/or learn application security, including code review. I would also combine these tools with ModSecurity feeding OSSEC feeding OSSIM (in addition to other hooks in OSSIM such as PADS, Snort, and Nessus/OpenVAS).

update– this paragraph above is still fairly good, although I have heard of some improvements to log-management-like components in the past year. Metasploit Community Edition is also really nice.

On working in enterprise environments:

So… um… it’s apps. XML is like 51% of data exchanged by apps in enterprises and it’s a lot like HTML. So learn that right away! If you have a Unix background, I suggest installing the libxml2 binaries (this is the default on Ubuntu, Cygwin, and Mac OS X, I presume, or it SHOULD be) and typing xmllint. Then read the --help output and man page. Then start playing with it and with XML and HTML files. You can even feed it a URL and drop to a shell. I recommend doing that first and using “help” inside the shell; you’ll figure it out real quick. Then learn to program more with XML. There are a few other ways of doing that on the CLI, like xmlstream or some other lame commands, but you’ll probably want to learn lxml for Python, or whatever language you want to learn or already know. I highly suggest the Refactoring HTML book for learning this too.
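
A quick way to start down that path without installing anything: Python's stdlib `xml.etree.ElementTree` (lxml offers a richer superset of the same interface). The servlet/param document below is a made-up example of the kind of config you end up mapping parameters against:

```python
import xml.etree.ElementTree as ET

# Parse a small XML document and pull values out with XPath-style paths.
# The document is a hypothetical app config, just to have something to walk.
doc = ET.fromstring("""
<config>
  <servlet name="login">
    <param key="db">users</param>
  </servlet>
  <servlet name="search">
    <param key="db">catalog</param>
  </servlet>
</config>
""")

# Enumerate the app's insertion points the way you would when mapping
# request parameters back to server configuration.
for servlet in doc.findall("servlet"):
    print(servlet.get("name"), "->", servlet.find("param").text)
```

The same `findall`/path idiom is what xmllint's shell gives you interactively, so the two reinforce each other.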

Then you can go in multiple directions with that XML/HTML knowledge. You can use it for code knowledge, server configurations, as well as reading the crap in HTTP responses. HTTP is pretty big. Besides the Offensive Security network pen-test stuff, you’ll want to enhance your ability to apply this to the apps. Sure, SSL is important, but ssllabs.com almost precludes getting too fancy with the openssl CLI or stunnel. RFC 2616 and its relatives are paramount to understanding HTTP, but really you want to learn the web proxies, especially Fiddler2 and Burp Suite Free/Pro. You can skip this knowledge and become super elite, but why? Web proxies, and MITM proxies in general, are awesomesauce for speaking a common language with others and showing them things. The Microsoft Press Hunting Security Bugs book is great and it comes with web-based companion content. Really, you want to master something like Mallory instead, but a broad knowledge of all types of Web/MITM proxies is going to be a boon. You’ll probably want to learn uhooker, Echo Mirage, and WPE Pro as well while you’re at it. ImmDbg/Olly or whatnot might be great. You can play with protocols like an appdev designing and implementing the protocols in apps, or better. It’s RE awesomesauce. There aren’t many good classes on RE, but I’ve seen the SANS material and it’s not horrible. Their RE and advanced exploit-dev classes are much better than their webapp pen-test classes. There are a million ways to skin these cats and I recommend learning a bunch and moving on to getting closer and closer to the apps you intend to understand.

Now that you’ve got HTTP under your belt, try out the different test grounds and real-world apps. See the names and types of parameters and how they work together: their associations to the page and to other parameters. Try to understand how the parameters might work on the backend, whether a web service, another service like LDAP, an XML query structure (e.g. XPath, which you can also learn via xmllint), a database, or files. Fall back to that SMTP know-how you learned from the Offensive Security stuff; email is common in app forms and is heavily service-oriented. I think it really helps to learn basic QA 101 here: knowing how not to repeat testing the same stuff over and over (removing equivalence classes, or recurring idioms, to say it another way). Know how to get to the “meat” of the problem that you are trying to solve. This is your brain at work! Combinatorial explosions are a neat concept; it’s like mashing buttons and finding awesome combos! Mash up some parameters and see how they go. Start with things like login and password fields. Sometimes the authentication is done via a single query string, so you can combine stuff like ‘ OR 1=1/* (for the login) and */– for the password. Understand that if you want to do security, there are no shortcuts and that everything MUST be tested, but realize that some kinds of testing will give you quicker results (and better/faster understanding). Build some fuzzers and fault-injectors now!
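
To see why that login/password combo works, here is a sketch of the vulnerable concatenation pattern; the query shape is a hypothetical example, not any particular product's code:

```python
# Sketch of WHY the ' OR 1=1/* payload works: the vulnerable pattern is
# string concatenation of both fields into one query string.
def build_login_query(login, password):
    return ("SELECT * FROM users WHERE login = '" + login +
            "' AND password = '" + password + "'")

# Normal use:
print(build_login_query("alice", "s3cret"))

# Injected use: OR 1=1 makes the WHERE clause always true, and the
# /* ... */ pair comments out the password check entirely.
evil = build_login_query("' OR 1=1/*", "*/--")
print(evil)
# -> SELECT * FROM users WHERE login = '' OR 1=1/*' AND password = '*/--'
```

Once you can predict what the backend query looks like, picking payloads stops being button-mashing and starts being grammar.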

What if someone doesn’t want to spend a decade learning all of that?

There are some really cheap wins in appsec that are just brute-force awesomeness. Like ProxyFuzz. IMO, some of the webapp fault-injectors are a class above the rest, and the ones that are more actively developed are nice, such as Arachni or w3af. However, stability can be a super differentiator, which is why many people rely on tools such as Burp Pro and techniques as seen in Gray Hat Python. The funny thing is that even these are not stable; you have to gauge their stability at handling any one particular task depending on a lot of factors. Knowing how to scale and do network/system performance work can really help here. I wish I used iftop more to see how fast any given brute-force, fuzz, or fault-injection tool is performing (and we all wish we could just test everything from localhost; not a bad strategy, since often RE’ing and making your own version of something in a lab is faster than testing it in a prod, dev, or test system). Ease of use combined with power is also nice. It shows in all versions of Metasploit and definitely in Burp Pro and Netsparker Pro. I fight the typical mindshare/mindset stuff usually, and perhaps I’m wrong about that, but Appscan, WebInspect, and Codenomicon just don’t do it for me when the free Wapiti and skipfish plus the cheaper Netsparker and Acunetix seem to do so much better with the testing grounds and real-world apps.
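
As a starting point for those fuzzers, here is a minimal dumb mutation fuzzer in the ProxyFuzz spirit: take a known-good message and flip a few random bytes. Real tools splice this in between client and server; this sketch just generates the mutated cases:

```python
import random

# Minimal dumb mutation fuzzing: randomly corrupt bytes of a valid message.
# Seeded so every run (and every crash) is exactly reproducible.
def mutate(data: bytes, n_flips: int = 3, seed: int = 0) -> bytes:
    rng = random.Random(seed)
    buf = bytearray(data)
    for _ in range(n_flips):
        i = rng.randrange(len(buf))
        buf[i] = rng.randrange(256)
    return bytes(buf)

baseline = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n"
cases = [mutate(baseline, seed=s) for s in range(5)]
for c in cases:
    print(c)
```

Reproducibility via the seed is the design point: when case number 37 hangs the server, you want to be able to regenerate exactly that case.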

Static analysis is straightforward: one loads Fortify into Eclipse for small-to-medium sized projects that involve almost all code except corner cases or unsupported languages. Languages like SFDC Apex are best done with Checkmarx, especially when there are just snippets of code and not a buildable system (good for cloud based apps). Knowing how to build the code is important. Know your Eclipse/Ant and VisualStudio/MSBuild well by building tons of code — easy to find in the world of open-source. Especially try large or difficult projects which you can find via ohloh.net. If you can’t get Fortify, then do your best with FindBugs and CAT.NET, perhaps Pixy or others. The C language stuff is straightforward if you are using VisualStudio — tons of resources out there. But for others, it’s probably ideal to try the Hacking Exposed Linux 3E way of using either Klocwork or Coverity, depending on your industry’s code types. Embedded/aviation/etc should probably be Klocwork and Enterprise is usually Coverity with tons of rigor being applied to these. Also see the SATE analysis stuff, especially if you are hitting C/C++ hard.

If you don’t understand the code parts, you’ll have to go towards it slowly over time as static analysis isn’t the best approach if you already can pen-test stuff well from a full-knowledge system style test (blackbox is a total misnomer here). I suggest learning how to do server configuration traversals matching inputs (e.g. URLs and parameters) to the XML config file, to the jar/container/etc file, to the objects/source files, to the line of code [LOC]. Then it’s a matter of understanding how to read code (try the Code Reading book) and understand a good security approach (try TAOSSA Code Audit Strategies, Chapter 4). There are good books on debugging and unit testing that may also be worthwhile in this study.

Once you get to the LOC, you can either use a Fortify-in-IDE to trace the dataflow, or alternatively, just learn how to do this work using things like the OWASP code review guide, searching millions of books at once on books24x7 and safaribooksonline, etc. Maybe it’s best to work with developers. Sometimes the code is in a stored procedure in a database that makes an exec call or similar — so the code may not even be included in the buildable containers. There is a problem with static analysis tools and even manual code review that will lead to “lost sinks.”

Their practical advice for non-devs in code review: Don’t.

More and more, I think non-devs should skip over the code review, automated partially/fully or not, and stick with understanding the framework (this assumes managed code). Managed code is awesome and the VMs themselves can be tweaked. Instead of just playing with Eclipse/Ant or VisualStudio/MSBuild warnings/errors/bug-finding plugins, one can jump directly to the underlying framework and change the VM (i.e. harden the VM like one would harden a Cisco router or Linux box) using a tool like Reframeworker (see the Managed Code Rootkits book). So then, a security dood can type up all of the mistakes his/her developer friends are making and tell the VM to spit out an exception, say, when a developer builds code on a system with the hardened VM and uses, say, Statement instead of PreparedStatement (and the other billion things to check for). This could potentially be a nice human-facing blacklist, much like the banned-functions list that you can throw into VisualStudio.
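
A crude source-level stand-in for that idea, for anyone without a hardened VM: grep the code for a human-maintained blacklist and print the safe replacement. The banned-API list here is illustrative, and this is much weaker than changing the runtime itself:

```python
import re

# Crude stand-in for the VM-hardening idea: scan Java source for
# blacklisted APIs and suggest the safe replacement. The BANNED map is
# illustrative; a real list would be much longer.
BANNED = {
    r"\bcreateStatement\s*\(": "use prepareStatement() / PreparedStatement",
    r"\bRuntime\.getRuntime\(\)\.exec\s*\(": "use ProcessBuilder with a fixed argv",
}

def scan(java_source):
    findings = []
    for lineno, line in enumerate(java_source.splitlines(), 1):
        for pattern, advice in BANNED.items():
            if re.search(pattern, line):
                findings.append((lineno, advice))
    return findings

sample = """Connection c = ds.getConnection();
Statement s = c.createStatement();
ResultSet r = s.executeQuery("SELECT * FROM users WHERE id=" + id);"""
print(scan(sample))
```

The VM approach is strictly stronger because developers cannot work around it at the text level, which is exactly the point of the next paragraph.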

Developers can work around these little intricacies, which changes the human factor up again. They can do lots of good and bad things. It’s good to know what they are doing, so you really have to be a developer in order to do this, or at least read their code and start to understand the psychology. Maybe you just need to interview them and learn a lot about liars and their lying (like that TV show with Tim Roth).

I like to take an architecture perspective because it’s totally high-level and lead devs (who like to talk about this stuff) grok it well. This is, again, usually only possible with OO and probably managed-code languages. It has to do with patterns. Patterns are certainly something that netengs/sysadmins or whoever else can usually get, and if not, at least the basics. One can dig pretty deep here. I have tons of ideas, but I’m not sure they will pan out. I think this is a huge area for excitement and improvement.

This leads me to the threat-modeling discussion, and threat modeling is really a horrible term. The BSI/Cigital approach is aligned with old-style OOAD, which I think has ultimately been replaced by DDT in the same way that CASE has been replaced by prototyping/refactoring, modern unit/component testing, and static analysis. They called it something silly like Software Architectural Risk Analysis.

Which brings me to BSIMM / OpenSAMM / Microsoft SDL, etc. This stuff is trash.

Replace the above with actual, real dev/QA improvements. I think appsec people should be adding appdev, apptest, and appperf value! It’s all about the trusted-adviser role and teams, as said before over and over.

One might be surprised how often it is, in fact, not said over and over in the contexts where it needs to be said.   Instead what is often implemented is a hamster wheel of testing: detecting a few defects, fixing the low-hanging fruit, certifying the resulting code product as “good enough,” and then moving on without actually improving the process that generates the flaws, usually while imposing an ineffective and expensive bureaucratic cost on development.   I hear what he’s saying, but it’s my view that one needs some kind of process to encourage developer maturity programs.

I’ve seen a lot of man-hours spent pushing secure development lifecycle efforts and development maturity frameworks to disapproving audiences.   Sadly, what I have seen happen more often than not is that it becomes not a developer maturity program that leads to fewer defects and a more efficient, higher-quality process, but a checkbox-driven recurring cycle of third-party code audits, threat modeling, and pen testing.   These can be useful, but only when performed at the right times in the development process, and they could easily be outperformed by an approach similar to what he describes.

One conundrum that is really unsettled is the exploitation and risk management stuff. I have my own opinions, but they match Aspect Security very closely. This stuff also takes experience, but experience with knowing the business, business risk, data breach notification laws and how they play out, auditors for various compliance/regulatory crap and how that plays out, etc. It’s really whacky stuff that is so custom to any org.

I’ll be writing up a risk management thinktank churn piece sometime soon going into this (again) in greater detail.

Destroy Your Infrastructure

The state of the industry is now, and has long been, driven by companies selling easy solutions to the incredibly difficult problem of meeting the conflicting goals of performance, usability, and security.   What the clued have known, and the unclued usually have not, is that there are no easy answers, turn-key appliances, or buttons to click to solve complex systems technology problems.

So how are these goals met?   How does this seemingly impossible and nebulous goal of having resources and datastores loose enough to use, but tight enough not to be freely available, managed and accomplished?

It usually takes systems thinking, a data-driven risk analysis process, and a fundamental, hard-won understanding of how things work at the many levels of interaction required to form an effective defense and environmental awareness.

I make this statement because, in my reasonable experience in these matters, most technology management is less about a perfectly arranged stack of software and technology.   It is more about the problems created by various distinct widgets that were deployed once, that no one sufficiently understands, and that are full of artifacts and bitrot.   These solutions have to be maintained as-is because of this lack of understanding of which unmanaged artifacts are important, which are obsolete, which are vulnerable, and which are superfluous.

Some recent examples of systematic failure:

  • Sony’s tragically negligent tale of being owned repeatedly.   If billion dollar companies aren’t getting this job right, and we think of valuation, revenue, and profit as parts of corporate success, who can we expect to have done this correctly?
  • RSA, maker of data loss prevention products, lost critical data to attackers, a loss later tied to breaches at their customer sites.   Shouldn’t they be the last place this would happen?   Total cost to clean up: $66M. That’s a pretty big technical-debt payout.

Lately, everyone appears to be prone to speaking about the usual lolhats and how everyone’s corporate security “sucks” while adding nothing constructive to the conversation.   It doesn’t help when security experts get caught using bad practices and weak credentials in disclosures.   People who have been paying attention over the last few years know that security hasn’t been improving quickly, as the primary targets have shifted almost completely into the app tier.

People have been perennially speaking about this problem in the same circular discussions that don’t yield improvements or action. We haven’t really gotten anywhere significant because these discussions are largely irrelevant to everyone.

Everyone?   Really?   Aren’t you exaggerating, Ian?

Sadly, no.   I don’t think so.

Developers are interested in making things that work and nearly universally have zero interest in being security experts.   That’s why they’re software developers.   If failing closed for security isn’t part of the design requirements, it will get little attention.   There has been a lot of talk about implementing fail-closed behavior in development frameworks, but I haven’t heard of any actual progress that is changing the landscape.

Non-technology people?   They don’t care at all because they have no concept of how these things work or how to gauge priorities because of this.

So what do we do?

To address some of these issues, I’m going to voice a simple operational, development, and infrastructure approach instead of another giant heavyweight framework.   There are way more than enough frameworks, methodologies, and religious beliefs out there already.

Developers have always been good about implementing new stuff, but they rarely clean out the old code, fragile dependencies, or dirty kludges that should never have been in the codebase/footprint in the first place unless there is a powerful alignment-focusing incident that brings attention and resources to address it.

Lesson:   Developers need more garbage collection and environment remodeling, and they need to do it with greater frequency, without first having a motivating catastrophe.

Operations and business-unit people who aren’t among the technocrat initiates tend to fear change because of past bad experiences.   Someone needs to go to one of those uber-executive retreats (or RSA keynotes) and give a sexy business-fashion pitch that frames change, and more importantly the acceptance of change risk, as worth taking to keep the enterprise cutting-edge and relevant.

Lesson:   Worry less about getting sued and more about doing the right thing. Share data, tell people about problems and how you’re working to fix them, disclose and be transparent to your customers, and foster trust by allowing people to make informed risk management decisions.   Many vendors and services organizations hide information in order to maintain a sense of brand, delay or disavow breach disclosures, or just plain lie about capabilities in order to make sales.

Hiding risk and technical debt

In order for there to be improvement, this kind of behavior can no longer be tolerated from firms both large and small.


There are too many examples (in tales both trade-secret and disclosed) of technology companies hiding the risks of their products running in their customer environments, failing to close critical reported vulnerabilities in a timely manner, or dismissing legitimate risks as “purely theoretical” to go into here.   There are a lot of reasons why this hasn’t been effectively addressed, and the cost sink of marginal value that is compliance efforts has taken the place of what was a good opportunity to introduce policy and visionary changes.

SDL/SDLC programs have the potential to improve environments and, when real changes can’t be made to correct development culture, they’re the next best thing.   DevOps has the potential to assist in synergizing (did I really say that?) and [re]coupling partnering org units.   Metrics programs can often be relevant and valuable.   However:

  • SDL often just gets reduced to a checklist or process nightmare timewaster when people don’t understand that it’s both a developer maturity training program and also a code quality assurance program.   Yes.   That’s what it really is.
  • DevOps (and other such rugged, agile, scrummish, ruggeddevops, or other buzzwords) often doesn’t have its ideal realized, as these movements are not especially prescriptive.   Additionally, everyone seems to have a differing view of how it should work.   I wonder if it is philosophically incompatible with most technology workers.
  • Metrics programs are hard to make useful.   The biggest reason for this is that people only share (public) data when they are owned or in the courts.   As per usual, this is because of the mythos of the Average Company.   The Average Company doesn’t exist.   What is good for this non-existent entity may not have any meaning for yours at all.   Data programs need to be relevant and meaningful.   If they’re not, they become a hamster wheel of pain for bureaucracy and the cult of toxic middle management.

Because these are complicated to grok, tricky to implement, and nearly impossible to align well with corporate kool-aid cultures, I have a different change prescription.

Blow it up.

We as practitioners have known for years what is required to secure things, do it cheaply, be agile in our practices, and often advise others on how to improve their efforts by identifying the root cause of systematic problems.   Let’s examine the persistent and entrenched problems that everyone keeps talking around without zeroing out.

Code artifacts and persistent flaws:

  • Persistent known issues, bitrot, and design shortcomings can’t continue to be tolerated as they are presently.   No vague philosophy is prescriptive enough to fix this problem.   It needs to be simple and readily implementable.   Core problem: huge unnecessary threat surface.
  • Infrastructure needs to be solid.   It needs to be the bedrock on which to build your castles.   If you can’t hire the best people to staff your departments, pay for the best consultants you can afford to give you a strategy and implementation plan.   A lot of my work in my career has been addressing issues of bad implementations that can’t be redone properly.   People have lots of names for this and ways of saying it so that it doesn’t sound like it was someone’s mistake.   Technology infrastructure is now the bedrock on which all first-world business occurs.   Treat it accordingly.
  • Development needs to fail closed on security problems.   Classical problems should be fixed.   In fact, they should have been fixed a while ago.   We did it with most buffer overflows; we should be able to do it with SQLi, cross-site scripting, assorted unvalidated inputs, and all the rest of the all-stars of the OWASP top ten.   We know how to do this already.   We just haven’t made it mandatory.
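
For concreteness, the known fix for SQLi referred to above is parameterization. A minimal sketch with Python's stdlib sqlite3; the table and credentials are made up:

```python
import sqlite3

# The known fix for SQLi: parameterized queries. The driver sends the
# value out-of-band, so it can never be parsed as SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (login TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_ok(login, password):
    row = conn.execute(
        "SELECT 1 FROM users WHERE login = ? AND password = ?",
        (login, password),          # bound parameters, never concatenated
    ).fetchone()
    return row is not None

print(login_ok("alice", "s3cret"))      # True
print(login_ok("' OR 1=1/*", "*/--"))   # False: payload is just a literal
```

Making this the framework's only way to touch the database, with string concatenation failing the build, is what "fail closed" means in practice.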

This is all old news.   Practitioners already know all of this.   IT workers should know it all first-hand.   We have failed as an industry to communicate these principles to our cousins in services and development and to our customers in the greater business community.   Because of this long-term failure, the government has decided to do it for us by establishing compliance goals and having some of us work to define them.   Compliance goals for the Average Company.   Compliance where the opinion of an assessor usually focuses on what is a “good enough” control.

For the uninitiated, this is only the tip of the iceberg.   The compliance rabbit hole is very, very deep and employs legions of people, most of whom you wouldn’t want around your infrastructure or guiding policy/practices.

So how do we fix this?   Easy.

Blow it up and re-deploy with a minimal footprint.

Madness?   Let me explain.

All modern environments have some or all of the following:

  • Disaster recovery plans
  • Virtualization and platform-centric management
  • Detached/decoupled (and usually redundant) data stores
  • A comprehensive and fine-grained approach to information classification
  • “Cloud,” distributed, or community resources and hardware

These things all have one common demand: they require a quick and painless method of action to function without significant error.   Deployment is often a kludge-packed afterthought.   Much like an insufficient design process (the root cause of most security problems), deployment should be fast, exactly repeatable, push-button easy, and most importantly, designed for success.

The solution to your persistent compromise problem

If the problem of easy, automated, and rapid deployment of least-privileged hardened technology can be solved well in your environment, the cost savings will be substantial.

There are a ton of products and processes that monitor and track changes.   They’re usually complicated, fault-prone, and generally give false confidence.

If it can’t be fixed, it should be retired.   If a process can’t be improved, a replacement should be created.

More importantly, people who love rigidity and inflexibility in a practice that requires dynamism need to find a new job away from technology.   It isn’t a place for people who want to put in minimal effort and collect a check.   If people/processes/technology can’t be made to accept change and adapt to a lean and efficient process that yields quality, replace them with ones that will.   The cost of keeping them around, if data were available to calculate the full cost and implied risk adoption, would be shocking.

Coming to a network/appstack near you

This renewal process should be built not just into operations and code development, as all the currently fashionable management and inter-management process philosophies focus upon, but applied comprehensively to clear away the relics, dead weight, and ineffective process causing tech debt.

Adoption of turnover, especially in edge and virtualized environments, should be easy.   The costs of not putting it into practice are too high to ignore any longer.

Turning their game around

With the rampant astroturfing happening around Facebook’s upcoming IPO, I decided to have a celebration of my own; burning all of my data.

This week, I’m deleting my Facebook profile and all of the years of associated data.   So if the upcoming FB valuation is estimated at $94B and there are 425 million monthly active users, clearly my data should be worth about $221 to investors.

I’m sure someone will be trying to short FB on the way down back to cruel cruel reality when venture cashes out to whoever, but it won’t be me.   I’ll just be content with burning my $221 of data.

Ali Meshkati had this to say in Forbes:

The S-1 filing and 99% of articles you have read should be crumpled up into a paper ball and thrown at your children when they don’t listen or your neighbor when he tells you a joke that isn’t funny. Allow your mind to run in directions that would cause any “rational” person to question your judgment and the truth about a company like Facebook will become apparent.

[Image: facebooktrolling. “Just kidding. You have no friends. Thanks for your email!”]

What about accessing the walled garden of data that other people are keeping there? What if you have to work with the platform in someone’s assessment or marketing campaign fan page?

You can make a new blank profile, not contribute any user data, and laugh at the little psychological games that FB plays in attempts to wheedle your data from you.

What about pulling down your friends contact information to your phone and personal contact cloud via Facebook sync?

Oh, I see. They stopped playing with the other kids on the block as of 2011. I guess there’s no need to run Facebook mobile on my Android device anymore either, as there isn’t any real data exchange.

Too much data promiscuity.   Too little cleanup.

I hope that everyone cashes out before the dumb money figures out that the party is over before it started.

Legendary Clue

I keep thinking about a conversation that I had this last December.   They described the people who ran their servers and infrastructure well as never having existed, or, if they did, as existing no longer. The person I was talking to called them “old school legendary ninjas.”

They:

  • Ran stable systems with high uptime.
  • Logged events centrally and paged/emailed themselves on things that were relevant to preserving the integrity of the service before things failed.
  • Monitored for uptime, service availability, and weirdness.
  • Understood each component of the systems that they created and maintained.

I know that they were around, because I was one of them. We talked to each other in a variety of nerd-only ways most of which are no longer in common usage today.

I viewed the list above as the base standard for competence in performing the role of whatever title people have picked for their technology manager. It blew my mind that people would sit around and wait for a telco (or worse, their customers) to notice that their leased line or service was unavailable before springing into action to correct the problem. I had the direct contact numbers for all of the top technical support tiers that serviced me in my role. When problems arose, I called them so that things would work correctly, hopefully without anyone else noticing that an outage had taken place. The telco frame and private lines performed poorly (and often still do). Since the telcos never fixed anything the right way, and I knew from past experience where the routinely broken pieces were to be found, it was faster for me to call and troubleshoot the issue with them than to leave them to their own processes.

Knowing how everything worked was not an exceptional practice. This was normal.

Now everything is going to big data cloud environments. ISPs either need to have a cloud offering or are looking to close up shop and retire.   For the mass market, this is where demand has gone.

Perhaps not so surprisingly, standards of service are nothing like they used to be. Outage windows can appear in the middle of the day and stretch for hours. Data providers offer unstable links and randomly create outages through staff who don’t know, or don’t care, that they are causing problems while going about their routine activities.

The “who cares, it’s good enough for our partners and customers not to fire us” business sufficiency principle is in effect. They’re not endeavoring to be better than their peers; they just want to avoid sucking more than them so that there is nowhere else to go.

I don’t think many realize how common departing the business really is. A lot of people leave and do one of two things:

  • Opening a bar, restaurant, or other totally predictable and reliable business model (most common referenced example: jwz)
  • Getting involved in the next hyper-valuation bubble (almost everyone with a current social media startup)

None of this tolerance for faulty and unreliable service offerings was acceptable then, and it shouldn’t be now. Most of our enterprises are adopting hyper-complex service offerings that later become untenable because of their layered nature and artifact-bloated old code and dependencies. The agitators and those wanting to effect good change in their organizations need to be ready to confront these problems head-on and to smash these problematic stacks and silos Shock Doctrine style when the next crisis arrives. You won’t have to wait long for your next catastrophe. If your ability to detect breaches and downtime is functional, and you have the necessary metrics to capture and quantify them, your way out may be just around the corner.

There are plenty of excuses for an inability to effectively execute change, and that inability is often the real reason things go badly. I would characterize the real cause as an inability or unwillingness by the organization to adapt and adopt change effectively, as it is a lot easier to assign blame for incidents and calamity after the fact.

It’s not like we don’t know what is wrong, what to fix, or how to fix it. It’s really a question of many making an active decision that service quality is not worth the investment.

One thing that millennials may add to the business climate when they come of age and take the reins of leadership is a tipping point of people who don’t try to separate business risk from technology risk and understand them to be one and the same.

Hopefully we won’t have to wait a couple decades more for this to happen.

Talks with a Ninja About Gold and Scalawags

Disclaimer: I have not been provided any inside or NDA’d information on any gaming platform in any form.   All of this is from public information and my own conjecture and professional experiences.

I’m telling this story for a few reasons.   First, I was able to be a little clever in my approach and since I can share it, I thought that I would.   Second, it’s a really good example of how I approach problems and also an example of how some highly competent people who are very close to problems that they have worked really hard on behave and, at times, an external take on it can be revolutionary.   I got super excited about doing this job.   I thought: “All my usual nerd skills plus risk management for in game economies? Too cool.   Let me look into prep for this.”   This is some of what I came up with.

We have a generous benefits package

Enter The Ninja

Some time ago, I was interviewing with one of the serious ninjas of the gaming industry. I gather that most people in games have heard of him, and everyone who plays games knows his work. The success that the teams he led demonstrated in yielding profitable game franchises is nearly unparalleled.

I mentioned to him that I’ve played his games, praised his previous work, and said that I admired one of his presentations, which I was able to review earlier last year, in which he presented ways he had worked in the past reviewing attacks and deterrents against code-based cheating and other attacks used mostly in massively multiplayer online games. I hoped that I didn’t sound like I was sucking up overly much.

Having researched his presentations and LinkedIn references, I felt that I had some idea of his skill base and a vague sense of his perspective when it came to preventing and mitigating attacks on these gaming platforms. After all, it is pretty hard to give useful advice or an interesting proposal if you don’t have any idea of the interests and nature of your audience.

I surmised from the position description that the rockstar hiring manager was looking to offload responsibility from himself so that he could focus on other matters as a visionary for his game that was coming into focus. He had previously been performing these deterrence and mitigation tasks himself and was likely looking for someone similar to him to carry on his efforts in this area with a comparable methodology of pure code audit. My proposal, instead of focusing on a pure code-based strategy as he would, focused on analytics, information gathering, basic deterrence in reducing the easy attack surface and classical targets, and attacking the bread-and-butter of the profitability of the attack itself: the sale transaction. The premise I was arguing was that if you can make the trade situation unprofitable for the gold farmers and thieves that are drawn to large popular online economies, they will go elsewhere for less protection and easier profits, just like the rest of the internet’s criminals. You don’t always have to outrun the wolves, just be faster than most of the herd. If you are fortunate enough to be wildly successful, additional resources can be found to meet these latter challenges and cover exposures when revenue makes it possible.

Global organized crime, corporate espionage, and the endless complexity of layered third-party solutions are often present, expected, and accounted for in enterprise environments. I figured the challenges of a nearly pure in-house development, fat-client, all-server-side-controlled environment to be easier than most I’ve been tasked with finding solutions for, and I was upbeat about it.

When you are able, gathering intelligence on the behavior of your professional adversaries is an advantage that should not be missed. Sun Tzu and all that.

Virtual Economies

Virtual economies are becoming more and more interesting to people other than fun-seeking gamers as they begin to be considered real-world assets instead of worthless imaginary online trinkets. Since they might be treated as normal assets, the IRS is becoming increasingly interested in them. More and more, these online realms are oddly starting to resemble commodity and property exchanges. It makes the kind of work that the ninja and I were speaking about all the more interesting once you understand this detail. What are large, highly valued gaming environments today? What will they be tomorrow, and what should these companies be planning for in managing them?

So how can two people who are clearly doing pretty good work respectively come to different conclusions about how best to meet these challenges? It’s really just that different experience and insights lead to different approaches. I haven’t come from a hardcore development background comparable to many of those I talk to and work with in common environments and with shared stakeholders. Most of my work has been in systems management: managing operating systems, information security implementations, tuning, and management and maintenance strategies. In short, I bring things together and make them work. I had a different perspective than he did when it came to focusing on the attacks that he was most interested in deterring. He went to his core competence, code audit, and I went after mine: a systematic approach with the intent to derail the opponent’s ability to accomplish their goals.

Naturally those who run these gaming operations aspire to write perfect code and prevent or deter misuse that takes away from the profitability or enjoyable experience of the game for themselves and their users.   They, like everyone else, don’t generally speak about the specifics of how they try to accomplish these goals; secrecy has value here.   However, one can guess based on the actions of the most successful MMO, World of Warcraft, that the same lessons will likely apply to other MMO efforts.

The consequence of neglecting the in-game economic factors of gold selling seems to be inflation.

Blizzard implemented two-factor authentication and very nearly gifted the hardware tokens to their users. They subsidized the cost of pseudorandom two-factor authentication for their subscribers because the online assets of Blizzard were valued at over $12 billion (at the time, a couple of years ago, and sure to be a greater number now). It only makes sense that that amount of value would be worth protecting with an investment in authentication threat mitigation. For a game that is just about to be launched, however, this would be a difficult investment to justify immediately, as the total in-game assets are still close to zero.
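For the curious, those hardware token codes are typically just an HMAC computed over the current time slice. Here is a minimal sketch of the standard RFC 6238 TOTP algorithm in Python; this illustrates how such tokens work in general, and is not a claim about Blizzard’s actual implementation:

```python
import hmac
import hashlib
import struct
import time

def totp(secret: bytes, interval: int = 30, digits: int = 6, now=None) -> str:
    """RFC 6238 time-based one-time password: the kind of short-lived
    code a hardware authenticator token displays."""
    # Count how many 30-second intervals have elapsed since the epoch.
    counter = int((time.time() if now is None else now) // interval)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    # Standard dynamic truncation: pick 4 bytes at a digest-determined offset.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because both sides derive the code from a shared secret and the clock, the server can verify a login without the token ever going online, which is what makes cheap offline key fobs practical at subscriber scale.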

Gold Farmers

Back in the good old days when massively multiplayer online games were new, offerings like Ultima Online had interesting problems in their game environments. Things like duping were possible through analysis of the network stream or attacks on the application memory itself. In the 90s, people were able to duplicate the best items in the game without having to earn them: after seeing an item picked up or dropped, they identified that the network traffic could easily be replayed to the player’s advantage. Just by injecting traffic into their appstream during play, it would be accepted as valid and the items generated out of nothing. Add in the ability to macro up a massive number of these items during the lifetime of the bug and cache them away for after the bug window closes and prices readjust, and the implications for the game economy are serious. I assume that hand-picked game devs will know and understand game problems such as duping and why they should be avoided by not trusting clients and by having trust perimeters and countermeasures to keep the game honest.
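The classic countermeasure to that kind of replay is to make every message single-use. Here is a minimal sketch, purely illustrative; the message format, nonce store, and HMAC scheme are my own assumptions, not any real game’s protocol:

```python
import hmac
import hashlib

SERVER_KEY = b"server-side secret, never sent to clients"

seen_nonces = set()  # in practice: a bounded, per-session store

def sign(message: bytes, nonce: bytes, sequence: int) -> bytes:
    """Server-issued MAC binding a message to a one-time nonce and a
    monotonically increasing sequence number."""
    payload = message + nonce + sequence.to_bytes(8, "big")
    return hmac.new(SERVER_KEY, payload, hashlib.sha256).digest()

def accept(message: bytes, nonce: bytes, sequence: int, mac: bytes,
           expected_sequence: int) -> bool:
    """Reject anything forged, replayed, or out of order."""
    if not hmac.compare_digest(sign(message, nonce, sequence), mac):
        return False            # forged or tampered with
    if nonce in seen_nonces:
        return False            # exact replay of an earlier packet
    if sequence != expected_sequence:
        return False            # captured earlier and injected out of order
    seen_nonces.add(nonce)
    return True
```

Under this scheme, a captured “drop item” packet replayed later fails both the nonce and sequence checks, so duplicated items never materialize regardless of what the client sends.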

When real money and recognition come to games like this, and I think they will some day, there will be APTs in games just as there are in other, more contested electronic mediums: pharmaceuticals, trade and state secrets, illicit markets, and everyone’s personal favorite, anarchist cyberpunk types who don’t care and do what they want. These corporate espionage people may take advanced measures like infiltrating the businesses that feed off of theirs via lawyers or more clandestine methods. Most of the world seems to employ a very can-do attitude and a loose interpretation of business ethics. I’ve heard off-the-record stories from a couple of people telling of the havoc that disgruntled and motivated individuals have thrown down in some game environments, basically trashing the whole operation. You don’t want to be in that situation.

The Competition

There has been some academic inquiry into the behaviors of these black/grey market commerce cabals, and the results point to these groups exhibiting behaviors that would normally be seen in drug trafficking organizations and money launderers. The statistics seem to clearly point to a few key findings (and their charts and graphs are pretty amazing):

  • The common stereotype that most Gold Farmers are Chinese is correct. In fact in EverQuest 2 more than three quarters of all Gold Farmers are Chinese.
  • It is possible to identify and construct a set of attributes of Gold Farmers which can be used to build machine learning for automatically detecting Gold Farmers.
  • Social Network Analysis plays a crucial role in identifying Gold Farmers.
  • Criminal networks in the online world (Gold Farming Networks) behave in a manner similar to criminal networks (Drug Trafficking Networks) in the offline world.
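The attribute-based detection the research describes could be prototyped with nothing more than a handful of behavioral features and a threshold score. A rough sketch follows; the feature names, thresholds, and weights are all invented for illustration and do not reproduce any real study’s model:

```python
# Illustrative behavioral features; (threshold, weight) pairs are made up.
SUSPICION_WEIGHTS = {
    "hours_online_per_day": (16.0, 2.0),    # near-continuous play
    "trade_partners_per_day": (30.0, 3.0),  # fans gold out to many unrelated accounts
    "chat_messages_per_hour": (1.0, 1.0),   # scores when BELOW threshold: bots rarely chat
    "path_repetition_ratio": (0.9, 2.0),    # same farming route over and over
}

def suspicion_score(features: dict) -> float:
    score = 0.0
    for name, (threshold, weight) in SUSPICION_WEIGHTS.items():
        value = features.get(name, 0.0)
        if name == "chat_messages_per_hour":
            if value < threshold:   # silence is the suspicious signal here
                score += weight
        elif value > threshold:
            score += weight
    return score

def flag_for_review(features: dict, cutoff: float = 5.0) -> bool:
    # Flag for human review, not automatic banning; tuning out false
    # positives first is the whole point.
    return suspicion_score(features) >= cutoff
```

A real deployment would feed features like these into a trained classifier and fold in the social network analysis the researchers found crucial, but even a crude weighted score like this gives analysts a ranked queue to start from.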

Here is an example of a programmable, customizable, and flexible framework for gold farming and TOS-violating advantage. I grabbed it randomly as an example and removed the name, as there are always more than a few out there at any given time. Let’s look at its features:

Multi-Boxing
One of $PROGRAM’s biggest strengths is its ability to control multiple programs on multiple computers. This means you can automate other game characters on other computers very easily. $PROGRAM can control the programs in a fully automatic or improvised way, depending on how you’ve configured your file. $PROGRAM controls the other computers by sending them commands and information directly through your Local Area Network.

Macro Recordings
The easiest way for a macro user to create a routine that is specific to them is for them to simply record it. You can prompt a user to record a certain routine and then you can use that routine in your macro. For instance, you can prompt a user to record the keys they should press when their game character should heal. Then you can use that recording in a healing bot.

Pixel Monitoring
The current line of monitors were made with the gamer in mind. Want a healer to heal your group when needed? Well then, simply monitor pixels on your group’s health bars, using pixel monitors, and when a certain pixel turns from green to black, configure the pixel monitor to start a routine that will heal the person.

Text File Monitors
Want to navigate your MMO character to a certain location? Well, you can either navigate by finding your current location through your log file (using a Text File Monitor) or you can smoothly monitor your in-game location by grabbing it from your game’s memory (using a Memory Monitor)! Log file monitoring can provide some of the best means of automation as well, if your game uses live log files. By monitoring your game’s log file you can create damage-per-second statistics readouts, notifications of messages to you, notifications of when you need to recast a spell, etc.

Memory Monitoring
Using Memory Monitors is probably the most difficult part of $PROGRAM, yet potentially most powerful. $PROGRAM can monitor your game’s memory and look for certain information like health, in-game location, etc. This information can then be used when creating your routines. With that information you could potentially create navigating bots, harvesting bots, etc.

Purchase Information
Once you purchase the $PROGRAM program it’s yours to use forever. Your purchase also comes with a 1 year subscription to available MMO Bots, $PROGRAM updates, upgrades and forum access. More subscription time can be purchased at a later date.

Many high quality MMORPG bots are available here as well. You can find these in the forums. These are available to use while you have an active $PROGRAM subscription.

$PROGRAM does not require a constant connection to our servers while you use it. You only need to connect to $PROGRAM’s servers when you’d like to upgrade the program by downloading the newer version. Make sure to frequently check back with $PROGRAM to see what new features and new bots have been added!

So really, what this offering allows, for a very low number of dollars, is the ability to perform idealized and programmed tasks in a way that mimics typical user behavior, with a low-cost subscription model that lets farmers farm in complex scenarios and maintain profitability.

If these games were p2p and treated every join as a trusted resource (see people playing CoD on modded Xboxes with aimbots, for example), there wouldn’t be a source for analytic data.

The Way Out

The only effective way [IMHO] to address threats in an environment where anti-behavioral countermeasures are directly ineffective is by identifying statistically suspicious behavior, investigating to determine what percentage is false positives, and, after an acceptable tuning phase, imposing penalties. The online poker communities have been all over this in their ability to detect statistical outliers and automated behaviors on their players’ consoles, sometimes even pushing the envelope to the point where the online poker venues get accused of using spyware on their players. When a player is caught cheating in online poker, they simply take all of their money. Real money. Not the perceived value or arguable dollars of a rare item, in-game gold, or other server-side bits of data with a street value, but cash-you-can-spend-anywhere money. Mortgage payments. Rent money. Way more serious dollars than EULA-defined things declared to have no value.
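“Statistically suspicious” can start very simply: humans are noisy and scripts are metronomic. A toy sketch of one such outlier test, where the metric (coefficient of variation of action intervals) and the cutoff are my own assumptions for illustration:

```python
import statistics

def too_regular(intervals_ms: list, cv_cutoff: float = 0.05) -> bool:
    """Flag an action-interval stream whose coefficient of variation
    (stdev / mean) is implausibly low for a human player."""
    if len(intervals_ms) < 10:
        return False  # not enough data to judge
    mean = statistics.fmean(intervals_ms)
    if mean == 0:
        return True
    cv = statistics.stdev(intervals_ms) / mean
    return cv < cv_cutoff
```

This is exactly the kind of check the poker sites run server-side: no client-side spyware is needed when the timing of every action already arrives at your own servers.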

This is just a simple example of what happens without the principles that security experts have been trying to teach developers for a long time: that trust modeling and threat modeling are distinct and important design concepts. In short, and as a rule, client-provided data cannot be trusted. By extension, you cannot trust the workstation, network, memory, or obscurity without controls, or at least replay protection, at a minimum. The developers in days of yore thought that there would be no way any player would capture their game’s network traffic and be clued enough to analyze it and inject it into their own appstream. In short, they trusted the application layer when there wasn’t anything there to make it a difficult challenge to beat. They also trusted that a player character could not drop something that the game environment should know they didn’t possess. However, this was not the case: a player could drop anything for which they had previously captured data and inject it back into the stream for their benefit.
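The “don’t trust the client” rule is concrete: the server, not the packet, is the authority on what a character owns. A minimal sketch of server-side drop validation, with a data model invented purely for illustration:

```python
class InventoryAuthority:
    """Server-side record of ownership; the client's claims are never consulted."""

    def __init__(self):
        self._owned = {}  # player name -> set of item ids

    def grant(self, player: str, item_id: str) -> None:
        """Only server-side game logic (loot drops, trades) calls this."""
        self._owned.setdefault(player, set()).add(item_id)

    def handle_drop(self, player: str, item_id: str) -> bool:
        """A drop request succeeds only if the server already knows the
        player holds the item; a replayed or forged packet naming an
        item they don't possess is simply refused."""
        items = self._owned.get(player, set())
        if item_id not in items:
            return False
        items.remove(item_id)
        return True
```

With ownership held server-side, replaying a captured drop packet twice fails on the second attempt, which closes the dupe described above by construction.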

A staff meeting

Because for some, hacking the game is more fun than playing the game.   Real world money is just a way to keep score.

All this is an example of things which should already be known to the hand-picked staff of awesomes that the ninja had working for him. I felt pretty silly making suggestions, knowing that the simple concepts I was throwing down had likely been worked out in intricate first-hand detail previously by the people with whom I was speaking.

Mindset Methodology

I try to look at the whole of the issue and determine which challenge is easiest, and by easiest I mean addressable with the least amount of time and monetary expense. In short, I want to solve the easiest problems first and then retool and approach the longer-term, more difficult challenges thereafter when there’s no discovery or planning stage available. Often we don’t get ideal circumstances to make our contributions, and letting perfect be the enemy of good can produce paralysis and catastrophe. It’s all about taking your best shot and doing the most good you can with what you have.

A risk manager should always be pragmatic, but if there isn’t time to properly analyze risk before working within a small budget or a tight schedule, I usually find it most effective to just address the easy things first and the harder things later. The latter may require larger changes to address than many initially understood to be an acceptable tradeoff.

The Times Have Changed

The days of irrevocable cash transfers have been over since the Secret Service raided e-gold many years ago. People in the world don’t like ransomware and 419 schemes, and they will reach out and touch you. Since meta-cash transactions mostly follow credit card rules, with the exception of some fairly uncommon, very limited-use gambling money-hiding platforms, credit-type transactions can be reversed for 30 days, what the credit industry calls a chargeback. If the objective is to stop the sale of game currency or assets (so-called gold farming) or to limit account hijacking, the most effective mechanism seemed to be to attack the profit stream directly and roll back account changes when data indicates that violations or unauthorized account changes have taken place.

So my suggestion, if this were to be my gig, was a metrics and data program analyzing transactions to find offenders and then rolling back the transactions, destroying the profit motive for the people lessening the value and game experience for both the game developer and the userbase in general. After that, look to find which areas were actually being attacked and exploited in ways that damage the platform. With that knowledge, focus remedial efforts in those areas. Add to this a scoring methodology to gauge the success of the program, and then review those scores to determine if a refocusing of efforts would be appropriate.
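In outline, that rollback-and-score program could look something like this. Every name here is a hypothetical placeholder for illustration, not a description of any real billing system:

```python
from dataclasses import dataclass, field

@dataclass
class Transaction:
    txn_id: str
    seller: str
    buyer: str
    gold: int
    is_reversed: bool = False

@dataclass
class Ledger:
    transactions: list = field(default_factory=list)
    rollbacks: int = 0

    def record(self, txn: Transaction) -> None:
        self.transactions.append(txn)

    def roll_back_offender(self, seller: str) -> int:
        """Reverse every outstanding sale by a flagged account,
        destroying the profit from the farming run."""
        count = 0
        for txn in self.transactions:
            if txn.seller == seller and not txn.is_reversed:
                txn.is_reversed = True
                count += 1
        self.rollbacks += count
        return count

    def rollback_rate(self) -> float:
        # The program's score: what fraction of trade volume is being
        # reversed? A falling rate over time means the deterrent works.
        if not self.transactions:
            return 0.0
        return self.rollbacks / len(self.transactions)
```

The `rollback_rate` metric is the “keep score” piece: track it per release and per region, and refocus effort wherever it stops falling.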

One should always keep score to know where the best decisions were made.

Something like this, I thought, would be in keeping with some of the better thought patterns I have encountered in my professional life: recurring self-review, willingness to change, doing what works, not allowing the perfect to be the enemy of the good, and knowing one’s opponent. I thought this would be a fairly compelling and successful method for addressing these challenges.

I didn’t get the project this time.   I would like to think it was because my approach was too contextually unconventional.

I was not able to put any of these thoughts into practice for this particular client, but it was an interesting theoretical exercise for me and left me with a lot to think about, including how portable some of these concepts can be. I’m sure that there are a few people out there with this job at various companies who are having a great time with the challenge of putting it into practice.

My favorite concept from this conversation is the idea that the best coder wins. “May the best code win!”, like some kind of divine-right-settled-by-combat concept. Sadly, it’s often much easier to be an attacker than a defender. In the case of an MMO, the entire gaming environment is hosted server-side, so really there is no limit to the amount of analytics and detection methods that can be employed without any awareness by the attackers. I would think this would dovetail into a discussion of appropriate application instrumentation, but that’s pretty far outside my realm.

Parting mentions
(because I took so long to publish this draft)

  • It appears that Blizzard has lately begun to employ the tactic of targeting the transactions by using lawyers.   I guess someone else thought that was a good idea.
  • I still don’t plan on reading any Doctorow.   He doesn’t write for me.
  • If you would like to learn more about Paypal and how they deal with fraud challenges now that they realized that they needed to care about it, give Ohad Samet’s talk a look: