Risky Business

If I were asked what kinds of things keep me up at night, I'd probably say: too much caffeine during the day, SCADA software, and web browsers.  The first two are easy to explain, but web browsers are a different beast entirely.  They're just so complicated and feature-rich!  Between the plugins and core components, they create an attack surface that is increasingly difficult to identify and protect, but it's really not their fault.  While it's easy to pick on web browsers for the constant stream of vulnerabilities, exploits, and patches, they simply highlight a growing trend in software in general: it's getting more and more complex.

Compare the applications of 5 years ago to the applications you use every day.  Applications that were once relegated to the desktop are now being turned into cloud offerings, or ship with a plethora of options and libraries that you will likely never use but that an attacker would be more than happy to use against you.  It's a concept I try to communicate whenever possible: acquiring software means acquiring risk.  Particularly in the corporate world, we're acquiring fancy software at an alarming rate, and sadly, it's generally not held to the standard it should be.

A common practice in many corporate appsec shops is to identify code that will be put online, assess it in QA before it gets pushed to production, identify vulnerabilities, patch, validate, and off it goes.  While there are plenty of flaws in how this process is generally implemented, that's not what I want to use my soapbox for.  The reason web applications are tested (you are testing your apps, right?) is that even tiny applications dramatically increase the attack surface of a given environment.  The obvious concern is that vulnerabilities may exist that could be exploited by a remote attacker, and that's bad.  But what about all of the other applications used in a production capacity that aren't tested?  I'm talking about desktop software that you rely on daily, maybe something like SPSS, or software "byproducts" such as components that aren't called out as separate applications (I'm looking at you, ActiveX).  In situations like this, you may be installing software that you aren't even aware of, and that creates an exploitable blind spot.

Here's one example that comes to mind.  In another life, I was performing an application security assessment on a web app that a different organization had already tested.  They found a few bugs, and I was asked to perform validation testing.  While the vulnerabilities in the web application itself were remediated, nobody had noticed that the web application prompted you to install an ActiveX control in order to use all of its functionality.  As luck would have it, the ActiveX control was marked safe for scripting, and off to the fuzzer it went.  300 test cases later, we had identified a heap overflow that an attacker could exploit to execute arbitrary code in the context of the currently logged-in user.  So, the web application wasn't exploitable, but its users were.  Which is more valuable to an attacker isn't my position to say, but I'd place my money on exploiting the ActiveX control.  The sad part is that nobody had thought to include the ActiveX control as a separate component, since it was just "part of the web app."  There is no substitute for assuming new software is dangerous; it most likely is.
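For readers who haven't seen one, the fuzzing step above can be sketched in miniature.  This is a hypothetical dumb mutation fuzzer, not the actual tool used in that assessment: mutate a known-good input a few bytes at a time, hand it to the target, and record any input that makes the target fall over (modeled here as a raised exception).

```python
import random


def mutate(seed: bytes, rng: random.Random) -> bytes:
    """Return a copy of `seed` with a handful of random byte flips."""
    data = bytearray(seed)
    for _ in range(rng.randint(1, 8)):
        pos = rng.randrange(len(data))
        data[pos] = rng.randrange(256)
    return bytes(data)


def fuzz(target, seed: bytes, cases: int = 300) -> list:
    """Feed `cases` mutated inputs to `target`; collect inputs that crash it."""
    rng = random.Random(0)  # fixed seed so runs are reproducible
    crashes = []
    for _ in range(cases):
        sample = mutate(seed, rng)
        try:
            target(sample)          # a real harness would drive the control here
        except Exception:
            crashes.append(sample)  # save the input for triage/minimization
    return crashes
```

A real harness would invoke the control's scriptable methods with each sample and watch for an access violation rather than a Python exception, but the loop — mutate, deliver, detect, save — is the same.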

On another occasion, I was assigned the task of assessing the corporate image one company installed on all of its laptops.  The company had written a tool in .NET that was designed to automatically configure printers for employees without requiring elevated privileges.  After about 20 minutes, we had extracted a domain username and password from the binary and discovered a local privilege escalation vulnerability in the printer utility.  Both are incredibly valuable from an exploitation perspective, and better yet, these bugs were baked into every corporate image!  Again, nobody had bothered to assess this software because it was "only an internal application and it's not even on the web."

So where am I going with this?  Application security isn't just web application security, and software doesn't have to be internet-facing to be exploitable.  Don't assume that attackers don't care enough to find out what specialized software you use and turn it against you in an attack.  If you're only looking at your web applications, you're doing it wrong.  Just because an application isn't hanging off the internet doesn't mean it's offline or disconnected from untrusted sources, and that's a very naive way to decide which applications are or are not in scope.  Furthermore, you cannot assume that 3rd parties have taken the same care in securing their software as you would.

So, if you want a takeaway from this post, something you can go to the office on Monday and start building off of, here it is.

  1. Test 3rd party code as though it's your own.
  2. Trust 3rd party code as though someone else wrote it.