I recently completed a gig with a client who was suffering from data overload.
The task this particular client wanted to solve was how to monitor the status of their 100,000-plus publicly facing systems. You know the kind: something most people never give a second thought to, yet interact with on a daily basis. Money storage devices, aka point-of-sale systems.
Today's cash registers are basically specialized computers with some sort of human interface; a feedback mechanism (fancy way of saying monitor); serial ports to support peripheral devices; memory; a mass storage device; applications and services running on a non-proprietary operating system; and the crème de la crème, the "best" part, the avenue of attack most taken: a network interface of both the wireless and wired variety. For this client, there were over 100,000 such devices across thousands of stores.
To answer the question that just popped into your head: yes, they've been monitoring everything. And given all of the data breaches (for those of you who need facts and figures...) they had been hearing slash reading about, they naturally asked themselves, "Are we next?"
But after re-reading the last paragraph, the truth just popped out. Part of the problem was the sheer volume of information that was blinding them. (Okay, I know. Duh!)
In this case, the client took my advice. Instead of sifting through every line of every log file looking for something that isn't clearly defined, they took the road less traveled and went with whitelisting. The goal now was to detect if a certain value ever changed from true to false; the solution that was architected was designed to prevent the evildoers lurking in the shadows from ever exposing the PAN (including the PIN). And according to the vendor, that is impossible.
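In the abstract, that kind of whitelist check is simple. Here's a purely illustrative PowerShell sketch; the log path, the JSON format, and the "Protected" property are all made up for this example and have nothing to do with the client's actual setup:

# Illustrative only: read a status feed and warn whenever a whitelisted
# value has flipped from true to false. All names here are stand-ins.
$devices = Get-Content -Path 'C:\Logs\device-status.json' -Raw | ConvertFrom-Json

foreach ($device in $devices) {
    if (-not $device.Protected) {
        Write-Warning "Device $($device.Id) reported Protected = false"
    }
}

The point isn't the code; it's that testing one well-defined condition beats fishing through millions of log lines for something you can't describe.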
But my blog today isn't about their tech or me spilling my guts about any intimate details that are still bouncing around in my head. What I can tell you is the thought process.
First, if you don't understand, get, or practice threat modeling, start doing it. There are a lot of definitions and how-tos out there on the Internet; Google turned up 2 million results in just under 40 milliseconds. Some are great at providing very granular definitions and methodologies, and some make me feel frustrated by how vaguely and obscurely they attempt to describe it. That's my not-so-subtle way of saying they make something more difficult than it needs to be.
My methodology is based on the multitude of hours spent in my early years working with electronics engineers (and in the following years as a security architect). And in the spirit of keeping it simple, I define my methodology simply as DPD (Data, Process, and Documentation).
The reason I've broken it down to these three key points is because it's simple. Define what you are trying to protect, why it is valuable, and how it can be peeked at, stolen, or destroyed, and, most importantly, make sure the parties involved are on the same sheet of music. Understanding the "Data" (i.e., whatever we're trying to protect) helps me define how far I need to go in order to protect whatever it is. Then comes process, which can get a little more involved, but basically this provides the context of who, what, where, when, and how the data will be used: controlling who has access, following the principle of least privilege, and making sure there are appropriate detective and preventive controls in place. The documentation is the linchpin of the entire methodology.
Ideally the initial documentation would come from the project team themselves. This enables me to understand what the proposed solution is all about. If it isn't already described, I work with the project team to determine the ebb and flow of how the informational asset is to be processed, where it will be stored, what type of interaction is involved, and of course the level of human involvement.
The next step is to define the different ways the processes and interactive entry and exit points can be manipulated into revealing, or maliciously or unintentionally modifying, the informational assets, and what controls, if any, will be in place to detect and prevent nefarious activities. Documentation also helps to solidify what was agreed upon at the time we worked on it, and it acts as the communication conduit to other business lines, as well as the supporting IT groups, for what it is we're trying to accomplish. (Now we're knee deep into my previous blog about security architecture.)
Kind of went off on a tangent there, but the point is: don't underestimate the importance of threat modeling. Not all of us can just wing it and hope for the best.
Another seldom-talked-about approach is knowing what you are trying to solve, especially when you are looking at the sheer volume of data that is available. But I'm catching myself this time by not going into how to tackle the problem without first defining it. For this client, the problem was first to pick apart the question: "How do I know the vendor's solution is actually doing what they said it's supposed to do?"
By defining the problem statement, coming up with a solution that was both actionable and achievable became that much easier. And yes, I did actually come up with a solution. Unfortunately, I cannot reveal what it is because of the NDA that is in place. But what I can tell you is what tool I used to solve it.
The answer: PowerShell. Yup, the same one Microsoft developed to help administrators manage their (headless) Windows environments. Also the one everyone is talking about at all of the security conferences, such as DerbyCon, but in a more dastardly sort of way. Penetration testing tool sets have been built entirely on PowerShell, such as Empire, Nishang, and PowerSploit, just to name-drop a few. The crazy part is I hadn't written a single PowerShell script before this project. (It certainly helps that I've written a few scripts and executables over the years.) Since then, I've come up with quite a few. If you are interested in any of them, they are available on GitHub, along with other scripts I've written. Nothing to run off to a con for, but stuff I've created to help me during my assessments.
For this client, everything was locked down. They issued me my resources, which were hardened to the max. I couldn't plug in or install anything without getting locked out for the rest of the day. Needless to say, for the initial few days, I didn't get paid. Then I finally discovered PowerShell. I got into spaces that I wasn't supposed to be able to, remotely executed code where I maybe shouldn't have, and learned more than I should have known.
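For readers who haven't seen PowerShell remoting in action, here's a minimal sketch of the general idea; the host name is hypothetical, and this is nothing like the actual commands I ran on the client's systems:

# Minimal remoting sketch. 'POS-REGISTER-01' is a made-up host name.
# The script block executes on the remote machine, not locally.
Invoke-Command -ComputerName 'POS-REGISTER-01' -ScriptBlock {
    Get-Service | Where-Object { $_.Status -eq 'Running' }
}

If remoting is enabled and your account has the rights (as mine evidently did), that's all it takes to run arbitrary commands on a remote box.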
Thank God the client was more grateful than angry. They had brought in several penetration testing teams to find all of the weaknesses in their environment and applications. (Wish I got a piece of that action...) And I was the first to report on the excessive rights PowerShell was given.
All this just to tell you that I provided them a "one-liner" type of solution, using just PowerShell, to detect if the vendor's solution fails.
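The real one-liner stays under the NDA, but in spirit it looked something like this hypothetical example, where the log path, the match pattern, and the alert are all stand-ins:

# Hypothetical one-liner in the same spirit: tail a vendor status log and
# warn on any line showing the protection flag has flipped. All names are stand-ins.
Get-Content -Path 'C:\Vendor\status.log' -Wait |
    Where-Object { $_ -match 'Protected=false' } |
    ForEach-Object { Write-Warning "Vendor protection failed: $_" }

One pipeline, no agents, no extra tooling; just the shell that was already sitting on every box.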