Monday, December 28, 2015

Bypassing Bastion Hosts

Have you ever wanted (i.e., needed) to install a piece of software on a client's resource (e.g., laptop) and couldn't because (a) their uber-paranoid security team locked it down, (b) it's too much effort to crack it open using devious methods, and (c) here's the kicker: you have an ethical (and contractual) obligation not to?  Me too!

It was one of those days where I had some idle time, for whatever the legitimate reasons were.  I was being bugged by a vendor to give their absolutely, positively, false-positive-free, error-free application security scanner a try.  But how could I?  After all, I barely had enough rights to even log in.

Off the top of my head I had an idea that seemed simple enough to try.  Sure, I could have come up with a dastardly social engineering scheme to be granted admin rights (REM the aforementioned boundaries), or an easier route would be to just install a virtual environment.  But that meant going back to the first problem statement.  Or does it?  Can you say portable app?

Did you know that VMware Player is a portable application?  Knowing this dubious trick, I now have a way to set up a platform where I can pretend to be the almighty admin.  Now onto avoiding the next snare in our path: access to the installation image of a Microsoft Windows operating system.

Since the demise of my TechNet account, I could no longer just pop over and download what I wanted.  That left me no other option, given I left my ISOs in my other pants pocket.  And of course an idea popped into my head that is essentially a non-option, something to avoid for all the obvious reasons.  Don't tell me you haven't done this one before: downloading "free" software from some nefarious website with a product key that has only a slight chance of actually working.  (Not to mention all of the "tag-alongs"...)

Guess what I found?  Out of the goodness of Microsoft's heart (Yay Microsoft! Yeah, I said it...), they provide VM images free of charge for just this sort of purpose.  Their actual intent is to give developers a test bed to run the various incarnations of the Internet Explorer browser against their web site.  What they actually gave us is a fully functioning operating system that happens to have the browser installed.  The only downside: it expires after 90 days.  But for my purposes that was 89 more days than I needed.

And at the time this blog was written, it's available without having to give up any of your personal information.  You know the drill: register, wait for creds, check your email and then log in.  (In reality, you probably already provided all of your personal information just by visiting their site.)  Not paranoid much, just saying...

The URI is:

https://dev.windows.com/en-us/microsoft-edge/tools/vms/windows/

You might also want to check out their scanner that's supposed to help you find which configuration settings are missing.

https://dev.windows.com/en-us/microsoft-edge/tools/staticscan/

[BTW, if you didn't first peek at the link to where the URI is taking you, you're doing it wrong.]

Also, here is a list of websites to test the vendor's fabulous guaranteed-not-to-fail tool against.  A couple of lists to choose from:

https://www.vulnhub.com/
http://blog.taddong.com/2011/10/hacking-vulnerable-web-applications.html

Google it (yes, I said Google, the source of all evil... oh wait, isn't that Microsoft?) if you don't like the choices I've provided.

The rest is pretty straightforward.  See Dick install, see Jane configure, see Spot install and configure blindfolded with one paw tied behind his back, now run.  Because the dog can apparently kick your butt.

Thursday, December 17, 2015

Secure HTTP Headers

Two of my favorite websites to comb through are CyberPunk and Kitploit.  Sure, there are plenty of other websites that post the same type of content, but comparatively both sites do an excellent job of staying current with all things security.  Plus, the layouts are easy to navigate and aesthetically pleasing, and for someone like me the subjects are categorized, so finding stuff isn't such a pain in the 4th point of contact.

The reason for the web crush(es) is the timely discovery of tools written by fellow security wonks like Scott Helme, who created a tool that checks HTTP headers for security settings.  Nothing earth-shattering, but when it comes to defense in depth, it doesn't hurt to add a couple more layers.




Implementing the following configuration enhancements can help any web-facing application combat the likes of cross-site scripting (XSS) and clickjacking attacks.

Cross-Site Tracing - TRACE echoes back to the client whatever string is sent to the server and is meant for debugging purposes.  This includes cookie and web authentication strings, since they are just simple HTTP headers themselves.  On IIS, update the verb in Request Filtering to DENY for TRACE.
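
If you're on IIS and would rather script the change than click through the GUI, here's a minimal sketch (assuming the WebAdministration module that ships with IIS 7 and later) that denies the TRACE verb through Request Filtering:

Import-Module WebAdministration
# Deny the TRACE verb server-wide through Request Filtering
Add-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' `
    -Filter 'system.webServer/security/requestFiltering/verbs' `
    -Name '.' -Value @{ verb = 'TRACE'; allowed = $false }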


Strict-Transport-Security - HTTP Strict Transport Security is an excellent feature to support on your site and strengthens your implementation of TLS by getting the User Agent to enforce the use of HTTPS. Recommended value "Strict-Transport-Security: max-age=31536000; includeSubDomains".


Content-Security-Policy - Content Security Policy is an effective measure to protect your site from XSS attacks. By whitelisting sources of approved content, you can prevent the browser from loading malicious assets.


Public-Key-Pins - HTTP Public Key Pinning protects your site from MiTM attacks using rogue X.509 certificates. By whitelisting only the identities that the browser should trust, your users are protected in the event a certificate authority is compromised.


X-Frame-Options - The X-Frame-Options header tells the browser whether you want to allow your site to be framed or not. By preventing a browser from framing the site, you can defend against attacks like clickjacking. Recommended value "X-Frame-Options: SAMEORIGIN".


X-XSS-Protection - The X-XSS-Protection header sets the configuration for the cross-site scripting filter built into most browsers. Recommended value "X-XSS-Protection: 1; mode=block".


X-Content-Type-Options - The X-Content-Type-Options header stops a browser from trying to MIME-sniff the content type and forces it to stick with the declared content-type. This helps to reduce the danger of drive-by downloads. Recommended value "X-Content-Type-Options: nosniff".


Server - This header advertises the software running on the server.  You can remove or change this value.

X-Powered-By - The X-Powered-By header can usually be seen with values like "PHP/5.5.9-1ubuntu4.5" or "ASP.NET". Trying to minimize the amount of information you give out about your server is a good idea. This header should be removed or the value changed.
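
For the IIS crowd, the headers above that come with recommended values can be added with the same WebAdministration module used for the TRACE fix.  A minimal sketch; 'Default Web Site' is a placeholder for your actual site name:

Import-Module WebAdministration
# Add each recommended security header to the site's customHeaders collection
$headers = @{
    'Strict-Transport-Security' = 'max-age=31536000; includeSubDomains'
    'X-Frame-Options'           = 'SAMEORIGIN'
    'X-XSS-Protection'          = '1; mode=block'
    'X-Content-Type-Options'    = 'nosniff'
}
foreach ($h in $headers.GetEnumerator()) {
    Add-WebConfigurationProperty -PSPath 'IIS:\Sites\Default Web Site' `
        -Filter 'system.webServer/httpProtocol/customHeaders' `
        -Name '.' -Value @{ name = $h.Key; value = $h.Value }
}

And to spot-check what a site actually returns against the list above, a quick PowerShell loop does the trick ('https://example.com' being a stand-in for the site you're checking):

# Report which of the recommended headers are present in the response
$expected = 'Strict-Transport-Security', 'Content-Security-Policy', 'Public-Key-Pins',
            'X-Frame-Options', 'X-XSS-Protection', 'X-Content-Type-Options'
$resp = Invoke-WebRequest -Uri 'https://example.com' -Method Head -UseBasicParsing
foreach ($name in $expected) {
    if ($resp.Headers.Keys -contains $name) { "$name : $($resp.Headers[$name])" }
    else { "$name : MISSING" }
}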



Saturday, October 10, 2015

AutoSSH

At times a VPN can be such a monstrosity that it's not worth the overhead it entails.  Sometimes I just need the comforting beacon of a command prompt, always there when you need it: SSH.  The following are the steps I've taken to create a tranquil state of pragmatism.

Step 1: Create an account on the remote-host to be used to SSH with instead of root

# adduser [username]
Adding user `[username]' ...
Adding new group `[username]' (1000) ...
Adding new user `[username]' (1000) with group `[username]' ...
Creating home directory `/home/[username]' ...
Copying files from `/etc/skel' ...
Enter new UNIX password: [Enter a password]
Retype new UNIX password: [Re-enter the same password]
passwd: password updated successfully
Changing the user information for [username]
Enter the new value, or press ENTER for the default
 Full Name []: [Press enter key]
 Room Number []: [Press enter key]
 Work Phone []: [Press enter key]
 Home Phone []: [Press enter key]
 Other []: [Press enter key]
Is the information correct? [Y/n]:[Press enter key]

Step 2: Add the user into the sudo group on the remote-host

# usermod -aG sudo [username]

Step 3: If applicable, create public and private keys using ssh-keygen on the local-host

# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/to/[username]/.ssh/id_rsa):[Press enter key]
Enter passphrase (empty for no passphrase): [Press enter key]
Enter same passphrase again: [Press enter key]
Your identification has been saved in /to/[username]/.ssh/id_rsa.
Your public key has been saved in /to/[username]/.ssh/id_rsa.pub.
The key fingerprint is:
ff:ff:ff:ff:ff:ff:ff:ff:ff:ff:ff:ff:ff:ff:ff:ff root@local-host

Step 4: If applicable, install openssh-server, openssh-client and autossh on the local-host and remote-host

# apt-get install openssh-client openssh-server autossh

Step 5: Copy the public key to remote-host using ssh-copy-id

# ssh-copy-id [username]@[remote-host]

Step 6: Set up a (test) reverse tunnel from the local-host

# ssh -R 4444:localhost:22 [username]@[remote-host] -i /to/[username]/.ssh/id_rsa

Step 7: Verify the remote-host is listening on the forwarded port

# lsof -i -n -P | grep -i "listen"
sshd    19780        [username]    8u  IPv6  43920      0t0  TCP [::1]:4444 (LISTEN)
sshd    19780        [username]    9u  IPv4  43921      0t0  TCP 127.0.0.1:4444 (LISTEN)

Step 8: From the remote-host, ssh into the local-host

# ssh [username]@127.0.0.1 -p 4444
The authenticity of host '[127.0.0.1]:4444 ([127.0.0.1]:4444)' can't be established.
ECDSA key fingerprint is SHA256:AbCdEfGhIjKlMnOpQrStUvWxYz.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '[127.0.0.1]:4444' (ECDSA) to the list of known hosts.
Password: [local-host's Password]
Last login: Mon Jan 1 00:00:01 1979
local-host:~ [username]$ 

NOTE: You might need to either repeat step 5 OR simply reboot

Step 9: Set up autossh (note: the -M monitoring port must be a free port and must not collide with the forwarded port, so don't reuse 4444 here)

# autossh -f -M 20000 -N -R 4444:localhost:22 [username]@[remote-host]







Wednesday, October 7, 2015

Security Architect

I'm on the hunt for my next project.  Okay, let's clarify that last statement: a project that pays!  One of the areas I would like to focus on is Security Architecture.  Why?  Well, maybe the rest of this blog will help with the explanation.

A couple of consulting gigs ago, I did a two-year stint as an SA for a large retailer, and I really enjoyed every moment there.  It satisfied my need to be introduced to new technologies on a fairly frequent basis.  It was extremely satisfying to have been personally involved in transforming the company's business lines' view of Security from persistent naysayers to full-fledged business enablers.  It helped drive my desire to develop ways of communicating security more in business terms.  For example, instead of Security Risk Management, it's Operational Risk.  I threw out the formulas associated with explaining risk, and the colored tables that, at the heart of it, a layperson shouldn't have to earn a PhD just to learn how to use.  Instead, I described risk in terms of how security prevents whatever business solution from failing to deliver the goods and services to the client.  If you couldn't tell, I didn't actually throw the baby out with the bath water.  Instead, the experience taught me a valuable lesson: security should be described in terms of saving money and, if possible, making money.  And most importantly, I got to work alongside some pretty intelligent and gifted people instead of just the machines!

Another area where my OCD helps out, and one that I've been told I'm really good at, is process development.  Within this same company I developed, documented and mapped out the intake procedures, projects' documentation requirements, third-party risk management, escalation procedures, contractual review, the risk assessment plan, independent assessor, and the final deliverable(s).  I was even accepted to speak at a local security conference about what we had accomplished.


If you are interested, here are the highlights of my presentation on implementing Security Architecture in a fast-paced, bleeding-edge company.

The TAO of SA

In my opinion, there are two different types of SAs: one who focuses on designing and implementing security services at an enterprise level, in line with a service-oriented architecture (SOA) state of mind.  Then there is the type I gravitate towards: someone who develops security requirements based on established security policies and standards, conducts analysis of the proposed design patterns, and conducts testing to ensure the security requirements have been met from the application to the infrastructure layer and everything in between.
(And based on my recent interviews, a third version is on the rise: the software security architect.  As the name implies, it defines the requirements and testing as the aforementioned SA does, but is more heavily centered on having a (professional) software development background.)


Security Architecture Methodologies

To complicate matters, there are a few security architecture methodologies to choose from:

  • SABSA - Sherwood Applied Business Security Architecture
  • TOGAF - The Open Group Architecture Framework
  • OSA - Open Security Architecture

Which one to use?  In my opinion it needs to be a cultural fit, be easily operationalized across multiple projects by one person, and answer the basic needs of the company in determining the state of its security posture, which I define as a company's resiliency to handle service disruption to the delivery of goods and/or services.  I could be wrong; I'm just going by a couple of hundred security assessments across multiple business lines and what worked and what did not.


Goal of Security Architecture:

Security Architecture enables the business units and executive management to:

  • Identify operational risks
  • Validate appropriate controls are in place
  • Meet privacy and data protection concerns from a reputational and regulatory compliance standpoint

Each has its pros and cons, but they share a common purpose: providing a formal review process to ensure consistency and repeatability in determining risk.  One of the most important components, no matter what methodology your organization follows, is threat modeling analysis (TMA).


Threat Modeling Methodologies

At the heart of any SA assessment is threat modeling.  Here are but a few threat modeling methodologies to choose from:

  • STRIDE - Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege
  • DREAD - Damage, Reproducibility, Exploitability, Affected Users, Discoverability
  • PASTA - Process for Attack Simulation and Threat Analysis
  • Trike
  • OWASP Application Threat Modeling 

Again, there are pros and cons, but each is meant to provide some formality and uniformity in identifying risk to whatever you are trying to measure.


Goal of Threat Modeling:

The goal of threat modeling should be to:
  • Identify what, where, when, who and how informational assets will be utilized
  • Identify where the riskiest points exist
  • Determine where the appropriate controls should exist at the application and supporting infrastructure layers


Summary

As far as methodologies go, I'm partial to OSA.  To me it provides the quickest and most logical approach.  Besides understanding the business requirements, this methodology works best at translating what the informational assets touch, handle, transmit, store, process, and transform into which security patterns offer the best means to detect and prevent most threats known today.

Regarding threat modeling, for software/application-based assessments I think the OWASP Application Threat Modeling approach makes the most sense.  This model utilizes data flow analysis, which complements OSA, and includes a library of threats that could have an adverse effect on each touch point (i.e., touches, handles, transmits, stores, processes, and transforms).

Sunday, September 13, 2015

EMV a.k.a., ICC

I recently completed a gig with a client who was suffering from data overload.

The task this particular client wanted solved was how to monitor the status of their 100,000-plus publicly facing systems.  You know the kind: something most don't even give a second thought to, but interact with on a daily basis.  Money storage devices, a.k.a. point of sale, etc.

Today's cash register is basically a specialized computer with some sort of human interface, a feedback mechanism (fancy way to say monitor), serial ports to support peripheral devices, memory, a mass storage device, applications and services on a non-proprietary operating system and, the crème de la crème, the "best" part, the avenue of attack most taken: a network interface of both the wireless and wired variety.  For this client there were over 100,000 such devices across thousands of stores.

To answer the question that just popped into your head: yes, they've been monitoring everything.  And given all of the data breaches (for those of you who need facts and figures...) they have been hearing slash reading about, they naturally asked themselves, "Are we next?"

But after re-reading the last paragraph, the truth just pops out.  Part of the problem is the sheer volume of information that was blinding them.  (Okay, I know: duh!)

In this case, the client took my advice.  Instead of sifting through every line of every log file looking for something that isn't clearly defined, they took the road less traveled and went with whitelisting.  The goal now was to detect if a certain value ever changed from true to false; the solution that was architected was designed to prevent the evildoers lurking in the shadows from ever exposing the PAN (including the PIN).  And according to the vendor, this is impossible.

But my blog today isn't about their tech, or me spilling my guts about any intimate details still bouncing around in my head.  What I can tell you is the thought process.

First, if you don't understand, get, or practice threat modeling, start doing it.  There are a lot of definitions and how-tos out there on the Internet; Google turned up 2 million results in just under 40 milliseconds.  Some are great at providing very granular definitions and methodologies, and some make me feel frustrated by how vaguely and obscurely they attempt to describe it.  That's my not-so-subtle way of saying they make something more difficult than it needs to be.

My methodology is based on the multitude of hours spent in my early years working with electronic engineers (and in the following years as a Security Architect).  And in the spirit of keeping it simple, I define my methodology simply as DPD (Data, Process and Documentation).

The reason I've broken it down to these three key points is because it's simple.  Define what you are trying to protect, why it is valuable, how it can be peeked at, stolen or destroyed, and, most importantly, make sure the parties involved are on the same sheet of music.  Understanding the "Data" (i.e., whatever we're trying to protect) helps me define how far I need to go in order to protect whatever it is.  Then Process, which can get a little more involved, but basically provides the context of who, what, where, when and how the data will be used: controlling who has access, following the principle of least privilege, and making sure there are appropriate detective and preventative controls in place.  The Documentation is the linchpin of the entire methodology.

Ideally, the initial documentation would come from the project team themselves.  This enables me to understand what the proposed solution is all about.  If not already described, I work with the project team to determine the ebb and flow of how the informational asset is to be processed, where it will be stored, what type of interaction is involved, and of course the level of human involvement.

The next step is to define the different ways the processes and interactive entry and exit points can be manipulated into revealing, or maliciously or unintentionally modifying, the informational assets, and what controls, if any, will be in place to detect and prevent nefarious activities.  Documentation also helps to solidify what was agreed upon at the time we worked on it, and acts as the communication conduit to other business lines, as well as the supporting IT groups, for what it is we're trying to accomplish.  (Now we're knee deep into my previous blog about Security Architecture.)

Kind of went off on a tangent there, but the point is: don't underestimate the importance of threat modeling.  Not all of us can just wing it and hope for the best.

Another seldom-talked-about approach is knowing what you are trying to solve, especially when you are looking at the sheer volume of data that is available.  But I'm catching myself this time by not going into how to tackle this problem without first defining it.  For this client, the problem was first to pick apart the question: "How do I know the vendor's solution is actually doing what they said it's supposed to do?"

By defining the problem statement, coming up with a solution that was both actionable and achievable became that much easier.  And yes, I did actually come up with a solution.  Unfortunately, I cannot reveal what it is because of the NDA that is in place.  But what I can tell you is what tool I used to solve it.

The answer: PowerShell.  Yup, the same one Microsoft developed to help administrators manage their Windows (headless) environments.  Also the one everyone is talking about at all of the security conferences such as DerbyCon, but in a more dastardly sort of way.  Penetration testing tool sets have been created solely in PowerShell, such as Empire, Nishang, and PowerSploit, just to name-drop a few.  The crazy part is I hadn't written a single PowerShell script before this project.  (It certainly helps that I've written a few scripts and executables over the years.)  Since then, I've come up with quite a few.  If you are interested in any of them, they are available on GitHub, along with other scripts I've written.  Nothing to run off to a con for, but stuff I've created to help me during my assessments.

For this client, everything was locked down.  They issued me my resources, which were hardened to the max.  I couldn't plug in or install anything without getting locked out for the rest of the day.  Needless to say, for the initial few days, I didn't get paid.  Then finally I discovered PowerShell.  I got into spaces that I wasn't supposed to be able to, remotely executed code maybe where I shouldn't have, and learned more than I should have known.

Thank God the client was more grateful than angry.  They had brought in several penetration testing teams to find all of the weaknesses in their environment and applications.  (Wish I got a piece of that action...)  And I was the first to report on the excessive rights PowerShell was given.

All this just to tell you that I provided them a "one-liner" type of solution using just PowerShell to detect if the vendor's solution fails.
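
I can't share the real thing, but to give you a flavor of the approach, here's a purely hypothetical sketch (the log path, the flag name and the message are all made up, not the client's actuals):

# Tail a vendor log and flag any line where the whitelisted value flips to false
Get-Content -Path 'C:\VendorApp\status.log' -Tail 0 -Wait |
    Select-String -SimpleMatch 'EncryptionEnabled=false' |
    ForEach-Object { Write-Warning "Whitelist violation: $($_.Line)" }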

Friday, August 7, 2015

A Use Case for Security Operational Software Application

The use case for an operationally oriented software application stems from the need to centralize the management of the numerous vulnerabilities found during the various stages of an application's lifecycle.  Security teams often manage and track multiple different types of assessments, ranging from automated scans with predetermined fields (i.e., that cannot be changed) to manually completed assessments where the fields are not required to follow a standard naming convention.

However, despite the lack of uniformity in the naming conventions used to describe vulnerabilities (a.k.a. findings), they can also be described through a common base set of characteristics.  These characteristics typically include a title, a description, results (the output of the finding), a risk rating and recommendations.  This provides an opportunity to normalize these common characteristics into a single unifying process.  The benefit of this approach is that it enables the Security Analyst to concentrate on causality and, more importantly, on developing solutions that are effective in both cost and mitigation.  It also enables management to make clear and decisive decisions on where the real problems potentially exist in the organization's resources and technology, in both I.T. and business operations.
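
As a toy illustration of what that normalization might look like (the field names below are mine for illustration, not any product's actual schema), each finding collapses into the common base set:

# One finding normalized into the common base set of characteristics
$finding = [pscustomobject]@{
    Title          = 'Reflected XSS in search parameter'
    Description    = 'User input is echoed back without output encoding'
    Results        = 'Payload submitted in the q parameter was returned unencoded'
    RiskRating     = 'High'
    Recommendation = 'Apply contextual output encoding to all reflected input'
    Source         = 'Manual assessment'   # or the name of the automated scanner
}
$finding | Export-Csv -Path 'findings.csv' -Append -NoTypeInformation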

Each finding can then be broken down further into categories, not only by the type of assessment that was completed, but also by which areas have the greatest impact on the stability, as well as the effectiveness, of any given application in maintaining the confidentiality and integrity of the data being accessed and modified on a daily basis.  By breaking down each finding by the characteristics of the vulnerability, management can then review where the issues lie within the secure software development lifecycle (SSDLC).  There are numerous benefits to this approach: for example, identification of where legal and regulatory requirements are not being met, of inattention to parts of existing (or non-existent) SSDLC processes, and of third-party development teams' adherence to their contractual obligations.

But before the Security Analyst can start to analyze the vulnerabilities and define the various characteristics each is associated with, they must be provided with consistent and repeatable processes with the attainable goal of maximizing both the group's and the organization's operational effectiveness.

This can be accomplished through various means, such as manual analysis through documentation review, or by automation and correlation processes.  Just as there are a multitude of ways to assess and analyze data, there are also numerous methods to manage it.  However, there are challenges to managing the volume of data typically associated with the discovery, tracking and monitoring of vulnerabilities.  These challenges typically consist of information sharing, data duplication, variation of the same data, historical references, accessibility and the versatility of reporting that addresses the organization's risk posture instead of just performance-based metrics.

To address these challenges, the organization must move towards a more formal workflow process.  This means eliminating the dependency on spreadsheets as a means to archive, process and manage its vulnerability data.  Besides the risks of accidental exposure inherent in the decentralized model spreadsheets represent, they lack the real-time visibility and insight organizations require for dealing with threats.  To overcome these challenges, the organization can look towards a more operationally oriented software application centered around a single codebase, database and workflow process.  More specifically, the operationally oriented software application must provide:
  • A single simplified operational process,
  • Multiple options to import various assessment methodologies,
  • Normalization of the data fields from the various reports,
  • Automated categorization of the records according to how the findings were discovered (a toy sketch follows this list), and
  • A workflow framework that supports risk analysis, exception and remediation management.
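
For the categorization bullet, a hypothetical sketch of what the automation amounts to once the records share a schema:

# Group normalized findings by how they were discovered
Import-Csv -Path 'findings.csv' |
    Group-Object -Property Source |
    ForEach-Object { '{0}: {1} finding(s)' -f $_.Name, $_.Count }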
It is this type of application that will enable the organization to reap the benefits of operational effectiveness.  And it's through this simplification process that security, in partnership with management, can combine their functional expertise to tailor processes and applications in a way that improves performance and, ultimately, visibility for dealing with today's cyber security landscape.