Thursday, March 05, 2015

Predictions Revisited, The Eye of Mordor, Crypto and Amazon?

The Eye of Mordor principle - my prevailing theory of computer security - has proven itself in the world of SSL. SSL put on the One Ring, and the Eye of Mordor looked right at it. These past six months have seen intense scrutiny focused on encryption. After BEAST, CRIME, and a few other attacks, we got POODLE, a padding-oracle attack {whose TLS 1.0 and 1.1 variants were largely ignored by the industry}, and FREAK against SSL and TLS.

At Black Hat, some amazing crowd-sourced attacks on standard cryptography were demonstrated. These attacks generally used ordinary, simple means to reduce the number of rounds of brute forcing - nothing super magical, admittedly not much more than grade-school math and the application of common sense. Some common algorithms were cracked in real time at the show. Cool... but how?

Much of what we rely on is built by practical people who prize speed, graceful error handling, solid design principles, and extensibility. These are admirable qualities, but all good intentions can be, and will be, taken advantage of. Timing attacks are an obvious example (speed), and graceful error handling combined with timing attacks provided some fun this past year. A slightly less obvious example lies in solid design principles - say, when you're generating a key. A strong algorithm is required to produce a near-perfect random string of data so that the cryptography operating on that key is not weakened. Sounds good, right? It is. This principle is so prized that it even makes NIST's standard 800-57. Awesome: it's now borderline regulation to have exceptionally random key material. It must be very helpful. But to a hacker, it means the key data is going to be highly random - so what happens if you discard all low-entropy samples from the potential brute-force data? (Throw out anything with significant runs of the same number or letter.) We can do this because a low-entropy key may have the side effect of letting patterns bleed through the ciphertext that hint at the underlying data; that would be considered a weak key. Knowing that key generation favors high entropy doesn't give the attacker the key - far from it - and it may mean you've thrown the baby out with the bathwater. BUT it means that where there was once a full ocean of possibilities, there is now slightly more than half an ocean of likelier ones. This is kind of like Sauron knowing that Frodo has to come to Mount Doom to destroy the ring. Anything that has constraints can have those constraints used against it.
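To make that concrete, here's a rough sketch (mine, not from any of the Black Hat talks) of how an attacker might order a brute-force search to skip the low-entropy candidates that a standards-compliant key generator would almost never have produced:

```python
import math
import itertools

def shannon_entropy(s):
    """Shannon entropy, in bits per symbol, of the string s."""
    counts = {c: s.count(c) for c in set(s)}
    total = len(s)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def prioritized_keyspace(alphabet, length, min_entropy):
    """Yield candidate keys, skipping low-entropy ones (long runs,
    heavily skewed symbol counts) that a generator built to NIST-style
    randomness requirements is unlikely to have emitted."""
    for combo in itertools.product(alphabet, repeat=length):
        key = "".join(combo)
        if shannon_entropy(key) >= min_entropy:
            yield key

# Tiny demo: a 2-symbol alphabet and length 8. Demanding near-maximal
# entropy discards skewed strings like "aaaaaaab" up front.
full = 2 ** 8
kept = sum(1 for _ in prioritized_keyspace("ab", 8, 0.9))
print(kept, "of", full, "candidates survive the entropy filter")  # 182 of 256
```

The ocean hasn't drained, but the attacker now spends cycles on the likelier part of it first - exactly the "slightly more than half an ocean" effect described above.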

The FREAK exploit may or may not use statistically reduced brute force, but it certainly takes advantage of the awesomely benevolent programming feature of extensibility. SSL allows extensibility by helpfully providing a list of algorithms that can be negotiated during the handshake phase. Unfortunately, this allows negotiating down to less secure encryption. Who knows - it might even be possible to negotiate down to no encryption at all if both sides supported a null cipher. In the case of FREAK, an attacker can downgrade the connection to "export"-approved 512-bit RSA. Export-grade RSA was meant for countries we deem to be enemies, yet most institutions, both government and private sector, ironically and very helpfully still offer this weak crypto during the SSL negotiation phase. So a man-in-the-middle just needs to tweak the packet that specifies the crypto, and a few virtual CPU cycles later they have the decryption key for your data. This pattern - extensibility at the expense of security - is one we see repeated in security flaws.
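A toy model of the negotiation - made-up suite names, none of the real TLS handshake machinery - shows how little the man-in-the-middle actually has to do:

```python
# Toy model of cipher-suite negotiation (NOT real TLS): the server picks
# its most-preferred suite that also appears in the client's offer.
SERVER_PREFERENCE = [
    "RSA_2048_AES_256",   # strong
    "RSA_1024_RC4_128",   # weaker
    "RSA_512_EXPORT",     # export-grade, trivially breakable today
]

def negotiate(client_offer):
    """Return the server's top choice among the suites the client offered."""
    for suite in SERVER_PREFERENCE:
        if suite in client_offer:
            return suite
    raise ValueError("no common cipher suite")

honest_offer = ["RSA_2048_AES_256", "RSA_512_EXPORT"]
print(negotiate(honest_offer))    # the strong suite wins

# A man-in-the-middle who can rewrite the client's offer simply strips
# everything but the export suite, and the server happily agrees to it.
tampered_offer = [s for s in honest_offer if "EXPORT" in s]
print(negotiate(tampered_offer))  # forced down to RSA_512_EXPORT
```

The real attack needed one more ingredient - buggy clients that accepted an export-grade RSA key they hadn't asked for - but the downgrade step is this simple in spirit: whoever controls the offered list controls the floor.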

Any time we help users and make things more convenient or accessible, we simultaneously lower our guard. So where is the balance on the side of security in this equation? I think the answer is that there is no strong security stance that maintains any sort of balance between ease and security. At some point, some compromise has to be chosen where things aren't perfect but are maybe better than average. The trick is to put a time limit on those methods, actually expire them, and move on to the next awesome thing before someone breaks the system. Yes, most companies and governments will fall down on the job and leave things in place too long. Who wants to spend all their development time upgrading legacy stuff that still works?
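One way to make the expiry idea operational is to treat the sunset date as data and refuse anything past it. The algorithm names and dates below are purely illustrative - not an official deprecation calendar:

```python
from datetime import date

# Hypothetical sunset schedule; names and dates are illustrative only.
SUNSET = {
    "SSLv3":   date(2014, 10, 1),   # post-POODLE
    "TLS1.0":  date(2016, 6, 30),
    "RC4":     date(2015, 2, 1),
    "AES-GCM": date(2030, 1, 1),
}

def is_allowed(algorithm, today=None):
    """Refuse anything unknown or past its planned retirement date."""
    today = today or date.today()
    sunset = SUNSET.get(algorithm)
    return sunset is not None and today < sunset

print(is_allowed("SSLv3", date(2015, 3, 5)))    # False - already expired
print(is_allowed("AES-GCM", date(2015, 3, 5)))  # True - still in its window
```

The point isn't the table itself; it's that the expiry decision gets made once, up front, instead of being re-litigated every time someone asks "but it still works, why touch it?"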

Is there hope? Maybe... With the evolution of processing power, the flexibility of the cloud, and new technologies on the horizon, there is some hope in quantum computing. But between, say, 2017 and 2020, we'll go through a period where encryption will be pretty much useless, until quantum encryption is available to business at a reasonable price - perhaps via cloud-attached hardware services. It will be a moment like the one in the movie Sneakers, where a researcher develops a system that cracks all encryption and suddenly every type of hack is possible. We're already at a weird point where most of our secure transactions leak through side channels, and metadata and inference can tell people most of what's happening anyway. That's not unlike real life, where people talk about private things and others overhear. The same thing happens in computing. You think your doctor visit is private until you realize the person in the next room can hear the doctor detailing your ailments. In the real world we ignore that; in the digital world we tend not to, because of economies of scale. Perhaps there is hope hiding somewhere, wrapped in chatty and verbose web services in the cloud.

Amazon currently has a new Key Management Service in the cloud that does take some of the pain out of creating strongly managed crypto systems with proper keys and rotation schemes. Unfortunately, it doesn't take the pain out of creating encrypted data fields automatically and seamlessly inside your ORM layer: POST, wait, get your data back, then write it. It remains to be seen what kind of performance you get from shipping heavy JSON back and forth over (hopefully) TLS - referred to in their whitepaper as "SSL" - to a back-end system with (hopefully) non-FREAK-able algorithms, while waiting for a web service to deserialize, check permissions, load keys, encrypt, re-serialize, and tunnel everything back, in an app with thousands of users writing millions of rows of data per hour. But the tools are gradually getting better and, most importantly, simpler. Maybe this is a step in the right direction. Unfortunately, for developers, Mount Doom is getting steeper by the step. Acceptable chaining modes, key wrapping, and hashing algorithms seem to change almost yearly, and tools offer so many options and modes (here we go again with extensibility "features") that some developers are still mistakenly implementing weak combinations of strong things. Thankfully, Amazon does address that somewhat, with its service toolkit largely forcing you toward better combinations of options and simpler code. But real-world attacks are still progressing at a rate that outpaces the tool-sets, often striking at points of integration where even the best block chaining mode doesn't help. Remember, Frodo had to go to the edge of the volcano before getting rid of the ring. I suspect we'll see something analogous with SSL, TLS, and one or two encryption algorithms - maybe the loss of a finger and "The One Ring" disappearing in a flash of fire - before the Eye shifts to something else.
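What a KMS-style service sells is essentially envelope encryption: a fresh data key per payload, with only the small data key wrapped by the master key. Here's a local, toy sketch of that pattern using only the standard library. The HMAC-counter keystream stands in for a real cipher (use a vetted AEAD like AES-GCM in practice), and all the function names are mine, not Amazon's:

```python
import hmac
import hashlib
import secrets

def keystream_xor(key, data):
    """Toy stream cipher: XOR data against an HMAC-SHA256 counter
    keystream. Illustrative only - among other gaps, there's no nonce,
    so reusing a key across messages leaks the XOR of the plaintexts."""
    out = bytearray()
    for block in range((len(data) + 31) // 32):
        pad = hmac.new(key, block.to_bytes(8, "big"), hashlib.sha256).digest()
        chunk = data[block * 32:(block + 1) * 32]
        out.extend(b ^ p for b, p in zip(chunk, pad))
    return bytes(out)

def envelope_encrypt(master_key, plaintext):
    """KMS-style envelope encryption: a fresh random data key encrypts
    the payload; only the 32-byte data key is wrapped with the master."""
    data_key = secrets.token_bytes(32)
    wrapped = keystream_xor(master_key, data_key)
    return wrapped, keystream_xor(data_key, plaintext)

def envelope_decrypt(master_key, wrapped, ciphertext):
    data_key = keystream_xor(master_key, wrapped)  # XOR twice = unwrap
    return keystream_xor(data_key, ciphertext)

master = secrets.token_bytes(32)
wrapped, ct = envelope_encrypt(master, b"patient record #42")
assert envelope_decrypt(master, wrapped, ct) == b"patient record #42"
```

The win is that the bulk data never crosses the wire to the key service - only the tiny wrapped key does - which is also why the POST-and-wait round trip above hurts less than it otherwise would.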


Saturday, June 07, 2014

New significant issues - IE and OpenSSL

One Extremely Important Patch Tuesday!
This coming Patch Tuesday we'll (hopefully) have a patch for an IE bug that's been in the wild for about six months, depending on the source. It's a CDATA use-after-free flaw that can apparently be exploited via JavaScript, and it affects a broad swath of Windows systems. For once, the details have been withheld as near as I can tell, which is saying something. Usually someone leaks the info and you get a bunch of bad actors using the code. If it hasn't been leaked, it's a super-human triumph over our natural instinct to put ourselves above the security of others. Kudos to the researcher who apparently hung on for a very long time to a massive exploit that could otherwise have been running Godzilla-like through the computer world. Even though Microsoft was slow to put out a patch, the researcher held out and did the right thing, in my opinion.

OpenSSL was cracked wide open again
If you believe that only poorly made products are vulnerable to security issues, or if you're one of those who believe that only open software is exploit-free, you might want to rethink your position.

OpenSSL has been made much more "open" by a new CCS (ChangeCipherSpec) Injection bug. (here) It allows an attacker to force an OpenSSL implementation of SSL/TLS to use weak keying material, and thereby potentially lets a man-in-the-middle decrypt a session. But this is not the only issue... consider the DTLS recursion flaw, the DTLS invalid fragment vulnerability, the SSL_MODE_RELEASE_BUFFERS null pointer dereference, the SSL_MODE_RELEASE_BUFFERS session injection, and the anonymous ECDH denial of service. Basically, you have a recipe for disaster if you're an APT soldier for hire.

I believe two forces are at work in this sudden explosion of exploits against the underpinnings of our online world.
A) the Snowden revelations 
I know it may seem far-fetched, but the reasoning goes like this: if you know there is an organization with the ability to deconstruct and observe much of what we do online, you must also assume it has the means to do so. If you believe it has the means, you begin to open your mind to the possibility that the encryption systems we rely on are more vulnerable than we originally thought. From there, it's logical to take a second look at those systems. When we find significant flaws, we prove the supposition. And once the supposition is proven, the cycle begins again: we look deeper, find more issues, and so it goes.

B) the Eye of Mordor principle
When the curiosity of the hacking world focuses on a fad or the exploit-du-jour, we see a phenomenon I call the "Eye of Mordor": the collective focus of the hacking world. Once the Eye settles on a single product or company, the bugs start getting ferreted out. A case in point was the focus of the Eye on Microsoft's operating system. Now that Linux is represented on more desktops, it begins to draw the Eye, just like when Frodo put on the ring.

What does that mean for the future?
I predict we'll see a lot more ground-breaking attacks on crypto and on the underpinnings of the systems that employ it. We'll see the world begin to get more serious about staying secure from everyone and everything. No product will take off without strong encryption and bold marketing promises to keep data out of the hands of virtually everyone. Lastly, governments that like to skim data, in an effort to satisfy themselves that everyone is playing ball, will find other means of getting it... probably new regulation.

Friday, June 06, 2014

Is antivirus a waste of time?

Symantec turns from prevention to remediation (article) as the company comes to grips with dropping detection rates for new viruses and malware. Many savvy companies have already begun to analyse viruses using multi-engine systems like VirusTotal, which can generate a consensus on a piece of malware if you're lucky.

To what do we owe this great turn of events?
a) Could it be the cool tricks APTs use to bypass antivirus disclosed at RSA 2013?
b) The tips given at BlackHat 2013 to fool virus engines?
c) Could it be the codifying of those tricks into Metasploit for the script kiddies to push-button hack?
The answer is Yes.

The tricks used by APTs, and by hackers in general, to bypass anti-virus are very easy and extremely effective - so much so that trying to detect them would be almost impossible, and any detection would generate a huge number of false positives, since many legitimate programs share the same API calls. So the aforementioned revelation by Symantec is just common sense - not a shock, or even really that surprising.

So what is a person to do?

For a long time now I've championed the use of a Tripwire-like app: a simple hash of files and key registry segments. If those areas change, the user is given the opportunity to restore them to the original settings. You can take this idea as far as you want, with VMs or what have you. Users are not perfect, and we all know they can be fooled easily, but even savvy kids know that when they're surfing the web, nothing should get installed that they didn't ask for.
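A minimal sketch of the idea - baseline the hashes, then report drift - looks like this. It covers only the file half; the registry half would follow the same pattern against exported hive data:

```python
import hashlib

def snapshot(paths):
    """Record a SHA-256 baseline for each watched file."""
    baseline = {}
    for path in paths:
        with open(path, "rb") as f:
            baseline[path] = hashlib.sha256(f.read()).hexdigest()
    return baseline

def changed_files(baseline):
    """Return the paths whose contents no longer match the baseline
    (a deleted file counts as changed)."""
    drift = []
    for path, digest in baseline.items():
        try:
            with open(path, "rb") as f:
                current = hashlib.sha256(f.read()).hexdigest()
        except FileNotFoundError:
            current = None
        if current != digest:
            drift.append(path)
    return drift
```

Everything hard about the real product - protecting the baseline itself from tampering, and restoring the original bytes safely - sits on top of this core, but the detection loop really is this small.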

I agree it's time to go back to a leaner AV with greater attention to segmentation of information and an absolutely rock-solid restoration capability. There are few things more frustrating than removing a virus only to find you have to rebuild a home user's machine from scratch because tentacles of the bug still infest the remotest areas of the OS.

But good luck finding this kind of solution at a price you can stomach. Maybe an AV company will build it, but if the past is any indication, it will come with 100 megs of useless legacy crap installed alongside it. So far, freeware solutions seem to steer clear of this type of app, maybe due to patents in the area, or simply because it's dangerous to restore anything to a computer and risk the legal repercussions of not getting it perfect.

What does the business do?

Businesses will have to turn to multi-engine AV systems and to anomaly detection systems like FireEye, Tripwire, and Splunk to catch hacks after the fact. You can roll your own tools, write Snort rules, and block massive lists of IP addresses. I believe the industry is coming to a tipping point where lower-cost tools are needed. I enjoy writing my own, but most companies don't have people with the skills to spend a day or two on a new tool. Also, pet projects can take on a life of their own as their capabilities expand to support additional systems and log types. I recently wrote a sniffer and a log analysis system that feeds into SQL Server (with full-text search). A few stored procs shape the data into useful intel, but parsing new and varied types of logs becomes a pain point, and Splunk starts looking better if you have a wide variety of input. These are the kinds of decisions you'll find yourself dealing with more and more as the attackers continue to outpace the defenders.
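For a flavor of what "rolling your own" looks like, here's a stripped-down sketch of the pattern: parse lines into a database, then shape them with a query. SQLite stands in for SQL Server here, and the log format is an assumption of mine - yours will differ:

```python
import re
import sqlite3

# Assumed log line format (illustrative):
# 2014-06-07 12:00:01 203.0.113.9 GET /admin 404
LINE = re.compile(
    r"(?P<ts>\S+ \S+) (?P<ip>\S+) (?P<method>\S+) (?P<path>\S+) (?P<status>\d+)"
)

def load_logs(lines, conn):
    """Parse matching lines into a weblog table; skip anything malformed."""
    conn.execute("""CREATE TABLE IF NOT EXISTS weblog
                    (ts TEXT, ip TEXT, method TEXT, path TEXT, status INTEGER)""")
    for line in lines:
        m = LINE.match(line)
        if m:
            conn.execute("INSERT INTO weblog VALUES (?,?,?,?,?)",
                         (m["ts"], m["ip"], m["method"], m["path"],
                          int(m["status"])))
    conn.commit()

def top_404_sources(conn, limit=5):
    """A 'stored proc'-style shaping query: noisiest IPs probing for
    pages that don't exist."""
    return conn.execute("""SELECT ip, COUNT(*) AS hits FROM weblog
                           WHERE status = 404
                           GROUP BY ip ORDER BY hits DESC LIMIT ?""",
                        (limit,)).fetchall()
```

The regex is the part that rots: every new log source means another pattern and another schema tweak, which is exactly the pain point that makes a general-purpose indexer like Splunk start looking attractive.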

I'll talk more about how to deal with this tipping point shortly, as a reset is needed to tip the scales back in favor of defense.


Monday, January 20, 2014

Interesting traffic

I hooked up Fiddler the other day to do some run-of-the-mill testing and I started seeing these requests...


I shut down the browsers and continued to see this request periodically. I pulled up the URL in a browser and got nothing. It would seem to be continual polling performed by Windows to see if the government has anything to say - in which case, what happens? Would a dialog box pop up? Would you see an amber alert or a public service announcement warning about something dire? Is it more like an amber alert for cyber-attack information? Would Windows do something like lock down ports, or respond in some remote-controlled manner like an anti-botnet?

I'll continue to look into this and see what I can find.

Tuesday, September 24, 2013

Apple Lock-screen woes continue

After the lock-screen problems with iOS 6, where you could bypass the lock screen using the emergency call function, now iOS 7 has a lock-screen problem all its own.

With iOS 7 you can bypass the lock screen and access, transfer, view, email, tweet, and Facebook all the pictures on the phone. From the photo app you can access the contacts, including their phone numbers and email addresses. You can also send out career-ending tweets or Facebook posts. And you can turn on AirDrop and "drop" all their pictures to another phone - if you think of turning it on prior to locking the phone!

Videos online show the procedure but don't readily give an idea of how to reproduce it because timing is everything. Here's the HOW:

1) Push the home button to wake up the phone if it's turned off.
2) Slide the photo icon up from the bottom of the screen (this will activate the camera app if the phone's user had closed it previously)
3) Push the home button to go back to the locked home screen
4) Flick the bottom panel upward
5) Click calculator (calculator app opens)
6) Push the top button for about 5 seconds until the phone presents you with the "Slide to power off" message and more importantly the "Cancel" button at the bottom of the screen
*** Do these two things as one step and try to get the timing right ***
7) Click cancel, then push the home button twice (the push of the home button must happen about half a second after the push of the cancel button, and the second click of the home button about half a second after the first. The way I do it is to say "click, click" out loud, clicking the button with each verbal cue.)
8) You can scroll through open apps, but the only apps that let you in are the calculator and those available through the home-slide function

If you have an iPhone with iOS 7, I'd recommend keeping it with you physically. If you do have to leave it somewhere, you'll need to edit your settings and turn off access to Control Center from the lock screen. (This option still lets you use Control Center in unlocked mode.) And of course, watch for an OS update and install it ASAP when it's available!

Wednesday, February 06, 2013

The New Face of Pen Testing

A recent pen test changed my viewpoint on industry-standard practices. The old ways are no longer working. Not only are industry-standard pen tests not working, they are providing a false sense of security.

Given the latest generation of firewalls, which are adaptive and resistant to scanning, we would expect port scanning and repeated door-knocking to be less effective. So why are we still being charged for a type of scan that is ineffective?

Recently a Fortune 500 company performed a costly pen test on a network segment running critical systems and returned no results at all. Thinking that was impossible, I ran a few tests of my own with the same tools they used. I got two ports on the first IP and then nothing after that. I tried mixing it up, using NMAP with the stealthiest settings. I decided to pwn them with my Nessus skills, and even tried a zombie scan and spoofing different addresses; nothing worked. I was shut out. Nessus and the other tools failed to get anything from our ninja-like firewall.

Now that I had established that the standard tool-set was producing almost nothing except false warm fuzzies, I broke out "my" set of tools and did what I do very well: break the rules. We know that companies on the net are spying on us in unprecedented ways and without restriction. Why not take advantage of their legally sanctioned espionage and leverage their data sets to reflect our own systems?

A day and a half later, I had 138 vulnerabilities and counting, plus a full network diagram - IPs, servers, web sites, email addresses, technologies, phone numbers - all laid out in perfectly searchable and browseable order. A couple hours of hand testing and searching some specific vendor sites turned up even more issues not found by any of the new "black hat" tool-sets I use.

I came away with five very valuable lessons:
1) The cost of a pen test does not predict its effectiveness
2) Standard pen tests against active firewalls are useless and give a false sense of security
3) Standard pen tests use tools that are basically out-dated in four ways
    a) they rely heavily on port scanning
    b) they rely heavily on CVE lists and CVE lists don't have a monopoly on vulnerability info
    c) they don't leverage the Google factor enough
    d) they require firewall exceptions which distort the important "view of the hacker"
4) We need to use the tools hackers actually use, not the ones we're sold by the security intelligentsia
5) There never will be a substitute for hand testing

David Cross

Sunday, January 13, 2013

Java Security Woes

Rapid7 has published an exploit for Java versions prior to 7 update 7 that gives an attacker full control of the affected computer. All that needs to happen is luring a user to a web site running a particular bit of code. Now that this exploit is in the wild (publicly available to hackers and wannabes alike), you need to take action.

Lately, Oracle has had a bad run of security issues with Java. For years, vulnerability testers focused their efforts on Windows and other high-visibility targets, but now that Java runs on more machines worldwide than any other technology, hackers are taking notice.

Recently the US government - apparently a new source of computer security wisdom (yes, I am fully aware of the irony) - recommended turning Java off. Curiously, though, given all the machinations Java had to go through to get around Microsoft's proprietary protections, turning it off is rather more difficult than it would seem. Java runs outside the normal task-managed applications; you can't just pop up Task Manager and kill Java apps. You can't just turn it off with one browser setting either. There are many ways HTML can invoke Java, and half a dozen places you need to stop it, via registry tweaks and IE settings.

So now what can be done? If it's so hard to turn off that you can't be certain you've shut down every path, you may want to simply uninstall the entire JRE (Java Runtime Environment). That gets inconvenient if you're a Java developer; if you're not, it's probably the best way to ensure you're actually safe for the time being. YouTube is a good resource for understanding the removal process and does a better job than 100 static screenshots: How to uninstall Java from Windows, Uninstall Java from Mac.

If that solution doesn't sound good - say, because you or your child runs Minecraft and will be complaining the next day - you have the option of upgrading the JRE to 7 update 11 and praying really hard. If you like that option, here is the link to install JRE 7.11. (Yes, 11 major security patches in a year - YIKES!) So upgrading and taking your chances is always an option.

For now I've uninstalled JRE on my PC and will probably break down and install the update on my sons' computer for Minecraft.   

I hope this helps! I wish there were an easier way. Personally, I think merely disabling Java is the least recommended course of action, because it leaves you feeling secure even though there is likely still some way for the browser to invoke Java. Additionally, leaving the vulnerable version on your computer, even turned off, is a risk: at some point it will likely get turned back on, and it will be sitting there in an ideal state for cyber criminals to take advantage of.

If you choose "the road less traveled by" and it makes "all the difference" please let me know.


BTW, you'll want to patch up to 7.12 now. Good luck!