Thursday, March 05, 2015

Predictions Revisited, The Eye of Mordor, Crypto and Amazon?

The Eye of Mordor principle - my prevailing theory of computer security - has proven itself in the world of SSL. SSL put the "one ring" on, and the Eye of Mordor looked into it. These past six months have seen intense scrutiny focused on encryption. After BEAST, CRIME and a few other attacks, we got POODLE (PO, or Padding Oracle) extended to TLS 1.0 and 1.1 - largely ignored by the industry - and FREAK on SSL and TLS.

At Black Hat, some amazing crowd-sourced hacks against standard cryptography were demonstrated. These attacks generally used ordinary, simple means of reducing the number of rounds of brute forcing. Nothing super magical, really - admittedly not much more than grade-school math and the application of common sense. Some common algorithms were cracked in real time at the show. Cool... but how?

Much of what we rely on is built by practical people who prize speed, graceful error handling, solid design principles and extensibility. These are admirable features, but all good intentions can be, and will be, taken advantage of. Timing attacks are an obvious one (speed), and graceful error handling combined with timing attacks provided some fun this past year. A slightly less obvious example lies in the solid design principles around, say, generating a key. In this illustration, a strong algorithm is required to produce a near-perfect random string of data so that the cryptography operating on that key is not weakened. Sounds good, right? It is. This is such a prized principle that it even makes NIST's standard 800-57. Awesome - it's now a borderline regulation to have exceptionally random key material. It must be very helpful.

But to a hacker, this means the key data is going to be highly random. So what happens if you discard all the low-entropy samples from the potential brute-force data - throw out anything with significant runs of the same number or letter? The reason we can do this is that a low-entropy key in an encryption algorithm may have the side effect of letting patterns bleed through that hint at the underlying data; that would be considered a weak key, and key generation avoids producing one. Knowing that key generation favors high entropy doesn't give the attacker the key - far from it - and it may mean you've thrown the baby out with the bathwater. BUT it means there is now slightly more than half an ocean of likely possibilities where before there was a full ocean. This is kind of like Sauron knowing that Frodo has to come to Mount Doom to destroy the ring. Anything that has constraints can have those constraints used against it.
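To make that concrete, here's a toy sketch of the idea (my own illustration, not any specific attack shown at Black Hat): score every candidate key by its Shannon entropy and only brute-force the "random looking" ones. The tiny 4-byte keys, the 8-symbol alphabet and the 1.9 bits-per-byte threshold are all hypothetical numbers chosen so the example runs in about a second.

import math
from itertools import product

def entropy_bits_per_byte(key):
    # Shannon entropy of the byte distribution within one candidate key
    counts = {}
    for b in key:
        counts[b] = counts.get(b, 0) + 1
    total = float(len(key))
    return -sum((c / total) * math.log(c / total, 2) for c in counts.values())

alphabet = bytes(range(8))                       # toy alphabet: 8 possible byte values
all_keys = [bytes(k) for k in product(alphabet, repeat=4)]

# Discard "weak looking" candidates dominated by repeated bytes; a real key
# generator following the high-entropy rule would almost never produce them.
threshold = 1.9
likely_keys = [k for k in all_keys if entropy_bits_per_byte(k) >= threshold]

print("full keyspace:    ", len(all_keys))
print("filtered keyspace:", len(likely_keys))
# A brute-force loop would try likely_keys first (or only), shrinking the ocean.

With these toy numbers the filter keeps well under half of the keyspace - the point being that a rule the defender must follow becomes an assumption the attacker gets for free.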

The FREAK exploit may or may not use statistically reduced brute force, but it certainly takes advantage of the awesomely benevolent programming feature of extensibility. SSL allows extensibility by helpfully providing a list of algorithms that can be negotiated during the handshake phase. Unfortunately, this allows negotiating down to less secure encryption. Who knows - it might even be possible to negotiate down to no encryption at all if both sides supported a null cipher. In the case of FREAK, we are able to downgrade to "export"-approved 512-bit RSA. Export-grade RSA was meant for countries we deem to be enemies, yet most institutions, both government and private sector, ironically and very helpfully still offer this weak crypto during the negotiation phase of SSL. So a man-in-the-middle just needs to tweak the packet that specifies the crypto, and a few virtual CPU cycles later - enough to factor a 512-bit RSA modulus - they have the key protecting your data. This feature that provides extensibility at the expense of security is a typical pattern we see repeated in security flaws.
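For a feel of how the downgrade works, here's a toy model. This is not the real TLS wire format, and it simplifies FREAK itself (in the real attack, vulnerable client code accepted an export-grade key it had never even asked for). The suite lists below are illustrative; the point is that as long as a server still supports an EXPORT suite, a packet-tweaking middleman can steer the negotiation toward it.

CLIENT_OFFER = [
    "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
    "TLS_RSA_WITH_AES_256_CBC_SHA",
]

EXPORT_SUITES = [
    "TLS_RSA_EXPORT_WITH_RC4_40_MD5",        # 512-bit "export grade" RSA
    "TLS_RSA_EXPORT_WITH_DES40_CBC_SHA",
]

def mitm_rewrite(offered):
    # What the man-in-the-middle does to the (toy) ClientHello: swap the
    # client's strong suites for export-grade ones, ignoring what was asked for.
    return list(EXPORT_SUITES)

def server_pick(offered, supported):
    # A "helpful" server picks the first offered suite it supports.
    for suite in offered:
        if suite in supported:
            return suite
    return None

SERVER_SUPPORTS = {
    "TLS_RSA_WITH_AES_256_CBC_SHA",
    "TLS_RSA_EXPORT_WITH_RC4_40_MD5",        # still enabled "for compatibility"
}

chosen = server_pick(mitm_rewrite(CLIENT_OFFER), SERVER_SUPPORTS)
print("negotiated:", chosen)
# Prints the export suite; all that's left for the attacker is factoring a
# 512-bit RSA modulus offline - a matter of hours of rented compute in 2015.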

Any time we help users and make things more convenient or accessible, we are simultaneously lowering our guard. So where is the balance on the side of security in this equation? I think the answer is that there is no strong security stance that maintains any sort of balance between ease and security. At some point, some compromise has to be chosen where things aren't perfect but are maybe better than average. The trick is to put a time limit on those compromises, actually expire the old methods, and move on to the next awesome thing before someone breaks the system. Yes, most companies and governments will fall down on the job and leave things in too long. Who wants to spend all their development time upgrading legacy stuff that still works?

Is there hope? Maybe... With the evolution of processing power, the flexibility of the cloud, and the new technologies on the horizon, there is some hope in quantum computing. But between, say, 2017 and 2020, we'll go through a period where encryption will be pretty much useless, until quantum encryption is available to business at a reasonable price - perhaps via cloud-attached hardware services. It will be a point like in the movie Sneakers, where the guy develops the system that cracks all encryption and suddenly every type of hack is possible. We are already at a weird point where most of our secure transactions leak through side channels, and metadata and inference can let people know most of what's happening anyway. That's not unlike real life, where people talk about private things and others overhear. The same thing happens in computing. You think your doctor visit is private until you realize the person in the next room can hear the doctor detailing your ailments anyway. In the real world we ignore that; in the digital world we tend not to, because of economies of scale. Perhaps there is hope hiding somewhere, wrapped in chatty and verbose web services in the cloud.

Currently, Amazon.com has a new Key Management Service in the cloud that does take some of the pain out of building strongly managed crypto systems with proper keys and rotation schemes. Unfortunately, it doesn't take the pain out of creating encrypted data fields automatically and seamlessly inside your ORM layer: POST, wait, get your data back, then write it. It remains to be seen what kind of performance you get from shuttling heavy JSON back and forth over hopefully-TLS (referred to in their whitepaper as "SSL") to a back-end system with hopefully non-FREAK algorithms, waiting for a web service to deserialize, check permissions, load keys, encrypt, re-serialize and tunnel everything back - in an app with thousands of users writing millions of rows of data per hour. But the tools are gradually getting better and, most importantly, simpler. Maybe this is a step in the right direction. Unfortunately, for developers, Mount Doom is getting steeper by the step. Acceptable chaining modes, key wrapping and hashing algorithms seem to change almost yearly, and tools offer so many options and modes (here we go again with extensibility "features") that some developers are still mistakenly implementing weak combinations of strong things. Thankfully, Amazon does address that somewhat, with its service toolkit largely forcing you toward better combinations of options and simpler code. But real-world attacks are still progressing at a rate that outpaces the tool-sets, often striking at points of integration where even the best block chaining mode doesn't help. Remember, Frodo had to go to the edge of the volcano before getting rid of the ring. I suspect we'll see something analogous with SSL, TLS and one or two encryption algorithms - maybe the loss of a finger and "The One Ring" disappearing in a flash of fire - before the Eye shifts to something else.
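For what it's worth, here is a minimal sketch of the envelope-encryption pattern KMS nudges you toward, using the boto3 Python SDK: only the small data key round-trips to Amazon, and the bulk encryption happens locally, which is also the usual answer to the POST-and-wait performance worry above. The key alias, the region, and the use of the cryptography library's Fernet wrapper are my own illustrative choices, not anything prescribed by Amazon.

import base64
import boto3
from cryptography.fernet import Fernet   # local symmetric encryption

kms = boto3.client("kms", region_name="us-east-1")   # region is illustrative

def encrypt_field(plaintext, key_alias="alias/app-data-key"):
    # Envelope encryption: KMS generates a fresh data key; we encrypt locally.
    resp = kms.generate_data_key(KeyId=key_alias, KeySpec="AES_256")
    local_key = base64.urlsafe_b64encode(resp["Plaintext"])   # Fernet wants base64
    ciphertext = Fernet(local_key).encrypt(plaintext.encode())
    # Store the wrapped data key next to the ciphertext; only KMS can unwrap it.
    return resp["CiphertextBlob"], ciphertext

def decrypt_field(wrapped_key, ciphertext):
    resp = kms.decrypt(CiphertextBlob=wrapped_key)
    local_key = base64.urlsafe_b64encode(resp["Plaintext"])
    return Fernet(local_key).decrypt(ciphertext).decode()

wrapped, blob = encrypt_field("example secret field value")
print(decrypt_field(wrapped, blob))

The ORM still has to call something like this per field, so it doesn't solve the seamlessness problem, but it does keep the heavy data off the wire and the key handling out of developers' hands.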

David