Application Whitelisting: Who Is in Control of Your Infrastructure Today?

August 29, 2012

On the surface nothing much seems to have changed over the last 10 years in relation to virus threats. New vulnerabilities appear regularly, patching has become the norm and antivirus software is doing its best to keep our networked world working. However, take a closer look and there has been a massive evolution.

Step Back in Time

30 years ago IBM launched the PC XT (model 5160), the first DOS-based PC to ship with a hard drive. And in less than three decades the “personal computer” has evolved to form the backbone of the networked world that we all rely on.

Yet the computer virus—so seemingly tied to the PC—actually existed almost a decade earlier. It took until 1986 for these two threads to come together, when the first PC virus, “Brain,” was born. By 2000, networks were spreading and so were worms like ILOVEYOU, which was considered one of the most damaging.

So where are we today? To a large extent nothing has changed, but the rise of targeted attacks and the involvement of nation states, often linked through viruses such as Stuxnet and Duqu, point to new and worrying developments.

Targeted attacks can be engineered to seek out a very specific machine, infrastructure or geography. They can target a single company, maybe with the intention of stealing trade secrets or discrediting that company. If you want a good example, just look at the infection map for Flame: it is tightly grouped around the Gulf States. The other development is the apparent involvement of the nation state. Whether this is true or not, the key learning point is the scale of the enterprise behind these attacks. In little more than two decades the resources that can be deployed have grown from “two brothers” to a nation state with the ability to shut down critical systems.

Today most organizations still use antivirus software that relies on a snapshot of the signatures of the bad stuff to be kept out. The major disadvantage of this approach is that it does not know what it does not know! So when a new or adapted threat that is not yet in the snapshot appears, it will be allowed to run. This is the classic mechanism by which some of these targeted attacks have managed to be so successful.

So what of the future? There is some light at the end of the tunnel in the form of application whitelisting. This technique has two parts. First, a snapshot of the computer is made; this snapshot contains signatures for all the programs, operating-system elements, drivers and so on that were originally installed. Second, an agent is installed that checks everything just before it runs to ensure it was in the snapshot. Even though this technique still uses signatures, it has the major advantage of being able to block unknown code and so prevent what are now known as “zero-day threats.”
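
To make this two-part technique concrete, here is a minimal sketch in Python of the core idea: hash each file recorded in the original snapshot, then allow a program to launch only if its current hash still matches. The snapshot file name and format are hypothetical, and a real agent such as SE46 hooks program execution at the operating-system level rather than wrapping launches in a script.

```python
# Minimal illustrative sketch of the two-part whitelisting idea (not how a real
# agent is implemented): a snapshot maps file paths to hashes, and a program is
# only launched if its current hash is still in that snapshot.
import hashlib
import json
import subprocess
import sys

SNAPSHOT_FILE = "whitelist_snapshot.json"  # hypothetical: {"/path/to/app": "<sha256 hex>", ...}

def sha256_of(path):
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def run_if_whitelisted(path):
    """Launch the program only if its hash matches the original snapshot."""
    with open(SNAPSHOT_FILE) as f:
        snapshot = json.load(f)
    if snapshot.get(path) != sha256_of(path):
        sys.exit("Blocked: %s is not in the whitelist snapshot" % path)
    subprocess.run([path])

if __name__ == "__main__":
    run_if_whitelisted(sys.argv[1])
```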

Application Whitelisting

So why do we still stick with less effective antivirus solutions when application whitelisting software now exists? Both techniques use signatures (in part), and signatures need to be generated and managed—so what is the issue?

Back in 1986 when the first PC virus came along, it was just that: a PC virus. By then PCs had been around for a few years, and each machine already contained thousands of executables. And each PC probably contained a different set of executables, because one was used in engineering and another in finance. So was it easier to look for the one piece of static bad stuff that was the same everywhere, or a variable amount of good stuff that was different everywhere? I think the answer is obvious!

Two things have changed in the last couple of decades that mean it is now time to reconsider the options. First, the numbers game: the amount of bad stuff grows daily, and some antivirus signature files contain around 20 million signatures. The good stuff, on the other hand, has not grown as fast, and a signature file for a standard operating system, such as Windows XP Professional, will contain something like 50 thousand signatures. Second, the rate of change: viruses used to be static programs that did not change, but nowadays they are written to self-adapt or operate in a command-and-control mode where they can be remotely updated. So what do you do now? Look for the 50 thousand relatively static signatures of the good stuff, or the growing 20 million adapting signatures of the bad stuff?

Signature Management

Solving the challenge of signature management removes the main reason application whitelisting is not as widely adopted as logic would suggest it should be. Most organizations hope they never see any bad stuff, and they have no expertise in the dark science of understanding it. So it is sensible that both the generation and updating of antivirus signatures be “outsourced” to the experts, and that is how the industry has developed. Application whitelisting appears to require the opposite approach: the organization itself has to both generate and update the signatures of the good stuff, because PCs are unique to every organization. Worse still is the worry that with application whitelisting the signature file of every PC might have to be different! Compare this with antivirus software, where the same signature file can be applied to almost every machine. Well, worry not: a step change is on the horizon. The “outsourcing” of signatures is possible with application whitelisting as well, simply by taking advantage of the increasing amount of signed software now available.

Today the concept of “signing” software is becoming commonplace. A signature contains metadata such as the software author, a checksum to verify that the object has not been altered, and versioning information. Signing uses a pair of keys, similar to SSL or SSH sessions. The private key used to sign the code is unique to a developer or organization. These keys can be self-generated or obtained from a trusted certificate authority (CA). When the public key used to authenticate the code signature can be traced back to a trusted root CA using a secure public key infrastructure (PKI), you know the code is genuine. We see this most commonly today in environments where we have not requested a given piece of code, so the source may not be immediately evident, for example a Java Web Start application accessed from your browser.
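
As a rough illustration of the verification half of this process, the sketch below uses the third-party Python cryptography package to check a detached signature over a piece of code against the developer’s public key. Real code-signing schemes (Authenticode, signed JARs and so on) wrap this core check in certificate chains and the metadata described above; the file names here are placeholders.

```python
# Hedged sketch of verifying a detached code signature with the developer's
# public key, using the third-party "cryptography" package. A full code-signing
# check would also walk the certificate chain back to a trusted root CA.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def signature_is_valid(code_path, signature_path, public_key_pem_path):
    """Return True if the signature over the code file verifies with the key."""
    with open(public_key_pem_path, "rb") as f:
        public_key = serialization.load_pem_public_key(f.read())
    with open(code_path, "rb") as f:
        code = f.read()
    with open(signature_path, "rb") as f:
        signature = f.read()
    try:
        # RSA with PKCS#1 v1.5 padding and SHA-256 is assumed here for illustration.
        public_key.verify(signature, code, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False

# Example usage (placeholder file names):
# print(signature_is_valid("installer.exe", "installer.sig", "vendor_pubkey.pem"))
```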

This same signing process can now be used by application whitelisting solutions, such as Cryptzone’s SE46. The agent that checks everything just before it runs clearly trusts the signatures generated for that PC in the first place (especially if they have been signed in a way similar to the above), but the trust model can be extended to include other signing authorities. This means it is now possible to have a Windows PC whose trust model is extended to include, say, Microsoft, Adobe and Cryptzone, so it can self-update without needing in-house management of the changing signatures. Effectively, the management of the signatures for the good stuff has been outsourced in much the same way as for antivirus.
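
As a sketch of how that extended trust model might work, building on the earlier snapshot example, code is allowed to run either because it matches the original snapshot or because it carries a valid signature from a publisher on a short trusted list. The get_verified_publisher helper below is hypothetical, standing in for a platform-specific signature check such as Authenticode verification, and the publisher names are illustrative only.

```python
# Illustrative sketch of an extended trust model: allow code that is either in
# the original snapshot or validly signed by a publisher on a trusted list.
# get_verified_publisher() is a hypothetical stand-in for the platform's
# signature-verification API; sha256_of() is the helper from the earlier sketch.
TRUSTED_PUBLISHERS = {"Microsoft", "Adobe", "Cryptzone"}  # illustrative names only

def get_verified_publisher(path):
    """Hypothetical: return the publisher name if the file carries a valid
    signature chaining to a trusted root CA, or None if unsigned/invalid."""
    raise NotImplementedError("platform-specific signature check goes here")

def is_allowed_to_run(path, snapshot):
    """Allow code from the snapshot, or code signed by a trusted publisher."""
    if snapshot.get(path) == sha256_of(path):
        return True
    return get_verified_publisher(path) in TRUSTED_PUBLISHERS
```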

Today application whitelisting is being most aggressively adopted for industrial control, medical and manufacturing systems, where there is less configuration variation and the outcomes of infection can be extremely severe. Wider PC protection will undoubtedly follow as the trust model is extended to include other signing authorities and self-updating signature files that need no in-house management become possible.

So, returning to the question, who is in control of your infrastructure today? If you are solely relying on antivirus protection, you can no longer be sure. If you apply certificate-based application whitelisting, however, you can be certain that just you and any developers you choose will be able to take control—and that will be all!

About the Author

Jamie Bodley-Scott is Account Director–Systems Integrators at Cryptzone.

Photo courtesy of cyanocorax
