Wednesday, 26 October 2011

Fundamental principles of better security

Defence in depth

This is the daddy of all security advice. The idea is not to rely on any one particular technology, process or approach to security. By building up layers of defence you can tolerate the eventual compromise of one of the layers. For example, don't just run AV software and hope for the best: harden your systems and keep them up to date, then use technologies such as whitelisting, sandboxing and URL blacklisting to further reduce risk. Don't just rely on a perimeter firewall or NAT; firewall between internal networks and use IDSs as well.

Security is a process 

As with the defence in depth advice, don't ever expect to be completely secure. No product or solution will ever make you 100% hacker proof, regardless of what the marketing material claims. Security is a process of identifying vulnerabilities, prioritizing and mitigating risks, and making things a little bit better each time.

Security through obscurity

Sticking with the obvious advice: don't rely on security through obscurity. This means relying on the secrecy of a design, implementation or process for security. For example, rather than relying on some obscure proprietary encoding to protect data, use well-known and well-tested cryptography. It shouldn't matter that someone knows your data is protected with 256-bit AES.

Crypto is hard

Speaking of cryptography, it is difficult. Very difficult. So unless your clever idea to protect data has just been rigorously tested by the crypto community, you probably shouldn't use it. Use a standard library instead; don't roll your own. You're going to get it wrong. Even using standard libraries correctly can be tricky, so be careful.
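
As a small sketch of "use the standard library, don't roll your own": Python's stdlib already ships a vetted key-derivation function (PBKDF2-HMAC) and a constant-time comparison, which is enough to store and verify passwords without inventing anything. The iteration count here is illustrative, not a recommendation.

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # illustrative; tune to your hardware and threat model

def hash_password(password, salt=None):
    """Derive a password hash with PBKDF2-HMAC-SHA256 and a random salt."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, expected):
    """Re-derive and compare in constant time to avoid timing side channels."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, expected)
```

Note the two places a home-grown version usually goes wrong: forgetting the per-password random salt, and comparing hashes with `==` (which leaks timing information) instead of `hmac.compare_digest`.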


Keep it simple

The phrase Keep It Simple, Stupid applies to most areas of life, including security and IT in general. Basically, complexity is the enemy of good security. Complex systems are difficult to understand, manage and maintain. An unnecessarily complex system will degrade over time, which generally means holes and weaknesses developing. Bugs creep in and remain hidden. People become afraid of it, unwilling to investigate and fix issues.

Test, test, test

Speaking of bugs: test the security of code during development by means of code review, static analysis and so on. Test your applications before they go into production, e.g. through audits, fuzzing and standard QA testing. Test your infrastructure once it's in production by red-teaming and using external audits / pen testing.
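
Fuzzing is less exotic than it sounds. A minimal sketch: throw random input at a parsing routine and record anything that dies in an *unanticipated* way. The `parse_record` function below is a made-up stand-in for real application code, with a deliberate bug on empty input.

```python
import random
import string

def parse_record(line):
    """Toy parser standing in for real code: 'name:count'. Buggy on ''. """
    if line[0] == "#":               # comment line -- IndexError when line is ""
        return ("comment", 0)
    name, count = line.split(":", 1)
    return (name, int(count))

def fuzz(func, runs=1000, seed=1234):
    """Feed random printable strings to func; collect unexpected crash types."""
    rng = random.Random(seed)        # fixed seed so findings are reproducible
    crashes = []
    for _ in range(runs):
        data = "".join(rng.choice(string.printable)
                       for _ in range(rng.randint(0, 30)))
        try:
            func(data)
        except ValueError:
            pass                     # anticipated rejection of malformed input
        except Exception as exc:     # anything else is a finding
            crashes.append((data, type(exc).__name__))
    return crashes
```

Even this crude approach surfaces the `IndexError` on empty input, which ordinary happy-path QA testing would likely never hit. Real fuzzers (coverage-guided, grammar-aware) are far smarter, but the principle is the same.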

Temporary is rarely temporary

This might not sound like security advice, but it is. The classic example is the "any/any" firewall rule. It probably got put there during development or testing, it fixed the problem, and it probably has a comment like "TODO - Work around to get customer working before deadline. Must investigate and fix properly - A N Other, March 2005". Such hacks are sometimes unavoidable, but at least don't pretend that you'll get around to fixing them later. Just accept that it is going to be yet another permanent hack, and perhaps invest another 10 minutes to make it suck slightly less.

Weakest Link

Aside from all that dodgy hacked code and long forgotten cronjobs, your systems are in pretty good shape. However, security is like a chain: it is only as strong as the weakest link. Perhaps your database server is uber-hardened. But if you've got a five year old cronjob running somewhere on an unpatched six year old server that connects to your database with full admin privileges, that hardening is meaningless.

Principle of least privilege

So, why would that cronjob that simply dumps data need to run as an admin? It was probably created when the database only had one user - the admin. Applying the principle of least privilege means that people and systems should only be authorised to do what is required for them to successfully do their jobs. Another example is developers having root access on production systems. Is that really necessary? Or even sysadmins having full access to the development source code repositories. Following this principle helps guard against accidental and malicious abuse of power.
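
At the process level, least privilege often comes down to one habit: start with just enough rights to do the setup that needs them, then drop the rest before doing real work. A Unix-specific sketch (function name and uid/gid values are illustrative):

```python
import os

def drop_privileges(uid, gid):
    """Irreversibly drop root privileges to the given uid/gid (Unix only)."""
    if os.getuid() != 0:
        raise PermissionError("must start as root to drop privileges")
    os.setgroups([])   # shed supplementary groups first
    os.setgid(gid)     # group before user, or setgid() will fail
    os.setuid(uid)
    # Belt and braces: verify the drop actually took effect
    if os.getuid() == 0 or os.geteuid() == 0:
        raise RuntimeError("privilege drop failed")
```

The ordering matters: once `setuid()` has run, the process can no longer call `setgid()`, so groups must go first. The same idea applies to that data-dumping cronjob; it should authenticate to the database as a read-only reporting user, not as the admin.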

You cannot create data retrospectively

But you can throw it away. This is an operational point that extends beyond security. You can do some very interesting and insightful things with operational metrics, giving you new visibility into your systems and therefore greater understanding. From a security perspective, an IDS alarm might coincide with a jump in open file descriptors on a system, or a sudden jump in memory usage. Perhaps the number of lines per minute in an application log file jumps or drops. By collecting, graphing and reviewing this data you can lower the time it takes to understand an attack. There is more to life than just network traffic. Not using certain data? Archive or delete it after 12 months. Simple.
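
The lines-per-minute example above can be sketched in a few lines. This is a toy illustration, not a monitoring product: it buckets log timestamps into per-minute counts and flags any minute whose volume jumps past a multiple of the previous one (the threshold factor is an arbitrary assumption).

```python
from collections import Counter

def lines_per_minute(timestamps):
    """Bucket 'YYYY-MM-DD HH:MM:SS' timestamps into per-minute counts."""
    return dict(Counter(ts[:16] for ts in timestamps))  # truncate to the minute

def flag_jumps(counts, factor=3.0):
    """Flag minutes whose line count exceeds `factor` x the previous minute."""
    flagged = []
    minutes = sorted(counts)
    for prev, cur in zip(minutes, minutes[1:]):
        if counts[cur] > factor * counts[prev]:
            flagged.append(cur)
    return flagged
```

The point isn't this particular heuristic; it's that you can only run it, or anything smarter, over data you actually kept.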

Secure at the source

As a general principle, you want to secure data as close to its source as possible. If data is being generated on a system, you wouldn't wait until it has been transferred half way around the world before attempting to secure it. This can go as far as never letting unencrypted sensitive data even reach a hard disk.
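
That last pattern, encrypt in memory so plaintext never touches the disk, is straightforward to sketch. This assumes the third-party `cryptography` package (its Fernet recipe is a tested, authenticated scheme, in keeping with the "don't roll your own" advice above); function names here are made up for illustration.

```python
# assumes: pip install cryptography
from pathlib import Path
from cryptography.fernet import Fernet

def write_encrypted(path, plaintext, key):
    """Encrypt in memory, so only ciphertext ever reaches the disk."""
    Path(path).write_bytes(Fernet(key).encrypt(plaintext))

def read_encrypted(path, key):
    """Read ciphertext from disk and decrypt (and authenticate) in memory."""
    return Fernet(key).decrypt(Path(path).read_bytes())
```

Of course, this just moves the problem to key management; the key itself must live somewhere safer than next to the data it protects.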

Security is a user problem

On the one hand you have security, on the other you have usability. No matter how clever a security technology is, if it needs to be driven by a human then it will need to be usable. When designing a system, try to work security into it from the start so that it doesn't become a bolt-on that gets in the way of usability.

Security is everybody's responsibility

Another classic principle: security is the responsibility of everyone, from the receptionist to the CISO. Why? Well, weakest link for a start. The receptionist might not have root access on the production servers, but they are probably on the network and could be a pivot in further attacks. Their account would also make a convincing source of spear-phishing emails to internal addresses. Educate people to take an interest in doing their best to maintain security. Reward them for reporting suspicious activity (e.g. a social engineering phone call).

Remote code execution == game over

Basically, if a box has been compromised to the extent that the perpetrator has run code on it, you have to assume you have lost control of that system. Rootkits are becoming more sophisticated by the month and you probably do not have the time to fully investigate exactly what happened to the system. Investigate what you can: understand what the attackers succeeded at, what they failed at and perhaps even what their motivation was. Then rebuild the box and start from scratch. Also, don't rely on backups taken after the compromise.

You're going to get hacked

Almost finally: it's not a matter of if, but when. Work on the assumption that you are going to get hacked, and look at how it's going to happen and how you can minimize the impact on the business. Make sure you have an incident response process. The idea is to stop an attack as soon as possible, reducing the impact whilst recording data for later forensic analysis (a hardened centralised syslog server might help).
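
Getting a copy of your logs off the box is cheap insurance: an attacker who owns the machine can scrub local logs, but not what has already been shipped elsewhere. A minimal sketch using Python's stdlib syslog handler (the hostname is a placeholder; in production you'd want TCP or TLS rather than fire-and-forget UDP):

```python
import logging
import logging.handlers

def forensic_logger(host="loghost", port=514):
    """Send a copy of security events to a remote, hardened syslog server.

    UDP syslog is lossy and unauthenticated; it's shown here only because
    it's the simplest possible off-box pipeline.
    """
    logger = logging.getLogger("security")
    handler = logging.handlers.SysLogHandler(address=(host, port))
    handler.setFormatter(
        logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s"))
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    return logger
```

The same logger can double as the timeline for your incident response process, provided the collector itself is hardened and its clocks are synchronised.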


This list isn't complete. I will add to it over time.
