Thursday, July 31, 2008

Security through obscurity - is it useless?

For a few weeks now I've been thinking about security through obscurity (STO). Common wisdom says it's a bad way to build security into anything. But this doesn't necessarily have to be true, as I'll explain in a moment. What made me write this post is that a similar comment about the usefulness of STO was made in an article by Matt Bishop in the IEEE Security & Privacy journal (About Penetration Testing, November/December 2007, pp. 84-87). He notes that:

Contrary to widespread opinion, this defense [STO] is valid, providing that it’s used with several defensive mechanisms (“defense in depth”). In this way, the attacker must still overcome other defenses after discovering the information. That said, the conventional wisdom is correct in that hiding information should never be the only defensive mechanism.
His note goes right to the point. To explain it, I'll first describe what STO is and why it is problematic. Then I'll explain what security actually is, and finally how, in this context, STO can actually be useful.

STO is the principle that you are secure as long as the attacker doesn't know how you protect yourself. For example, if you invent a new crypto algorithm and don't tell anyone how it works, you might believe it is more secure because of that secrecy. Instead of a crypto algorithm, you can take almost anything else; a communication protocol would be a good example. The problem with this approach is that, historically, such secret algorithms and protocols were usually very poorly designed, so the moment someone reverse engineered them, he was able to break in. Now, imagine for a moment that the secret algorithm is actually AES. Would its discovery mean that STO is bad? I suppose not, and so should you, but let us first see what security is.
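As a small illustration (mine, not from Bishop's article): with a vetted cipher such as AES-GCM, the algorithm can be public knowledge while the key stays secret, so disclosing the algorithm alone doesn't give the attacker a way in. A minimal sketch using Python's cryptography package:

```python
# Minimal sketch: the algorithm (AES-GCM) is public, the key is the real secret.
# Requires the third-party "cryptography" package.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # the only thing that must stay secret
aesgcm = AESGCM(key)

nonce = os.urandom(12)                      # unique per message, not secret
ciphertext = aesgcm.encrypt(nonce, b"the payload", None)

# Even after the "obscure" part (which cipher is used) is revealed,
# the attacker still faces the key itself as a separate line of defense.
assert aesgcm.decrypt(nonce, ciphertext, None) == b"the payload"
```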

Security is a complex topic, and I believe we could discuss it for days without reaching its true definition. But one key point about security is that there is no such thing as perfect security. In any real-world situation you are always vulnerable to some degree. So, to be secure actually means that it is too hard for the attacker to break in. And an attacker doesn't attack from some void; he has to have some information. The more information the attacker has about his target, the more likely he is to succeed.

Now, how does this go along with STO? Imagine two implementations, completely identical, apart from the first one being kept secret. With the first, the attacker has to find information about the implementation before he can try an attack, while with the second he can start attacking immediately.

So, STO can make security better, but with precautions. First, it must not be the only layer of protection, i.e. it must not be used to hide a bad algorithm, protocol or implementation. Second, you have to assume that sooner or later someone will reverse engineer your secret; how soon depends on how popular your implementation is.

To conclude, STO can help make security better, but only if used with caution. What you can be almost certain of is that if you go off and invent a new crypto algorithm, a new protocol, or something similar, you'll make an error that leaves the design, as well as the implementation, very weak! Thus, this way of using STO might be useful only for the biggest players with plenty of resources and skills, like e.g. the NSA. :)

Sunday, July 13, 2008

A critique of dshield, honeypots, network telescopes and such...

To start, it's not that those mechanisms are bad, but what they present is only part of the whole picture. Namely, they give a picture of opportunistic attacks. In other words, they monitor the behavior of automated tools, script kiddies and similar attackers, the ones that do not target specific victims. The consequence is that not much sophistication is needed in such cases: if you cannot compromise one target, you don't waste your time but move on to the next potential victim.

Why do they only capture opportunistic attacks? Simple: targeted attacks go after victims with some value, while honeypots/honeynets and network telescopes work by using unallocated IP address space, so there is no value in those addresses. What would be interesting is to see attacks on high-profile targets, and on the surrounding addresses!

As for dshield, which might collect logs from some high-profile site, the data collected is too simple to make any judgements about the attackers' sophistication. What's more, because the data is anonymized, this information is lost! Honeypots, on the other hand, do allow such analysis, but their data is not collected from high-profile sites.

In conclusion, it would be useful to analyse data about attacks on popular sites, or from honeypots placed in the same address range as those interesting sites. Maybe even a combination of those two approaches would be interesting to analyse.

That's it. Here are some links:

dshield
honeynets

Sunday, July 6, 2008

Reputations for ISP protection

Several serious problems this year made me think about the security of the Internet as a whole. Those particular problems were caused by misconfigurations in the BGP routers of different Internet providers. The real problem is that there are too many players on the Internet that are treated equally even though they are not equal. This causes all sorts of problems, and it is hard to expect that they will be solved any time soon.

The Internet, at the level of autonomous systems, is a kind of peer-to-peer network, and similar problems in those networks are solved using reputations. So it's natural to try to apply a similar concept to the Internet. And indeed, there are a few papers discussing the use of reputations on the Internet. Still, there are at least two problems with them. The first is that they require at least several players to deploy them, even more if they are going to be useful at all. The second is that they are usually restricted in scope, e.g. they try to solve only some subset of BGP security problems.

The solution I envision assumes that ISPs differ in quality and that each ISP's quality can be determined by measuring its behavior. Based on those measurements, all the ISPs are ranked, and this ranking is then used to penalize misbehaving ISPs. The penalization is done by using DiffServ to lower the priority of their traffic: when some router's queues start filling up, packets are dropped, but first those of the worst ISPs. This can be extended further, as every decision made can take the trustworthiness of the ISP in question into account. E.g., when calculating BGP paths, the trustworthiness of an AS path can be determined and taken into account when setting up routes. Furthermore, all the IDSs and firewalls can have a special set of rules and/or lower thresholds for the more problematic traffic. I believe the possibilities are endless. It should be noted that the system is envisioned to be deployed by a single ISP in some kind of a trust server; that ISP will monitor other ISPs and appropriately modulate the traffic entering its network!
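To make the marking step more concrete, here is a minimal sketch of how a trust server might translate a neighbour's reputation score into a DiffServ codepoint at the border routers. The reputation scale (0.0 worst, 1.0 best), the thresholds and the tier names are my assumptions for illustration; only the AF codepoint values come from RFC 2597.

```python
# Hypothetical mapping from an ISP's reputation score to a DiffServ marking.
# AF (Assured Forwarding) classes with increasing drop precedence for less
# trustworthy sources; DSCP values per RFC 2597.
DSCP_BY_TIER = {
    "trusted":     0b001010,  # AF11 - low drop precedence
    "suspect":     0b001100,  # AF12 - medium drop precedence
    "misbehaving": 0b001110,  # AF13 - high drop precedence, dropped first
}

def dscp_for_reputation(reputation: float) -> int:
    """Map a source AS's reputation score onto a DSCP codepoint (assumed thresholds)."""
    if reputation >= 0.8:
        return DSCP_BY_TIER["trusted"]
    if reputation >= 0.4:
        return DSCP_BY_TIER["suspect"]
    return DSCP_BY_TIER["misbehaving"]

# Example: traffic whose source maps to an AS with reputation 0.3 would be
# re-marked AF13 and be the first to go when queues fill up.
print(bin(dscp_for_reputation(0.3)))
```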

In time, when this system is deployed by more and more ISPs (well, I should rather say IF :)), there will be additional benefits. First, communication between the trust servers of different ISPs could be established in order to exchange recommendations (as is already proposed in one paper). But the biggest benefit could be the incentive for ISPs to start thinking about the security of the Internet, their own security and the security of their customers. If they don't, their traffic and their services will get lower priority on the Internet, so their service will be worse than that of their competitors, which will reflect on income!

Of course, it's not as easy as it might seem at first glance. There are a number of problems that have to be solved, starting with the first and most basic one: how practical/useful is this really for network operators? Then there is the problem of how exactly to calculate the reputation. And once the reputation is determined, how will routers mark the packets? They would have to match each packet by source address in order to determine the DS codepoint, but routers are already overloaded and this could prove unfeasible.
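On the question of how to calculate the reputation itself, one possible (purely illustrative) approach is an exponentially weighted moving average over the misbehaviour observed in each measurement interval. The event types, penalty weights and decay factor below are assumptions of mine, not results from any measurement:

```python
# Sketch of one way a trust server could compute per-AS reputation:
# an exponentially weighted moving average over periodic observations
# (e.g. bogus prefix announcements, spam complaints, scanning reports).
from dataclasses import dataclass

PENALTY = {
    "bogus_prefix_announcement": 0.5,
    "spam_complaint":            0.1,
    "scanning_report":           0.2,
}

@dataclass
class AsReputation:
    score: float = 1.0   # start fully trusted
    decay: float = 0.9   # weight given to history vs. the newest interval

    def update(self, events: list[str]) -> float:
        # Penalty accumulated in the current interval, capped at 1.0.
        interval_penalty = min(1.0, sum(PENALTY.get(e, 0.0) for e in events))
        interval_score = 1.0 - interval_penalty
        self.score = self.decay * self.score + (1 - self.decay) * interval_score
        return self.score

rep = AsReputation()
print(rep.update(["spam_complaint"]))                 # small dip
print(rep.update(["bogus_prefix_announcement"] * 2))  # sharp drop
```

The same score could then feed the DSCP mapping sketched in the previous post section, so marking and reputation stay decoupled.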

I started to write a paper that I planned to submit to HotNets08, but I'm not certain I'm going to make it before the deadline as I have some other, higher priority work to do. The primary reason for submitting the paper is to get the feedback that is necessary to continue developing this idea. But maybe I'll get some feedback from this post, who knows? :)

Update, December 29, 2008: I missed the deadline because of an omission, but the paper is available on my homepage, in the work-in-progress section of the publications page. Maybe I'll work on it a bit more and send it to some relevant conference next year. Any suggestions or opinions about that?

About Me

scientist, consultant, security specialist, networking guy, system administrator, philosopher ;)
