Why Investing in IT Security is Bad for Your Business: A Model

Consider a simple economic model. There are N companies that make doodahs. I am going to make two assumptions about the doodah market.

1. The market for doodahs is very competitive, so the profit margins are thin - a doodah maker that has a higher cost of production quickly goes out of business. This is not a very strong assumption - most modern markets are like this.

2. The doodah market is very sensitive to security. If a doodah maker gets hacked, nobody will ever buy doodahs from that maker again, and he will go out of business. This is a pretty strong assumption - in the real world a security breach usually does not put a company out of business. Still, I am going to prove that even under this strong assumption, it makes no business sense for doodah makers to invest in security.

Suppose that the probability of a doodah maker getting hacked in one year is p1, with p1 less than 1. Suppose that some doodah makers (exactly M of them, actually) decide to implement security measures that reduce the probability to p2<p1. Those measures cost each doodah maker Q dollars.

Now let's see what happens at the end of year 1:

M companies implemented the security measures (I will call them group A) and N-M did not (those are group B). In group A, M*p2 vendors nevertheless got hacked and went out of business, while M*(1-p2) weren't hacked (let's call them group A1). In group B, (N-M)*p1 were hacked and went out of business, and (N-M)*(1-p1) weren't hacked and stayed in business (let's call them group B1).

So groups A1 and B1 stayed in business. All companies in A1 have implemented the security measures, which has raised their cost of production by Q compared to B1. So they can no longer compete with B1 and go out of business too. The only companies left in the doodah business now are B1 - those that never implemented any security measures.

Let me repeat: after the first year, the only companies left in the doodah business are those that did not implement any security measures.
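The year-1 head-count above can be written down directly. Here is a minimal sketch; the parameter values for N, M, p1 and p2 are illustrative assumptions, not taken from the model itself:

```python
# Expected year-1 survivor counts, straight from the model:
#   A1 = M * (1 - p2)        secured companies that were not hacked
#   B1 = (N - M) * (1 - p1)  unsecured companies that were not hacked
# Illustrative numbers (assumptions for the example only):
N, M = 1000, 300     # doodah makers in total / those investing in security
p1, p2 = 0.20, 0.05  # yearly hack probability without / with security

A1 = M * (1 - p2)
B1 = (N - M) * (1 - p1)
print(round(A1), "secured survivors (group A1, now carrying extra cost Q)")
print(round(B1), "unsecured survivors (group B1)")
```

Both groups survive the hacking round, but every A1 survivor now produces doodahs at a cost Q higher than every B1 survivor - which, under assumption 1, is fatal.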

Now, how does this theory apply to the real world? Very well, actually.

First, let's look at the model parameters and some border cases. The most important variables here are p1, Q and M.

If p1 (the probability of compromise) is nearly equal to 1, nearly all companies in group B get hacked and go out of business. In that case, group A1 does not have much competition from them, and stays in business despite its increased costs. This explains why everybody buys firewalls (if you don't have a firewall in front of your company network, the probability of a compromise is close to 1), and why everybody patches their Internet-facing systems now, but never did before Internet worms became common. If p1 is significantly less than 1, the model described above is valid.

Q is also important. If the cost of security is negligible compared to other variations in costs, it will not affect profitability. That's why many large companies have a security officer, while giving him/her no authority or budget he/she can control. Having one more person on the payroll does not make a big Q.

Finally, if M (the number of companies implementing security measures) is close to N, for one reason or another, there are not enough surviving companies in group B to undercut the group A survivors and put them out of business. If "everybody does it", investing in security does not put a company out of business.
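These border cases can be seen by plugging numbers into the survivor counts. A minimal sketch, with all parameter values illustrative assumptions:

```python
def survivors(n, m, p1, p2):
    """Expected year-1 survivors: (secured A1, unsecured B1)."""
    return m * (1 - p2), (n - m) * (1 - p1)

N = 1000  # illustrative market size

# Baseline: moderate p1 -- plenty of cheap B1 survivors to undercut A1.
a1, b1 = survivors(N, 300, p1=0.20, p2=0.05)
print(f"baseline: A1={round(a1)}  B1={round(b1)}")

# Border case p1 -> 1 (e.g. no firewall): group B is nearly wiped out,
# so A1 survives despite its extra cost Q.
a1, b1 = survivors(N, 300, p1=0.99, p2=0.05)
print(f"p1 -> 1:  A1={round(a1)}  B1={round(b1)}")

# Border case M -> N ("everybody does it"): too few B1 survivors are
# left to undercut the secured majority.
a1, b1 = survivors(N, 950, p1=0.20, p2=0.05)
print(f"M -> N:   A1={round(a1)}  B1={round(b1)}")

# (The Q border case is not about head-counts: if Q is negligible
# relative to other cost variation, A1's price disadvantage vanishes.)
```

In both border cases the cheap unsecured pool becomes too small to drive the secured survivors out, which is exactly when investing in security stops being a losing move.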

This model also explains why investments in web application security are low compared to investments in firewalls. The probability of a given web application getting hacked in a year is considerably less than 1, while the probability of an unfirewalled network getting hacked in a year (most likely by worms) closely approaches 1. A special case here is companies with high web visibility or those attracting the attention of the wrong people (like HBGary, for example), where p1 suddenly becomes close to 1.

There is another very important special case in this theory. If you are a client of Gremwell, this theory does not apply to you at all. It is very important to remember that.


Last I checked, SQLi and RFI botnets were high-impact and common security incidents.

Verizon, Trustwave, etc. all list web apps as the numero uno attack vector.

Yes, SQLi and RFI botnets are common. However, if you are running a custom-made web app (which is still the majority case), it is unlikely to get exploited by an RFI bot, even if it is vulnerable to RFI. Equally, it will take months or even years before somebody bothers to find and exploit an SQL injection in that app. Just try it - make a silly PHP page with one named parameter vulnerable to SQLi or RFI, put it on the Internet, and see how long it takes to get exploited. Alternatively, put an unpatched Windows XP box on the net and wait till it's compromised. It would be months or years in the first case, and days or weeks at most in the second.

Yes, web apps are the number one attack vector - the number one successful attack vector. That's because everybody has firewalls, but almost nobody bothers to secure their web apps. You are comparing the probability before any defence is implemented (for web apps) with the probability after defence is implemented (for network attacks).