Most corporations determine their network security posture through risk management. One common definition is Risk = Threat × Vulnerability × Asset Value. They assign values, calculate the risk, and defend accordingly. I have been thinking about this and wonder whether it is really the best way to determine the security level of our networks.
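The formula above is simple enough to write down directly. Here is a minimal sketch; the 0-to-10 scales and the example scores are my own illustrative assumptions, not part of any standard.

```python
def risk(threat: float, vulnerability: float, asset_value: float) -> float:
    """The classic formula: Risk = Threat x Vulnerability x Asset Value.

    All three inputs are assumed to be on a 0-10 scale, so the result
    ranges from 0 to 1000. The scale itself is an arbitrary choice.
    """
    return threat * vulnerability * asset_value


# Hypothetical scores for a public-facing web server: high threat,
# moderately patched, holding business-critical data.
print(risk(threat=8, vulnerability=5, asset_value=9))  # 360
```

Notice that the formula says nothing about *who* assigns the numbers, which is exactly the weakness discussed below.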
In software development, it has been found that, on average, every 1,000 lines of code in a released product contain roughly 0.5 bugs. So a program like Vista, with about 50 million lines of code, could contain about 25,000 coding bugs. It is very straightforward: we have this many lines of code, therefore this is our risk. It has nothing to do with what the programmers think; it is a straight numbers game. The more lines of code, the greater the risk. Period.
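That back-of-the-envelope arithmetic looks like this, using the 0.5-bugs-per-1,000-lines figure from the paragraph above (defect densities vary widely in practice; this rate is the article's working assumption):

```python
# Defect density assumed in the article: 0.5 bugs per 1,000 shipped lines.
BUGS_PER_KLOC = 0.5


def estimated_bugs(lines_of_code: int) -> float:
    """Expected bug count from raw size alone: LOC / 1000 * density."""
    return lines_of_code / 1000 * BUGS_PER_KLOC


# Vista at roughly 50 million lines of code:
print(estimated_bugs(50_000_000))  # 25000.0
```

The appeal of this kind of estimate is that it is purely mechanical: no opinions about which code matters, just a count.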
The problem with risk as defined at the beginning of this article is that it depends on your point of view. People assign value to items as they see fit, not necessarily as an attacker would. For example, a research company may be studying the mating habits of the Alaskan tsetse fly. An Alaskan tsetse fly would truly be a rarity indeed, so the researchers tag the database as a high-risk item and put their time, effort, and energy into defending that server.
Their other server, used only for messaging, connects directly to a college campus that does a lot of military research. But because it is only used for messaging, it gets a low-risk tag. Chinese hackers who don't give a rip about the Alaskan tsetse fly (no offense intended to the assumedly important tsetse fly research) would be very interested in that unsecured server connecting into the campus. The researchers' perception of risk is obscured by their own belief about what needs protecting.
I believe a new formula should be used to determine computer network security. We could call it "Hackability." It would take into account the number of connected users, the number of machines, the number of internet and wireless connections, the installed security devices, the security consciousness of users and staff, and, most importantly, the external importance of the work being done.
So according to "Hackability," a small office with no outside connections, no internet or e-mail, doing research on the Alaskan tsetse fly with three machines run by three computer security experts would have a very low hackability rating. A network with several thousand or more connected users; wi-fi, satellite, and multiple internet connections; thousands of servers including web, e-mail, database, and SharePoint services; doing work on top-secret military projects; and staffed by workers and IT staff with low security consciousness would have a very high hackability rating.
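To make the idea concrete, here is a toy sketch of such a score. Every weight, scale, and field name below is an assumption I am inventing for illustration; nothing here is a calibrated or standard metric.

```python
import math
from dataclasses import dataclass


@dataclass
class Network:
    users: int                 # connected users
    machines: int              # workstations and servers
    external_links: int        # internet, wi-fi, satellite connections
    security_devices: int      # firewalls, IDS, and the like
    staff_awareness: float     # 0.0 (oblivious) to 1.0 (security experts)
    external_interest: float   # 0.0 (nobody cares) to 1.0 (top-secret work)


def hackability(n: Network) -> float:
    """Toy 0-100 score: exposure and outside interest push the number up;
    defensive gear and an alert staff pull it down. All weights are
    illustrative guesses, not calibrated values."""
    # Size matters, but with diminishing returns, so use a log scale;
    # each external link is a direct avenue in, so it counts linearly.
    exposure = math.log10(1 + n.users + n.machines) + 2 * n.external_links
    defense = n.security_devices * n.staff_awareness
    raw = 8 * exposure * (0.2 + n.external_interest) - defense
    return max(0.0, min(100.0, raw))


# The two examples from the article:
fly_lab = Network(3, 3, 0, 1, 1.0, 0.0)
military_net = Network(5000, 3000, 4, 10, 0.2, 1.0)
print(hackability(fly_lab))       # low
print(hackability(military_net))  # high
```

The key design choice is that "external interest" multiplies exposure rather than adding to it: a wide-open network that nobody cares about still scores low, while an interesting target amplifies every avenue of exposure.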
These preconceptions of value need to move from opinion into a table of hard data. Now I just need to come up with a formula that will not offend our tsetse-fly-loving researchers. 🙂