As I mentioned in a previous post, the MBTA v. MIT scenario is extremely distasteful to me. I do believe the MIT students have a "right" to disclose the information they had. I also believe that, in disclosing it, they increase risk.
[If you are fully versed in the full-disclosure debate and already have a firm position, there's no need to read further.]
Bruce Schneier has another Wired.com column out about disclosure. Here are my comments:
- Even though it may "feel right" for security professionals to disclose as much information as possible, it provably increases risk for everyone.
- There is a much more cost-effective and appropriate way to "force" vendors to improve their code: let them respond to real incidents. Only 3-10% of vulnerabilities are ever exploited, so we are better off focusing on those that are, and the historical record shows that disclosure does not help here.
- Only a tiny fraction of all the vulnerabilities in the world are ever found, so discovery and disclosure would need to ramp up by orders of magnitude, or else it is simply security theater - there are too many other vulnerabilities for bad guys to exploit.
- Since only a tiny fraction of vulns are ever found, and there are so many available, there is no reason to believe that the good guys will find the same vulnerabilities as the bad guys unless you believe in collusion between the two. Statistically, random collisions are highly unlikely, and we would need to find substantially all of the same ones that the bad guys do in order to be effective - close to impossible.
- Every risk manager knows that the attackers' costs are a key component of the probability of compromise. Secrecy, though not perfect, keeps the cost of attack at its highest point; the threat increases with disclosure. The vulnerability level stays the same whether disclosed or not.
- In order to encourage vendors to "build security properly rather than relying on shoddy design," one must define what level of security is acceptable. If any single vulnerability is intolerable, then all software is shoddy and insecure and we should get rid of everything. If not, then we must answer the question of how many vulnerabilities are acceptable in a given program.
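The collision point above can be made concrete with a toy calculation. Under a model I'm assuming purely for illustration (the numbers N, A, and D are mine, not from any study): a codebase hides N latent vulnerabilities, attackers independently find A of them, and researchers independently find D of them, each subset chosen uniformly at random, so the overlap follows a hypergeometric distribution.

```python
from math import comb

# Illustrative parameters (my assumptions, not data): N latent
# vulnerabilities, attackers find A, researchers find D, both
# subsets drawn uniformly at random and independently.
N, A, D = 10_000, 50, 50

# Probability the two sets share at least one vulnerability:
# 1 - C(N-A, D) / C(N, D)  (one minus the hypergeometric
# "zero overlap" term).
p_any_overlap = 1 - comb(N - A, D) / comb(N, D)

# Expected number of shared vulnerabilities: A * D / N.
expected_overlap = A * D / N

print(f"P(any shared vuln)  = {p_any_overlap:.3f}")
print(f"E[shared vulns]     = {expected_overlap:.2f}")
```

Even with both sides finding 50 vulnerabilities each, the expected overlap is a quarter of a vulnerability, and the odds of even one collision are well under a coin flip - which is the statistical intuition behind the point above. The conclusion is of course only as good as the assumed independence and the assumed sizes.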
As far as I can tell, there isn't any new ground here. I would certainly welcome any new arguments against the points I've made above. I know this all gets a bit repetitive, but I think it is important to highlight the contra-case every time someone brings up the topic.