Well, a handful of people are really mad at Michal Zalewski. If you've read some of my opinions about vulnerability discovery, you may be surprised to hear that I am unfazed by his perspective. In fact, in my opinion his approach will lead to better security in the long run than that of the self-righteous "white hat" who forces charity on us while basking in the egotistical limelight.
He's just an ambivalent guy.
A guy like Zalewski would be keenly aware of the hypocrisy running rampant throughout the security world. He may even be disclosing this way simply to highlight the arbitrary nature of the "rules" in force anyway. He's the type of guy I hope will someday address the problem of security in a completely different way - say, one that doesn't require knowledge of bugs before providing protection.
I can respect somebody without pretense who does what he does "because it is there" with no interest in profiting from it. At least he is straightforward about it and doesn't suggest that he is providing any benefit to mankind or other value. It simply is what it is.
The other thing that is cool is that he doesn't go into long diatribes about the sorry state of software development or suggest in hindsight that the code could have been more secure. That is both obvious (sure, it could have been, but...) and unattainable. I suspect that if he did lament software coding, he would at least back it up with some meaningful idea of what metrics or measures might be useful as a quality indicator.
Here's the interesting thing about Zalewski's approach: if it inspires a lot of "shock and awe" in you, then you are nowhere near able to protect your environment in a reasonable manner. The fact that he didn't provide enough time for a little song and dance before publishing is pretty much what I'd expect from an attacker, too, except that instead of publishing, an attacker will be attacking your environment.
You ought to get ready for it, and then you can thank Michal Zalewski for not following the rules. I guarantee you the bad guys aren't.
the rules of responsible disclosure are not arbitrary or hypocritical - they are there to prevent (as much as possible) the supposed good guys from helping the bad guys...
sure it's possible that the bad guys already knew about the vulnerability, but it's also possible that they didn't... by trying to work with microsoft first (with full disclosure as a fall-back position) the window of exposure for this vulnerability could potentially have been closed before details were given...
by practicing full disclosure before a fix is created the researcher makes the bad guys aware of a new and still effective avenue of attack and thereby increases the risk that they'll be able to use it against people... because of those costs this kind of disclosure should only be practiced as a last resort...
Posted by: kurt wismer | April 27, 2006 at 10:21 PM
@Kurt -
1) If the time delay supported by "responsible disclosure" isn't arbitrary, what is the mathematical proof behind [whatever the time delay happens to be]?
2) If good guys wanted to prevent (themselves?) from helping the bad guys, they wouldn't disclose at all, as you imply in your comments about how the bad guys likely didn't know about Zalewski's vuln and how the window of exposure begins at disclosure.
3) You didn't actually note that the bad guys likely didn't know about Zalewski's vuln, but I know it and you know it - the chance that there was some sort of overlap between ambivalent guy research and bad guy research is very, very low.
4) The "window of exposure" for this vulnerability started when it was coded and delivered to customers, not when it was disclosed.
5) It appears you were happier before this disclosure - do you support complete secrecy like I do?
6) What are you doing about all those other vulnerabilities the bad guys currently know about and you (we) don't?
7) What exactly *is* disclosure a "last resort" to?
Btw, I like your blog (http://anti-virus-rants.blogspot.com).
Thanks for the comments,
Pete
Posted by: Pete | April 27, 2006 at 10:50 PM
1) the time delay isn't a 'rule' per se... there are guidelines about how long is a reasonable amount of time to wait for the vulnerability to be fixed... the rule simply requires that time delay to be greater than zero...
2) avoiding helping the bad guys is one of many (sometimes competing) goals... it is not the only goal a researcher needs to concern him/herself with, but it is one s/he should keep in mind...
3) i don't pretend to know what the bad guys know on their own, i'm only speaking about the information that supposed good guys drop in their laps... you're calling him an ambivalent guy (aka greyhat) but that contradicts the very notion of defending him... if what he did was actually good instead of bad then he was acting as a good guy... somehow though, exposing explorer users to greater risk doesn't seem particularly good to me...
4) the fact that responsible disclosure hopes to close the window of exposure prior to public release of the information makes the opening of the window of exposure prior to public release of the information a given...
5) i support responsible disclosure... try to work with the vendor first, if that fails then release the info publicly... once a fix has been created then the info should also be released publicly so that the public knows why it should apply the fix...
6) the issue of vulnerabilities that the bad guys know about and we don't can be addressed by a number of different means... one is to try and find those vulnerabilities independently (what's so special about the bad guys' ability to find vulnerabilities?)... another is to try to detect active exploitation of those vulnerabilities generically (honeypots, various autonomous agents, etc)... still another is to develop assets in the bad guy camp who'll leak the info to you...
7) full disclosure prior to development of a fix is a last resort measure for trying to force the vendor to address the security concerns for the vulnerability in question... it places market pressure on the vendor by exposing their flaws to public scrutiny, but it also exposes their customers to increased risk...
Posted by: kurt wismer | April 28, 2006 at 10:58 AM
with respect to 5) - my answer was specific to the fixable type of vulnerability... different types of vulnerabilities have different disclosure considerations...
Posted by: kurt wismer | April 28, 2006 at 11:22 AM
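To make point 6 in the comment above a bit more concrete, here is a minimal, hypothetical sketch of the "detect active exploitation generically" idea - a listener bound to an otherwise unused port (the port number is just an assumption) that treats every connection as suspect and logs it. A real honeypot or autonomous agent would do far more analysis, but the principle is the same.

# Minimal honeypot-style sketch: nothing legitimate should ever connect to
# this port, so every connection attempt is logged as suspect.
import socket
import datetime

LISTEN_PORT = 8081  # hypothetical unused port; pick one nothing in your environment uses

def run_honeypot(port: int = LISTEN_PORT) -> None:
    """Accept connections on an unused port and log who knocked and what they sent."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", port))
        srv.listen(5)
        while True:
            conn, (addr, _) = srv.accept()
            with conn:
                conn.settimeout(2.0)
                try:
                    payload = conn.recv(4096)
                except socket.timeout:
                    payload = b""
                # Log the source address and the first bytes of whatever was sent.
                print(f"{datetime.datetime.now().isoformat()} probe from {addr}: {payload[:80]!r}")

if __name__ == "__main__":
    run_honeypot()

Run something like this on a spare box that receives no legitimate traffic, and anything that shows up in the log is worth a closer look.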
Good points, especially this one: "if it inspires a lot of 'shock and awe' in you, then you are nowhere near able to protect your environment in a reasonable manner."
Posted by: Anton Chuvakin | April 28, 2006 at 03:13 PM
Zalewski has stated that he is essentially targeting Microsoft for this type of disclosure. These guys refuse to accept any responsibility for the harm they are causing. What's more, I am pretty sure that there are more than a handful of people who disagree with this philosophy.
Posted by: trbilbro | May 01, 2006 at 02:20 PM