So, I don't normally feel compelled to critique a fellow analyst's comments, but these are so generic and so wrong that I feel the need. Assuming they are accurately attributed, they are completely misguided:
From InformationWeek - Analyst Dings Security Vendors For Exploiting Apple Flaws, a story about Rob Enderle calling out security vendors for furthering their business interests to the detriment of security:
Pescatore noted the history of publicizing security vulnerabilities, and how disclosing them has actually made users safer. "Before 2000, no one talked about vulnerabilities, and because of that, no one really patched. But then in 2001, Code Red and Nimda creamed massive amounts of Web servers and PCs because the bad guys found the vulnerabilities."
Huh? I don't recall Code Red or Nimda being "undercover exploits" at all - they came after the vulnerabilities were disclosed and patches were issued. This supports Enderle's argument better than Pescatore's.
That's why vendors -- Microsoft in particular -- have moved to regular patching schedules and implemented automatic updating tools. In Pescatore's view, Microsoft didn't do it willingly, but was forced into the change by up-in-arms users. "Because no one talked about vulnerabilities, vendors took their time coming up with a patch. And when they did, no one implemented the patches."
Smokescreen alert. There is an implied link here between more patching and better security, and it doesn't necessarily hold, for two reasons: 1) patching only works on known vulnerabilities, not undercover ones, and 2) patching can cause its own problems, since more patching brings more complexity.
"Microsoft would have been perfectly content not to have to issue a patch for the WMF [Windows Metafile] bug," Pescatore said, "if news about it hadn't been made public." But then the exploit, which was found by hackers, not legitimate researchers, could have attacked users that much longer.
"The pressure has to be on the vendors to make their software better," Pescatore said. "If we all shut up about vulnerabilities, yes, a lot these attacks wouldn't happen. But when one did, it would be ten times worse because we wouldn't be prepared."
Copout alert. Better? Well, how much better is "better enough"? Was WMF ten times worse? If so, I want it every time! Were we prepared? Certainly, enterprises were. Certainly, there were products that blocked it without much trouble. Somebody found it; a security company apparently knew about it and wasn't telling. I was prepared. We would be more prepared if it happened more frequently. I could hold my daughter's hand every time she crossed the street, or I could teach her how to cross the street safely on her own. Our current state of vulnerability management has simply prolonged the childhood of the average user.
These attacks HAVE happened - at least ten times in the past ten years - and they aren't worse. It wouldn't take much to determine what the true risks are and protect against them if we needed to. But we'll never do it as long as we have someone to fall back on so we can remain kids forever. And here's the thing: we think we can control the threat environment even when we can't, and we aren't preparing anyone for attacks, even though they are as likely today as they would be if we never disclosed another vulnerability again.