12 security suites tested and 12 security suites fail

In September of 2007, I wrote “The truth about viruses,” pointing out that the ubiquitous danger of viruses exists largely because of negligence. When the vulnerabilities that common viruses exploit never get fixed, and those viruses are guarded against only case by case via signature-based and heuristic detection, new viruses that bypass detection and still affect your computer can be created by the hundreds and thousands with minimal effort. In short, much of the reason for the ubiquitous threat of viruses is the tendency of software vendors to ignore virus-exploitable vulnerabilities and expect antivirus vendors to pick up the slack.

This is not a problem particular to viruses. In fact, a good antivirus application can protect you from viruses reasonably well most of the time. Those of us who deal with security issues professionally, or even regularly as a hobby, are understandably leery of being “reasonably well protected” from something “most of the time.” Still, antivirus software is clearly not a complete failure as a Band-Aid over a sucking chest wound.
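To see why case-by-case signature matching is so easy to defeat, consider this minimal sketch. It is not any vendor's actual engine; the signature bytes and virus name are made up for illustration. The point is that matching exact byte patterns means a trivially modified variant slips past the scanner while behaving the same way:

```python
# Hypothetical signature database: exact byte patterns mapped to names.
# Real engines are more sophisticated, but the core weakness is the same.
SIGNATURES = {
    b"\xde\xad\xbe\xef": "Example.Virus.A",  # made-up signature
}

def scan(data):
    """Return the name of the first matching signature, or None."""
    for sig, name in SIGNATURES.items():
        if sig in data:
            return name
    return None

original = b"payload \xde\xad\xbe\xef trailer"
variant = b"payload \xde\xad\xbe\xee trailer"  # one byte changed

print(scan(original))  # Example.Virus.A -- detected
print(scan(variant))   # None -- evades the signature entirely
```

Every such variant forces the antivirus vendor to ship a new signature, while the underlying vulnerability the malware exploits goes unpatched.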

The same problem exists outside of virus-exploitable vulnerabilities, however, and is not nearly so well addressed. As Gregg Keizer reports in “Top security suites fail exploit tests,” integrated security suites for desktop computers fare much worse across the range of threats against which they’re expected to protect you.

I’ve prepared a couple of simple bar graphs to give you an idea of how well these products protect you against virus threats and active attacks. I cut the number of compared vendors down to 10 in each case, because that’s how many vendors overlapped between the two shootouts. In both graphs, green shows threat coverage above 50%, yellow shows coverage above 25% and up to 50%, and red shows coverage of 25% or lower. In both graphs, vendors are ranked from best performing to worst.
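The coloring rule above can be stated as a one-liner; this sketch (with hypothetical percentages, not the actual test scores) just makes the thresholds explicit:

```python
def coverage_color(pct):
    """Map a threat-coverage percentage to the graph's colors:
    green above 50%, yellow above 25% up to 50%, red at 25% or below."""
    if pct > 50:
        return "green"
    if pct > 25:
        return "yellow"
    return "red"

# Illustrative values only:
print(coverage_color(97.1))  # green
print(coverage_color(40.0))  # yellow
print(coverage_color(20.5))  # red
```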

The first example is from the June 2008 Virus.gr antivirus software shootout, and in each case where a single vendor had more than one product in the shootout, I counted only the best-performing product:

Antivirus Performance by Vendor

The second example is from Secunia’s [PDF] October 2008 Internet Security Suite test:

Security Suite Performance by Vendor

An antivirus application is expected to do well at protecting against viruses. While I wouldn’t consider anything lower than, say, 98% coverage to qualify as doing sufficiently “well” to satisfy me personally, at least nobody came in under the 50% wire.

An integrated Internet security suite is expected to protect against active threats; it should include an effective firewall, rootkit detection, active vulnerability defense, and at least some rudimentary kind of real-time intrusion detection. Sadly, you may have noticed I didn’t get to use my virtual yellow highlighter at all in the security suite graph: everything came in below 25%. Even though the best performer was significantly better than second place against the vulnerability proofs of concept that made up the testing gauntlet, it was nowhere near good enough to matter.

The problem here is multifarious. A few key points include the following:

  • Notice that there isn’t much relationship between the best performers on one graph and the best on the other. The lesson here is that companies good at one thing aren’t necessarily good at another. The best performer on the AV graph was tied for third worst on the security suite graph; the best performer on the security suite graph was fourth worst on the AV graph. The fact that a vendor seems to do well at AV in no way suggests you should entrust that vendor’s software with all your security needs.
  • As pointed out in Secunia’s test results document, vulnerability defense coverage is abysmal in every case — more so in some cases than in others. When corporate software vendors do not quickly and effectively patch vulnerabilities (which is almost always the case) and users do not test and apply security patches in a timely manner (which is usually the case), vendors for security software should definitely look into picking up some of the slack. Can you imagine the marketing benefits if you were the only vendor to achieve more than 50% coverage in Secunia’s test — especially with second place, an order of magnitude better than third, scoring less than half as well as you?
  • Security software vendors shouldn’t even have a vulnerability defense market to target. Software vendors should be patching vulnerabilities more quickly than security software vendors can develop active defenses for them, and software update systems should be designed to make it at least as easy to find and apply patches without breaking functionality as to fail to update. This is a huge challenge for closed source software vendors, of course, but that doesn’t mean it’s not a challenge that needs to be met.

If you want effective defense, I can offer three simple pieces of advice, one to address each of the problems above:

  • Build your defenses a piece at a time, selecting the best security options for each part of the whole. Don’t trust a single vendor to get everything right, because, frankly, it probably won’t.
  • Track vulnerabilities yourself. Choose software that offers good mechanisms for doing so, and protect yourself to the best of your ability.
  • Make your software selections at least in part based on good vulnerability response time (and don’t fall into the trap of simply counting discovered vulnerabilities).
