Antivirus platforms are constantly evolving, and the fact that a product sits at the top of the desktop security market today does not guarantee it will stay there.
So how do people decide which antivirus programs are best? Visit a review site and you will find this year's "leading" software, ranked by some test. But if you have ever tried to use one of these comparison charts to choose a platform, you will notice that they rarely set one vendor clearly above the others. That means either that most of the big players are essentially equivalent, or that something is wrong with the way the tests are conducted.
The question then is: what kind of test are we talking about? For readers who want to dig deeper, the Anti-Malware Testing Standards Organization (AMTSO) publishes resources on the subject.
AMTSO has been discussing this issue quite seriously over the last two years and has begun publishing new testing standards. Until now, tests have thrown a large and impressive number of threat samples at a product, between five hundred thousand and one million, while isolating a single layer of the program under test to see how well it detects an incoming threat.
That may sound reasonable, but two problems come to mind. First, if one layer of a software package is tested in isolation, does that tell us much about how the software actually behaves on a consumer's computer? Second, does launching hundreds of thousands of threats at a system really simulate today's world of online security?
On the first question, we can only assume that this technique says little about how the software will behave on a consumer's machine. For one thing, isolating a single layer bypasses security measures built into the other layers: a threat the antivirus layer "misses" may be one it was never designed to detect, because a different layer of the suite is built to handle it. A program that distributes its defenses across multiple layers should not score poorly in a test simply because of that architecture.
In addition, reducing the software to a single layer greatly distorts the speed results. Imagine that Software A is part of a large suite with many features, while Software B is a standalone program with no extras. If Software A, running only its antivirus layer, detects 98% of all viruses faster than Software B, which detects the same number, Software A will be declared the better product. As a consumer, I see that result and buy Software A, only to be surprised by an obvious drag on system resources. It does not run as fast as the test said it would, and the test method is why: of course the product is slower when all of its components are running at once.
To answer the second question posed above: in reality, a user will never face that number of threats in a short period of time. These test procedures have been an industry standard for twenty years, and it goes without saying that much has changed since then. Perhaps the most relevant change is the massive use of social networking sites and of software downloaded from the Internet.
Both of these are real-time, highly isolated threats, not the Armageddon-style scenarios antivirus software is tested against. What if a message appeared on Facebook with a link carrying some form of malware? We do not need our software to protect us from a million threats; we need it to protect us from that one.
Two percent of a million is 20,000, and that is 20,000 possible threats that could slip through. If I click on the link, the odds are good that my antivirus will catch it. But what if it does not? One chance in one is a greater threat than one chance in a million. The point is that we simply cannot know. For this reason, the software should flag this kind of message, the kind that is part of our everyday online experience, before we click, and advise us on the best course of action. An antivirus that does this, and does it accurately, is far more valuable than one that merely scores well on how few of a million lab threats go unnoticed.
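The arithmetic above can be sketched in a few lines. This is purely illustrative, the 98% rate and one-million sample count are the hypothetical figures from this article, not results from any real test:

```python
# Hypothetical figures from the article: a scanner with a 98% lab
# detection rate run against one million threat samples.

def missed_threats(samples: int, detection_rate: float) -> int:
    """Expected number of samples that slip past the scanner."""
    return round(samples * (1 - detection_rate))

# The lab scenario: 2% of a million samples get through.
print(missed_threats(1_000_000, 0.98))  # 20000

# The real-world scenario is a single link: the scanner either catches
# that one threat or it does not. An "expected misses" average of zero
# says nothing about the one link you actually clicked.
print(missed_threats(1, 0.98))  # 0
```

The contrast between the two calls is the article's point: a per-sample average computed over a million lab threats tells you little about the all-or-nothing outcome of one real click.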