[Discussion] - Antimalware testing is hard, disputing a flawed test is even harder




Userlevel 7
Badge +63
@ wrote:
The problem we have is that no matter how hard you try to convince some clients that X product is good, they still want to see the scores, stats, and standardized test results. It doesn't help that many of those testing organizations will not test your product in the way it was designed to work. Good article.
So very true! I have been using WSA since the beginning. I even beta tested the first version for approximately 8 months before its release in the fall of 2011 (as the 2012 version), and I have never had an infection at any time in 7 years!
 
Cheers,
 
Daniel 😉
Userlevel 7
@ wrote:
Thanks Daniel,
You put me in a difficult position; now I have to defend the testers. The testers at AV-Test, AV-Comparatives, NSS, Virus Bulletin, and a couple of lesser-known test labs are my friends. Good friends. They are driven by the same passion to help consumers that we are. And yes, they do have to earn a living too, so there is money involved.
 
The problem I am attempting to address is the general perception (not yours) that the testers are always right. As a result there is no "appeals process" for the vendors. All of us vendors test our own products too. It's a bear.
 
There was a day when the testers would not even consider what the vendors had to say. I can't blame them, they were not treated with any respect. Over the years, primarily due to AMTSO, vendors and testers are working collaboratively to make the quality of testing better.
The testers don't always get it right, and sometimes they get it horribly wrong. What matters then is that mistakes are admitted and that the results of the test are corrected post-publication.
It is perfectly fair for you to be skeptical of the test results, but please don't extrapolate that to questioning these people's integrity. We are actually on the same side – even at the times when we know we weren’t given a fair shake.
 
I appreciate you taking the time to comment and I look forward to lots more participation and discussions with you and the community!
 
Hi RAbrams
 
Have been following this thread with some interest and without feeling the need to add my two pennies' worth...until now. ;)
 
You mention testers and their integrity, but as with any labelled 'group', whilst the majority will be honest, diligent, and act with integrity, there will always be a minority who may 'stray' (to be kind to them) from that path. Even worse, there may also be those who merely purport to be members of the 'group', or intimate as much, and who are unscrupulously looking to profit, etc.
 
The perception that Daniel exposed is a very real one in my book, because there is no clear and easy way to discern the credible from the disreputable, especially for users who may not be as well versed in the topic as the likes of Daniel and others. They get bitten, proclaim it loudly, and that casts aspersions on those of integrity, etc.
 
What would also help is if testing organisations explained their commercial links and how they make their money; some greater transparency would help cement the credibility of those organisations of integrity more widely...which would be a good thing in my book.
 
Finally, in my book, both testing organisations and vendors should actively promote their collaboration more widely and openly. The case in point was when WRSA recently fared badly in a test because the testing organisation had not installed it correctly (if my memory does not fail me here), and the testing organisation did not overtly announce (as far as I am aware) that this was the reason.
 
We in the Community had to dig around and report that, following the plethora of posts about WRSA's apparently 'bad' showing. Such episodes do nothing to (i) make those who know about these things less 'suspicious' (for want of a better word) or (ii) give confidence to the average user, who takes the test results as 'gospel', that the testing organisations can be fully 'trusted'.
 
The same goes for the scant overt recognition that the way WRSA works is different and, until recently, was not catered for by the testing organisations in terms of the way they test (I believe that some are working with Webroot to remedy that, and indeed some of that remedy may already be in place). This led to the widely held observation, by the plain stupid and by those with an axe to grind about WRSA's success, etc., that Webroot are 'scared' to put WRSA up to independent testing...and then the conspiracy theories start about WRSA's capabilities, etc.
 
A simple declaration at the start of a test result report, about why Webroot, or indeed any other mainstream AM that might be in the same position, is not included, would demonstrate partnership and go a long way to prevent the aforementioned 'disinformation' that can (i) blight an AM and (ii) cause users to doubt the integrity and usefulness of the testing organisations.
 
This is a shame, as clearly the vast majority are driven by passion, and by integrity to that passion.
 
Now, I may be in a minority here and may also be misinformed as to the lay of the land...but this is what I perceive (and I suspect that there are many more like me), though I stand to be corrected if necessary.
 
Thank you for an excellent & informative discussion topic...I hope that there will be more like this.
 
Regards, Baldrick
Userlevel 7
Badge +63
@ Another bad showing from WSA in MRG Tests: https://www.mrg-effitas.com/wp-content/uploads/2018/03/MRG-Effitas-360-Assessment_2017_Q4_wm.pdf
 
https://www.mrg-effitas.com/recent-projects/our-projects/
 
 
18 applications tested
322 In-the-Wild malware samples used
Operating System: Windows 10 x64
Browser: Edge
Real World scenario with no user initiated threat neutralization
 
But the 'after 24 hours' score looks good... or does it really? Webroot needs to Hyper Boost ENZO to classify faster! ;)
 
Thanks,
 
Daniel
Userlevel 5
Badge +9
@ wrote:
Another bad showing from WSA in MRG Tests: https://www.mrg-effitas.com/wp-content/uploads/2018/03/MRG-Effitas-360-Assessment_2017_Q4_wm.pdf
But the 'after 24 hours' score looks good... or does it really? Webroot needs to Hyper Boost ENZO to classify faster! ;)
Hi @! This is a great question. First of all, adding a turbocharger and a tank of N2O to ENZO as step one is in the works. There is also a lot more that R&D won't let me talk about, but it is very exciting to me.
 
Now to your comments. To start with, I have no information that would indicate that the results were not accurate, but there is more to assessing product effectiveness than this test reports. Focusing on ransomware is arbitrary, if defensible as a touchpoint, but because of the skew it puts on the test, the conclusion drawn cannot be that one product is broadly more effective than another. Cryptominers are perhaps the fastest-growing threat, and a good backup routine does not mitigate their damage the way it does for ransomware.
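To make the skew concrete, here is a minimal sketch (in Python, using entirely made-up detection rates for two imaginary products, not drawn from any published test) of how the ransomware share of a sample set can flip which product "wins" the aggregate score:

```python
# Hypothetical per-category detection rates for two imaginary products.
# These figures are illustrative only and not drawn from any published test.
rates = {
    "Product A": {"ransomware": 0.99, "cryptominer": 0.80},
    "Product B": {"ransomware": 0.90, "cryptominer": 0.95},
}

def aggregate_score(product, ransomware_share):
    """Overall detection rate for a sample set with the given ransomware mix."""
    return (product["ransomware"] * ransomware_share
            + product["cryptominer"] * (1 - ransomware_share))

for share in (0.9, 0.5, 0.1):  # heavily skewed toward ransomware -> away from it
    scores = {name: aggregate_score(r, share) for name, r in rates.items()}
    winner = max(scores, key=scores.get)
    summary = ", ".join(f"{name} {score:.1%}" for name, score in scores.items())
    print(f"ransomware share {share:.0%}: {summary} -> {winner} 'wins'")
```

With a 90% ransomware mix, Product A comes out ahead; at a 50% or 10% mix, Product B does. Same products, same per-category performance, different "winner"; that is the skew I mean.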

A couple of blogs from now, I will be going much deeper into exactly this aspect of test analysis.

One last comment... Anytime you see a result where products score 100%, either the scope of the test is too narrow or the sample set is too small. Nobody is 100% in the real world!
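One way to see why a perfect score proves less than it seems is the statisticians' "rule of three": if a product misses zero of n samples, the upper end of a ~95% confidence interval for its true miss rate is still roughly 3/n. A quick illustrative sketch (the sample sizes other than 322, which comes from the test above, are my own):

```python
# "Rule of three": if all n samples are detected (zero misses observed), a ~95%
# confidence interval for the true miss rate still extends up to about 3/n.
# Sample sizes are illustrative; only 322 matches the MRG test discussed above.
for n in (50, 322, 10_000):
    upper_miss_rate = 3 / n
    print(f"{n:>6} samples, 100% detected: true miss rate could still be "
          f"up to ~{upper_miss_rate:.2%}")
```

So even a flawless run over 322 samples is statistically consistent with missing nearly 1% of threats in the wild.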
 
Thanks for the lead in to a future blog!
Userlevel 5
Badge +9
Hi @, sorry for the very late reply, but thanks for your comment. The truth is that competitors work together too... we have a common enemy 😉
Userlevel 5
Badge +9
Hi Baldrick, I'm sorry for the tardy reply. You do raise some interesting points. There are some willfully dishonest "testers" out there, but (and this is my own opinion) how influential are they? How much business is any company going to lose from a 0 percent score when the tester reaches only a very tiny percentage of the world? The perception is OK as long as it doesn't bend the perception of reality. I do not believe that any of the influential test organizations are unscrupulously looking for profit.
 
I have a blog in review right now that addresses the economics of security product testing!
One of the biggest problems with test reports, even when the test itself was pretty good (none are perfect), is in the analysis. Few people think of that aspect. It isn't that they're stupid; it's just not intuitive to many people that data is not the same thing as information. Mea culpa. It wasn't until I was well into my career in this field that I was made to understand that with respect to security product testing.
 
As for conspiracy theories... I am of the belief that conspiracy theorists conspire to create conspiracy theories :-)
 
 
