Frequent Voice
CLAUDIU
Posts: 34
Registered: ‎06-12-2012
Accepted Solution

Webroot SecureAnywhere- not so good in AV comparative....

only 93% detection, in fact the last on the list

 

http://chart.av-comparatives.org/chart2.php

 

 

Any comments?

 

Thanks,

Claudiu

 

Retired Webrooter
Kit
Posts: 359
Registered: ‎01-19-2012

Re: Webroot SecureAnywhere- not so good in AV comparative....

In general, since it's the weekend:

 

The methodology by which AVC performs its tests triggers a failsafe in the system that is meant to prevent catastrophic damage when something is badly wrong on a machine, and the tests do not allow for proper handling of that failsafe.  Generally, if a machine really has that many infections, an AV shouldn't just be wiping out swaths of files, so WSA takes a much safer approach, which causes AVC to rate the response as a failure.  In a real situation, such a scenario would almost never, if ever, come up.

 

Changes have been made to the way that system responds to testing situations specifically.  However "realistic" anybody claims these tests to be, they are unfortunately nowhere close.  Those changes were implemented well after the May testing began, so they are not reflected in this series of results.

 

Honestly, it's an interesting situation, since SecureAnywhere is tuned specifically for real malware and threats in real-world situations.  This means that millions of users are well protected, and in the cases where something is missed (which happens with every security program), recovery is possible within minutes, or hours at worst, instead of the days or weeks spent waiting for a new definition download.

 

But when sanity checks say, "Wait, something is VERY wrong here; no real user could have this situation occur," and cause the failsafes to start acting cautiously, they are right: no real user had it happen, it was just a test.  So the failsafes and extra caution cause a problem with the test.

 

Since we have zero interest in leaving users unprotected from real problems, it becomes a question of how best to handle the tests without breaking reality.  The old theory security vendors used was: "If more than 200 (or whatever) viruses are detected in a directory, it's a test, so everything in the directory must be a virus, so I'll just detect it all as such."  Obviously then, when a worm replicated rapidly in c:\windows\system32\ and was detected by a definition update two weeks later, the results were catastrophic: the whole system32 directory was wiped out "because it's a test".  So you can see the problems, perhaps. :smileyhappy:
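That old vendor heuristic can be sketched in a few lines. This is a minimal illustration of the logic described above, not any vendor's actual code; the function name and the 200-file threshold are purely illustrative:

```python
DETECTION_THRESHOLD = 200  # illustrative cutoff for "this must be a test"

def legacy_test_heuristic(all_files_in_dir, detected_files):
    """Sketch of the old logic: if 'too many' detections land in one
    directory, assume it's a test corpus and flag every file there."""
    if len(detected_files) > DETECTION_THRESHOLD:
        # The dangerous branch: a fast-replicating worm in system32
        # trips the same threshold, and clean OS files get flagged too.
        return set(all_files_in_dir)
    return set(detected_files)
```

The failure mode Kit describes falls out directly: once a real worm drops more copies than the threshold, the heuristic condemns the entire directory, legitimate system files included.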


Kit - Prior Webroot Quality Assurance / Prior Webroot Escalation Engineer

JimM
Posts: 2,308
Topics: 299
Kudos: 1,320
Solutions: 395
Registered: ‎01-19-2012

Re: Webroot SecureAnywhere- not so good in AV comparative....

This question was also raised here, where one of our executive vice presidents chimed in with a similar answer.

/// JimM ///
/// Former Community Manager - Now Humble Internet Citizen///
/// Also Formerly a Technical Support Escalations Engineer ///
Community Guide
The_Seeker
Posts: 156
Registered: ‎05-15-2012

Re: Webroot SecureAnywhere- not so good in AV comparative....

I just hope Webroot doesn't tune its product to perform well in these tests, to the detriment of the product itself. These tests in no way mirror real world, average user scenarios, so should be taken with a grain of salt.
--
Windows 7 Ultimate 64-bit • WSA Antivirus • Ad Muncher • Image for Windows
Frequent Voice
CLAUDIU
Posts: 34
Registered: ‎06-12-2012

Re: Webroot SecureAnywhere- not so good in AV comparative....

[ Edited ]

Hi Jim,

 

Thank you for your answer!

 

I followed the link indicated, and I want to add a few things, if I may:

 

"We survey a large group of customers every month. In our April survey of 958 customers, 96.6% said they were likely or highly likely to recommend WSA to their friends and family "

 

While how the customers feel about WB is important (mostly from a SALES point of view), it is not relevant to the efficiency of a security product. As a customer myself, I could only test WB against EICAR; I noticed that it has a very low impact on my system, so I am a happy customer.

 

"we know exactly what we catch and exactly what we don't catch across our users ..."

 

While I understand the "we know exactly what we catch" part, I find it difficult to see how you know "what we do not catch"; if you had known what you did not catch, you would have caught it. :smileyhappy:

 

Anyway, all regular customers need a reference, whether AV-Comparatives, PC Mag, or whatever; it is a common tendency among AV vendors to claim the tests are inaccurate and to withdraw their products from these tests when they get unexpected results.

 

While I believe WB has a revolutionary approach to computer security, I will need more testing in order to be an "educated" satisfied customer.

 

Thanks,

Claudiu

 

 

Retired Webrooter
Kit
Posts: 359
Registered: ‎01-19-2012

Re: Webroot SecureAnywhere- not so good in AV comparative....

The good news is multifold. 

 

It's also recognized by most of the more highly technical end users familiar with security as a whole that this kind of testing is not always going to be the best way to evaluate things.  The tests are brute-force, for example.  That means they download 1,000 threats (often more), and a good number of those threats are "broken": literally partial files, things that can't run, and so on.  However, the AV product is expected to detect them regardless.  So most AVs have to use "cut and burn" methods to pass the tests, despite the fact that this can be highly detrimental in a real user's case.  Thankfully, as described, the tests are generally easy to detect, and real users never see the scenarios the tests present (10,000 infected files in one directory?  Really?).

 

Tests also have a tendency to do things like "install XP SP2 and no other patches, an old version of Flash, Reader, and Java, and an old IE."  While this can be an excellent way to know it's a test machine and go into hyper-paranoid mode, doing so hurts real users just to chase better test scores.  One can also consider that the people who actually run such an environment... aren't likely to know enough to get an AV either.

 

Take almost every "test" and compare it to reality; they rarely match.  A certain certification test historically expects 20 DNS entries to be added via a ten-minute, 230-click process, for example; otherwise you fail that part.  In reality, anybody who actually knows what they are doing adds the 20 entries with a one-minute, seven-click process.  It works better, faster, and more efficiently, but according to the test, it would be a fail.

 

As for what we do and don't catch, it's trivial: the way the cloud database system works means it sees every file that touches the core of the computer and anything added thereafter.  When a file is determined to be a threat, we also know exactly how long it took between the first time any one of our users anywhere worldwide saw the file and the moment every single instance of that threat was removed for our users worldwide.  We can count that in seconds in most cases, minutes in some, and hours in a few.  If any other AV could say how long it was between when a file was first seen by an endpoint and when the definitions reached every user to remove the files in question, they'd be counting in days in most cases, and weeks in some.
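The metric being described, time from first worldwide sighting to removal of the last instance, can be sketched as follows. This is an illustration of the measurement, not Webroot's actual telemetry code; the function name and timestamps are made up:

```python
from datetime import datetime, timedelta

def remediation_window(first_seen, removal_times):
    """Time from the first sighting of a file on any endpoint to the
    removal of its last remaining instance across all endpoints."""
    return max(removal_times) - first_seen

# Illustrative data: first sighting at noon, last instance gone 110 s later.
first_seen = datetime(2012, 6, 1, 12, 0, 0)
removals = [first_seen + timedelta(seconds=s) for s in (40, 75, 110)]
window = remediation_window(first_seen, removals)
```

The contrast with definition-based AV is in the units: with per-endpoint sighting and removal timestamps, the window is measurable in seconds or minutes, whereas waiting on a definition push stretches the same window to days.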

 

So what tests expect and what tests show are not the results of reality.  After all, if you go 100% by test results, then the absolute BEST security, with invariably 100% efficacy (the computer NEVER gets infected, never responds to network probes, NEVER gets broken)... is turning it off and unplugging it. :smileywink:

 

 


Kit - Prior Webroot Quality Assurance / Prior Webroot Escalation Engineer
