Solved

AV-Comparatives and Our Unique Approach


Userlevel 7
  • Administrator
  • 239 replies
Today, the most recent AV-Comparatives File Detection report was released. While the results were disappointing to see on paper, they were not surprising to those of us on the Webroot team.

To help clear up any confusion, we’d like to again share our thoughts on why our approach is difficult to compare in reports like this.

The latest AV-Comparatives report is an on-demand detection test of a specific set of files that evaluates whether or not a particular security solution has a signature for those files. Signature-based detection is not the only measure of a solution’s efficacy in protecting a user from threats, however.

While Webroot’s SecureAnywhere solution does leverage signature files in the cloud as part of its detection capabilities, those are not the primary security capabilities that the solution uses to protect users, particularly from unknown threats. While this test demonstrated that Webroot did not at test time have the most current sample set as part of its cloud-based database, it did not test the efficacy of the Webroot solution in protecting users from these threats outside of signature-based detection.

Webroot’s solutions work differently than traditional security solutions by focusing on the behavior of files that try to execute on a system, regardless of whether or not we have seen a file previously and have a signature for it. Any unknown file is monitored and its behavior journaled as it tries to execute. Once it is deemed malicious, any actions the file has taken are automatically rolled back to return the system to the last known good state, reversing only the changes that the suspicious file made. While the file is being monitored, SecureAnywhere has a collection of shields – including a Behavior Shield, a Web Threat Shield, an Identity Shield, an Offline Shield, and a Zero-Day Shield – that provide real-time protection and prevent any untrusted file from executing behaviors that put the user or their information at risk.
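The journal-and-rollback idea described above can be sketched in code. This is a minimal, hypothetical illustration in Python – the class name, journal format, and file-copy mechanics are assumptions made for clarity, not Webroot's actual implementation, which operates at the system level:

```python
# Hypothetical sketch of behavior journaling with rollback. All names
# and the journal format are illustrative, not Webroot's real design.
import shutil
import tempfile
from pathlib import Path


class BehaviorJournal:
    """Record reversible actions taken by an untrusted process so they
    can be undone if the process is later judged malicious."""

    def __init__(self):
        self.entries = []  # list of (action, path, backup) tuples
        self.backup_dir = Path(tempfile.mkdtemp(prefix="journal_"))

    def record_write(self, path: Path):
        """Snapshot a file before an untrusted process modifies it."""
        backup = None
        if path.exists():
            backup = self.backup_dir / f"{len(self.entries)}.bak"
            shutil.copy2(path, backup)
        self.entries.append(("write", path, backup))

    def rollback(self):
        """Undo every journaled change, newest first, restoring the
        last known good state."""
        for action, path, backup in reversed(self.entries):
            if action == "write":
                if backup is not None:
                    shutil.copy2(backup, path)  # restore prior content
                elif path.exists():
                    path.unlink()  # file was newly created: delete it
        self.entries.clear()
```

The key property, as the post describes, is that only the changes made by the suspicious file are reversed; everything else on the system is untouched.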

Webroot has updated its signature database to reflect the sample set used in this test and has improved its overall sample-sharing process to ensure we are continuously updating that database. However, users can be confident that, regardless of the state of signatures, Webroot's unique approach to protection keeps them protected from any current, new, or unknown threats they may encounter.

Best answer by RetiredTripleHelix 6 December 2012, 23:23

AV-C has a great report: the Whole Product Dynamic "Real World" Protection Test results bar graph for November. Great job, Webroot teams! 😉
 
TH


59 replies

Userlevel 7
Badge +55
I totally agree with you, Cat, and you explained it very well! I also have a hard time helping some users understand Webroot SecureAnywhere's way of handling malware and the strong technology happening behind the scenes with its journaling of unknown files!
 
Thanks,
 
Daniel  😉
Userlevel 7
Having read the same reports, I can honestly say I was a tad disappointed, but not all that surprised. It seems that no matter how often you explain to people how WSA works, they just repeat the same attacks over and over. I commend Joe for taking all the abuse on Wilders head on. No machine I have installed WSA on, in my household or any other for that matter, has ever seen an infection. If WSA were as bad as it has been portrayed, the forums here would be flooded with complaints and the phone lines to tech support would be burning up... but they are not. Is not the truest test of a solution its ability to keep a clean system clean? If by some miracle something makes it onto your system, WSA can revert your system back to its pre-infected state. No other signature-based solution can do this in this way. Most signature-based solutions, in the process of cleaning an infection, can actually do tremendous damage to system files for a great many reasons. This cannot and will not happen when you are protected by WSA. I have the utmost respect for the Webroot team, from the developers all the way down to the support staff and forum volunteers. There is not a single solution on the market like WSA, which lends itself to some poor test results now and again, as the testing batteries do not do a good job of testing the type of product WSA is. I know WSA will continue to improve, as it has one heck of a team behind it, constantly striving to make it better, constantly listening to consumers' input and implementing change when the time calls for it. I do not worry in the least about possibly being infected, and neither should you. Webroot has my back, and yours as well. :D
Userlevel 5
Why does Webroot keep participating in these antiquated tests?
Userlevel 7

@The_Seeker wrote:
Why does Webroot keep participating in these antiquated tests?

I have no choice but to agree with The_Seeker, because I don't see any benefit for Webroot in participating in such tests. On the contrary, it is harmful to Webroot, because without an explanation like the one Cat has given, the wider public thinks that WSA is a poor application, which we know it definitely isn't 😉 So all in all, I see more disadvantages than benefits.
Userlevel 5
The tests themselves aren't antiquated seeing as the other AVs tested aren't using the approach Webroot uses. Webroot either needs to pull out or a separate test needs to be done showing its capabilities utilising its journaling features.
Userlevel 7

@TonyW wrote:
The tests themselves aren't antiquated seeing as the other AVs tested aren't using the approach Webroot uses. Webroot either needs to pull out or a separate test needs to be done showing its capabilities utilising its journaling features.

Yes, I agree. ;)
Userlevel 5

@TonyW wrote:
The tests themselves aren't antiquated seeing as the other AVs tested aren't using the approach Webroot uses. Webroot either needs to pull out or a separate test needs to be done showing its capabilities utilising its journaling features.

Pull out, definitely pull out.
 
Userlevel 5
CEO of Malwarebytes, Marcin Kleczynski, is currently doing an "Ask Me Anything" on reddit. He recently made a statement which I feel is germane to this discussion:
 
Q: What do you think most AV companies are doing wrong these days?
 
A: I think they are focusing on silly av-tests instead of focusing at the threats their customers are actually exposed to.
 
Now, this seems to be true of other AV companies, yet I'd hate to see Webroot fall into this trap of trying to score highly in these useless tests. By all accounts, WSA is doing a stellar job of protecting people's PCs - far more important than misleading percentages.
Userlevel 5
Without doubt, Webroot has a unique approach, and a lot of people seem to be having difficulty understanding it. I don't think it's so much the actual method employed, but the worry about the time frame from being infected to the time the unknown file is recognised and all actions rolled back to before the changes that file made. At first glance, it looks like you're infected for however long it takes for that file to be marked as malicious. However, I suspect other techniques should come into play too, such as behaviour analysis and heuristics. With a combination of all of these, WSA users should be happy in the knowledge they are protected.
Userlevel 7
There's an excellent video posted by Yegor on how Webroot really works:
 
https://community.webroot.com/t5/Webroot-SecureAnywhere-Business/quot-Missing-quot-a-Virus-w-SecureAnywhere/ta-p/10202
 
Listen to the part where she says: "This is where AV testing says Webroot fails the test" (but it didn't).
Userlevel 7
An excellent video. It would be nice if this video were enough to silence the naysayers. I already felt perfectly safe, but the video cements that fact. WSA is truly revolutionary, so much so that some people can't wrap their minds around how it works. This video will help communicate this, and hopefully the trolling will end. I have never been and never will be much of a fan of testing. It almost never reflects real-world usage, nor the home user's environment. I feel sorry for the people that change their AV solutions as often as normal people change their undergarments and base that decision on test scores. I see people literally changing their protection weekly, sometimes more often, on Wilders... quite often based on a single review. Their registry must be a minefield, as even removal tools don't quite get everything. While the trolls constantly change their solution on a whim, I will sit here quite happy and content, laughing at them, quiet, serene, and safe due to WSA protecting my machines.
Userlevel 7
Badge +55
I'm just saying this as a long-time user of Prevx: now I know why they never did any AV testing besides West Coast Labs, which Webroot is part of! 😉
 
"With many thousands of new security threats emerging daily, Consumers, SMBs and Enterprises are continuously attacked by both existing and new threats which are at work 24 hours a day, 7 days a week, 365 days a year. It's a non-stop barrage of complex threats targeting vital consumer and business information.
In assessing the relative merits of security products and services, it's important to gather information on the performance of those technologies to support any kind of buying decision. Hence, product performance data is a critical component in evaluating one product against another.
Given the nature of today's security threats, their attack vectors, and behaviour profiles, West Coast Labs' Real Time Product Performance Testing is a vital information system that reports on the performance capability of security products and services 24x7x365.
This programme comprises a range of continuous, live tests against threats identified and collected in real-time, 24x7, around the world and across multiple attack vectors. The live reports you can access through this website confirm product performance capabilities in real-time.
It is a unique service for Consumers, SMBs and Enterprises alike who seek a high level of independent validation on real-world product performance."
 
http://www.westcoastlabs.org/ and click on Realtime Testing at the top!
 
TH
 

To Admins & Mods: if the picture is not allowed, please remove it or ask me to remove it, as I'm not quite sure whether it's against West Coast Labs' copyright.

 



Userlevel 3
What about this? http://www.av-test.org/no_cache/en/tests/test-reports/?tx_avtestreports_pi1[report_no]=122643
 
Protection rate is pretty high...
 
Any comment?
Userlevel 7
Any test is only as good as the testing environment and the goal of the process.  Somebody pointed out somewhere that a person can be an Olympic-Gold long-distance runner, but if they were only tested in a sprint or dash, they would lose horribly.  Deciding they were a horrible runner based on those tests would be a faulty conclusion.
 
The Curiosity rover on Mars right now just barely avoided a catastrophic error.  If somebody had not discovered the issue before it happened, the rover would have been guaranteed to find "signs of life" (Teflon!) on Mars.  Thankfully this was discovered in advance, so the testing people can take action to overcome the problem.  However, at the same time, we look specifically for carbon-based things with the rover.  If anything else we haven't accommodated for shows up, we'd miss it completely.
 
The best test is experience and reality.  Never forget that.
Userlevel 7
Badge +55
And in this test WSA is showing great results! http://chart.av-comparatives.org/chart2.php
 
Blocked 88.2%
User dependent 11.2%
Compromised 0.6%
 
I wonder if they are using the 2013 version of SecureAnywhere?
 
TH
Userlevel 5
I'd like to think they were using WSA 2013, but as that was only released around 23/24 September, I'm not so sure. The WPDT covers the whole month, doesn't it?
Hi Cat
 
I have some questions:
 
No matter what method Webroot is using, it should be able to identify malware.
In this test Webroot missed more than 20%.
On the other hand, Webroot produced 210 false positives.
 
The best way to stop malware must be to identify it as soon as possible, before it can do any harm.
Whether the method is based on signatures, behavior, the cloud, or something else does not matter.
 
I believe you monitored the files and analyzed their behavior?
 
How did Webroot come up with 210 FPs?
 
Best regards
Steffen Ernst
 
 
Userlevel 7
Hi Steffen, 
 
The results from this specific test do not accurately measure WSA's efficacy because this test is designed to test traditional AV signatures exclusively.  In this test, commonly known as a 'zoo' test, files are intentionally obscured to remove behavior-based characteristics. Additionally, samples are not executed to further prevent behavioral analysis.

AV-Comparatives also performs a real world test, http://chart.av-comparatives.org/chart2.php, where samples of malware are executed.  This test is designed to rate the efficacy of a product in a real world environment, such as being hit with a drive-by exploit.  Webroot scores very highly in this test because behavior data is analyzed.

Webroot has protection within its cloud database to prevent high-volume false positives from occurring. The 210 FPs reported in this test were all files first seen in the 'zoo' testing environment. None of these samples had been previously seen by our users.

If you experience any false positive, please let us know and we'll correct it immediately. Unlike other AV companies, we can correct false positives in mere seconds and have them instantly reflected for all users worldwide.
 
Hope that helps to clear things up!
Userlevel 5
I'd like to think CatB or someone else could answer TH & my query as to whether WSA 2013 was used in this test. We both made these points prior to motzmotz's post.
Userlevel 7

@TonyW wrote:
I'd like to think CatB or someone else could answer TH & my query as to whether WSA 2013 was used in this test. We both made these points prior to motzmotz's post.

As they do not release the time frame of the testing period for a specific product, there is little way of knowing.  Any tests that were performed after 2013 was released would automatically update to 2013.  Any before would not.
 
Zoo tests would not be highly affected by 2013 however, as they still rely on "Detect a file based on it being a file, and detect it immediately or fail".
 
That's an interesting sub-note to the whole "Unique Approach" part of what Webroot does.  As a general rule, if a conventional AV doesn't detect a sample within a week, there is almost no chance it will ever detect it at all unless a user explicitly submits it. 
 
Conventional AV works based on "Bad" and "Not Specifically Bad", so the idea of catching it before it can do any damage is serious business.  With conventional AV, if it's not caught immediately and it runs, it does damage.  That means the test is unforgiving about time frame because it works on the premise that "If it is allowed to load code, it will do damage no matter what."
 
Webroot takes the approach of understanding that nobody is going to catch 100% of everything in the real world.  Even things that hit 100% on the test will miss hundreds of things in reality.  Think about it in this light:  A test covers a month period on one machine.  Something does "Really Well" and gets an "Excellent!" score because it caught 99.8% of everything, the best in the test.  That means one out of five hundred things get by.  That's not too bad until you take that 0.2% across millions of users.  If a million are exposed to things, about two thousand will become infected.  That is Reality compared to the test.
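The arithmetic in the example above, spelled out as a quick sanity check:

```python
# A "best in test" 99.8% detection rate still implies thousands of
# infections at real-world scale. Figures are from the example above.
detection_rate = 0.998
exposed_users = 1_000_000

miss_rate = 1 - detection_rate               # 0.2%, i.e. 1 in 500
expected_infections = exposed_users * miss_rate
print(round(expected_infections))            # → 2000
```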
 
Therefore, we take the concept of "catch it before it does damage" and turn it on its ear.  It's not "Bad" and "Not Bad"; we have "Good", "Bad", and "Unknown", and a threat that we don't catch is Unknown.  This gets special treatment, so suddenly it can't do damage before we catch it.
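The trichotomy described above can be summarized as a simple decision policy. This is only an illustrative sketch – the hash sets, function names, and return values are placeholders invented for this example, not Webroot's real cloud database or API:

```python
# Hypothetical sketch of a Good / Bad / Unknown file policy.
# All identifiers and data here are illustrative placeholders.
from enum import Enum


class Verdict(Enum):
    GOOD = "good"        # known clean: run unrestricted
    BAD = "bad"          # known malicious: block immediately
    UNKNOWN = "unknown"  # never seen before: monitor and journal


KNOWN_GOOD = {"hash_of_clean_file"}      # placeholder whitelist
KNOWN_BAD = {"hash_of_known_malware"}    # placeholder blacklist


def classify(file_hash: str) -> Verdict:
    if file_hash in KNOWN_GOOD:
        return Verdict.GOOD
    if file_hash in KNOWN_BAD:
        return Verdict.BAD
    return Verdict.UNKNOWN


def handle(file_hash: str) -> str:
    verdict = classify(file_hash)
    if verdict is Verdict.BAD:
        return "blocked"
    if verdict is Verdict.UNKNOWN:
        return "monitored"  # journaled so changes can be rolled back
    return "allowed"
```

The point of the third state is exactly what the post argues: an Unknown file is neither trusted nor simply missed; it runs under restrictions while its behavior is recorded.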
 
The idea behind AV is not "Catch everything no matter what!", it's "Prevent threats from doing bad things to your computer".  The tests have to treat "It ran" as "It did bad things to your computer" because that's the easiest and most efficient assumption to make when testing conventional AV.  Thus the tests will fail Webroot if something runs and is caught twenty seconds into running.  This does not take into account that the threat that ran was neutered and incapable of doing damage or "bad things to your computer" and anything it tried to do was undone when it got caught.
 
So when we say the tests are not able to handle our unique approach, it is completely true.  The tests look for the solid line.  If the threat goes past that line, the AV fails.  Webroot saw everything getting past that line in reality, despite how good the tests seem to be, and so we created a safe zone beyond that line.  That creates much more time and efficiency in solving the problem of the threat and creates substantially more safety in the system, but in a way the tests cannot see currently, as they only look at the line, not the safe zone.  Our safe zone is not time-limited.  If something is detected minutes or even hours or more later, it can't have accessed privileged information and any minor things it managed to do prior to detection are still fully undone.
 
It's like a bank (computer) with a security officer (AV) at the door.  The test says "Look, the criminal got past the officer and is inside the bank.  The officer failed to detect that criminal."  Webroot puts extra security inside the bank itself, though, so even if (WHEN, always, with everything) the criminal gets past the security officer, he is handcuffed without knowing it and shown a virtual fake bank that he can't steal any money from or hurt anybody inside.  If he behaves and ends up being a good customer, he will be able to do normal things and eventually be known as good and unrestricted.  If he drops a few things to support his criminal work and then tries to rob the bank, not only does the robbery not work, but the tools he dropped to support it get cleaned up too, so he can't break in later.
Userlevel 5
I ask again: Why does Webroot keep participating in these antiquated tests? It concerns me because I fear that Webroot may negatively alter its product to score highly.
 
Edit: Clarification.
Userlevel 7

@The_Seeker wrote:
I ask again: Why does Webroot keep participating in these antiquated tests? It concerns me because I fear that Webroot may negatively alter its product to score highly.
 
Edit: Clarification.

Change and improvement do not come from hiding one's head in the sand.
 
It's kind of a situation where we're hurt either way we do it.  If we avoid the tests, people will wonder what we're trying to hide.  If we take part, people wonder about the low scores.  But the tests can only be changed and brought up to date by showing the testers where things are going wrong and pointing out that their tests are no longer accurately measuring against reality.  If we are not in the tests, there is no way to show that the tests don't match reality, and thus no way to get the tests changed to more accurately reflect the state of the real world.
 
Edit: While we do want to ace the tests, we consider the end user's security to be the driving force behind our work and we will take zero chances in implementing things that would "help" in the tests at the cost of the security of the end user or the performance of the product. 
Userlevel 5
The point is WSA's approach to malware prevention is radically different to its competitors, especially when dealing with unknown threats. That much we do know. The other vendors typically employ classic techniques and it is this that the likes of AV-C are testing against.
 
I can't see them changing testing methodology just because it doesn't fit your protection model. The one possibility that could work is a standalone WSA product test where the journaling is taken into account, and the missed samples re-run.

You might as well take this a step further and say other anti-malware products should alter their techniques to be in line with yours so that testing can change accordingly.
Userlevel 7
The good news is that several test facilities are already looking at the flaws in their testing model because of our methodology.  They realize that they cut off the test at a certain point and call it a failure, but that follows the letter of the test as written and not the actual premise and cause behind the test.  So they are looking at how to most accurately test the extended capabilities of systems like ours and a few others without compromising the testing they already do.
 
Any testing organization worth its salt knows that accountability, transparency, and accuracy for the test is critical to people continuing to care about their tests and their tests giving realistic information.  They get mud on their face and a sullied reputation when their testing doesn't show an accurate snapshot compared to reality. 
 
I mean, really, if you ran a test that came to the conclusion based on the test that the sky is plaid 86% of the time in New York, nobody will take that test seriously if they can simply look up and see blue, black, or grey depending on the weather and time of day. ;)  So if the test is vastly different from reality, you change the test to get accurate results compared to reality.
 
Remember:  The AV tests are not there to -change- how the AVs work.  They are there to do a focused sampling of pseudo-reality against the AV product to try to get an overview of the whole world based on the relatively tiny sample that is tested.  They don't want to define reality, they want to reflect it.  So if their test is not reflecting reality even vaguely in some cases, they want to fix that.
Userlevel 5

@Kit wrote: 
Edit: While we do want to ace the tests, we consider the end user's security to be the driving force behind our work and we will take zero chances in implementing things that would "help" in the tests at the cost of the security of the end user or the performance of the product. 

 
That's what I wanted to hear. Cheers 😃
