Webroot Weekly Highlights - 12/7/2018

  • 7 December 2018
  • 1 reply

This is a weekly highlight of the best articles and news going on in the Community.
See any stories that catch your interest? What would you like to see in the future? Let us know in the comments below!

Microsoft Calls For Facial Recognition Tech Regulation
Microsoft and the AI Now Institute are both calling for regulation as facial recognition software gains popularity. As the technology continues to appear in public use cases, Microsoft on Thursday called for its regulation, citing heightened concerns around privacy and consent.
Over the past year, facial recognition technology has started to pop up in various government-related applications across the country – from police departments to airports. Most recently, this week the Department of Homeland Security unveiled a facial recognition pilot program for surveilling public areas surrounding the White House. However, Microsoft president Brad Smith said in a Thursday post that the race for developing facial recognition software in the tech space is forcing companies to “choose between social responsibility and market success.”
See the full article here.
22 apps with 2 million+ Google Play downloads had a malicious backdoor
Almost two dozen apps with more than 2 million downloads have been removed from the Google Play market after researchers found they contained a device-draining backdoor that allowed them to surreptitiously download files from an attacker-controlled server.
The 22 rogue titles included Sparkle Flashlight, a flashlight app that had been downloaded more than 1 million times since it entered Google Play sometime in 2016 or 2017, antivirus provider Sophos said in a blog post published Thursday. Beginning around March of this year, Sparkle Flashlight and two other apps were updated to add the secret downloader. The remaining 19 apps became available after June and contained the downloader from the start.
See the full article here.
The Weaponization of PUAs
Back in the '90s, the Internet was not too *wild* in terms of malicious software, hacking attacks, etc. Viruses and some worms had started to emerge, and while some were really dangerous, the great majority were not. Categories like joke programs and other software with non-destructive payloads were part of that era. Many of their developers were not really malware writers, but skilled programmers who wanted to have a good time scaring people or simply playing a prank by writing small pieces of software that did things like opening the CD-ROM tray or displaying characters walking across the screen. Unfortunately, one of these innocent categories turned into something more serious: the well-known PUAs. In this FortiGuard Labs article, we define what a PUA is, describe its inherent risks, and show how malware makes use of such tools by examining a malware sample.
What are PUAs?
PUA is the acronym for “Potentially Unwanted Application.” This is a general category used by all vendors to tag particular applications that can be misused by malicious people. In that sense, these tools are not really malicious and the program itself does not necessarily represent a risk. It is the usage of such tools and the related outcomes that are the real problem.
See the full article here.
Product Update: Mac Agent Version
  • Implemented
    • Whitelisting can be applied to selected folders
    • Detection applies to Gzip archives
    • Support for macOS 10.14
    • Efficacy enhancements
  • Fixed
    • Misc. bugs
Is Malware Heading Towards a WarGames-style AI vs AI Scenario?
Adam Kujawa, Director of Malwarebytes Labs, has been contemplating the evolution of malware attack and defense, attempting to work out strategies to stay ahead of cybercriminals in what has always been a technological game of leapfrog.
While malware has continued its trajectory of increasing stealth and persistence, defenders currently have the edge with their introduction of artificial intelligence (AI) and machine learning (ML). For now, these systems don't do anything more than can be done by human analysts, but they do it at machine speed, don't miss any red flags, and return predictions on maliciousness faster than could be achieved by a human team.
See the full article here.

  • 1 reply

I remember, back in 2000, when I bought my first computer, it did not take long until a hacker got into my email. This person was changing the language in my emails and writing sick or weird things, which caused me to lose a friend. It wasn't until AOL investigated the whole scenario that I was exonerated.