NOVEMBER 17, 2019

Legacy AV: Why Keep Crawling?

As originally published on Mathew Fulmer's LinkedIn profile on November 14th, 2019.


There are many different solutions on the market, all of which advertise the utmost in endpoint defense, but what's the difference between them and next-generation AV solutions?

First, we should define what counts as "next-gen" versus what counts as "legacy," as that distinction will factor heavily into our discussion.

"Legacy" antivirus is anything relying on content created by a human and delivered to the endpoint via an update with a scheduled frequency (commonly this would be daily but some vendors actually do updates substantially more frequently). Legacy antivirus also comes with inherent limitations based on those required updates.

Daily updates don't sound bad, do they? What if I told you those daily updates have most likely been in development for the better part of a week? A human had to write the instructions for the program to "detect" a given piece of malicious content before it could be added to the content file. So what about the malicious content the detection engineer doesn't know about? Surely they cannot know about everything malicious, so what happens when the content file doesn't cover everything? This is where heuristics come into play: they were designed to "catch the unknowns" and ensure you were protected from future malicious releases or campaigns. For that content to be created, you would have a team of people downloading any and all samples they could find, be it from VirusTotal or the dark web, all to reverse engineer the samples and "build detection" to keep people safe.

Here's a little flaw in the whole "daily content update" that nobody likes to share: not everything gets put into the content files! I repeat, on top of it taking upwards of a week to add detection in the first place, not everything gets added, purely to "save space" in the overall content file!
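To make the mechanism concrete, here is a minimal sketch of what hash-based signature matching boils down to. This is a toy illustration, not any vendor's actual engine; the signature set and `scan` function are invented for the example:

```python
import hashlib

# Hypothetical, human-curated "content file": a set of known-bad hashes.
# Real vendor DAT files are far richer, but the core lookup works like this.
KNOWN_BAD_SHA256 = {
    # The widely published hash of the harmless EICAR test file:
    "275a021bbfb6489e54d471899f7db9d1663fc695ec2fe2a2c4538aabf651fd0f",
}

def scan(path: str) -> bool:
    """Return True if the file's SHA-256 appears in the signature set."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest in KNOWN_BAD_SHA256
```

Anything whose hash a human has not yet added to the set scans clean, which is exactly the gap each daily update is racing to close.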

Just to give you an example of the sizes we are talking about, and why not every piece of content can be added: the average AV installer is ~180MB compressed, and once installed you are normally looking at double that, depending on features. That's not counting the data coming down for the DAT/content files, which average ~80-180MB depending on the vendor (one major endpoint solution provider ships a program that is 1GB just to download the installer). The process with "most" companies goes like this: you download the installer package to the endpoint; post-installation, the package checks in with the company repository and/or management console for updates; any patches are brought down to update the program itself, followed by the content files for detection.

So what does that mean in your environment? First, until the content comes down you are sitting there with a "base detection file" containing the bare minimum of detection. Second, you are at the mercy of internal bandwidth, or in the case of remote users their own bandwidth, to pull down the content updates and become protected. Now factor in how many machines are in your environment (the average enterprise is 50,000 machines?) and how much bandwidth it takes for all of those machines to pull that content down should they accidentally all get deployed around the same time. It's not a pretty sight, but neither is the success rate of legacy AV against current threat trends.
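To put rough numbers on that deployment scenario, here is the back-of-the-envelope math using the figures above. The machine count and the 150MB midpoint are assumptions from this article, not measurements:

```python
# Rough aggregate-transfer estimate for a simultaneous content rollout.
endpoints = 50_000            # the article's "average enterprise" machine count
content_mb = 150              # midpoint of the ~80-180MB content-file range
total_tb = endpoints * content_mb / 1_000_000
print(f"~{total_tb:.1f} TB")  # ~7.5 TB if every endpoint pulls a full update at once
```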

In cases like the above, you end up with malicious content (such as ransomware) getting through and impacting your environment and clients in a very negative way. Not only does this cost you monetarily, it can also cost you a damaged reputation in the cybersecurity industry, a blemish that can be quite hard to overcome. A recent example would be the 22 municipalities hit by ransomware in Texas; the monetary damage put the groups in charge of managing their security in a terrible position and tarnished their reputations greatly. Last I checked there were ~12,000 unique variants of WannaCry alone, and from experience, all it took to get around most legacy AV solutions was changing one tiny bit of data, just enough to change the file's SHA/MD5 hash value so that the signatures (DAT/content) would no longer recognize it as malicious.

As an example of how severe the ransom demands can be, look no further than New Bedford, Massachusetts, which was hit with Ryuk, an evolution of ransomware containing elements of SamSam, WannaCry, and other major variants; the attackers demanded $5.3 million to restore access to the city's files. Atlanta's city government spent $17 million recovering from a ransomware attack rather than paying the $52,000 ransom request. Baltimore was hit with ransomware demanding 13 Bitcoin (~$76,000 at the time) to regain access; the FBI advised the city not to pay, and the overall recovery cost quickly approached $18 million.
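That hash-evasion trick is trivial to demonstrate. In this sketch (the payload bytes are a placeholder, not a real sample), appending a single byte produces an entirely different digest:

```python
import hashlib

original = b"...placeholder payload bytes..."  # stand-in, not a real sample
modified = original + b"\x00"                  # change just one byte of the file

print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(modified).hexdigest())    # a completely different digest
# Any signature keyed to the original hash no longer matches the new variant.
```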

The above examples were not purely endpoint-security failures, although with the proper endpoint solution the threats would have been fully mitigated. Would a legacy AV have stopped them? Doubtful; the variants that hit those cities were part of a new wave of targeted malware campaigns, some of them fileless variants that legacy AV solutions cannot protect against. If a legacy AV cannot protect against this, then what makes a next-gen solution better? It cannot be that different, right? Yes, it can, thanks to the advent of programs using AI and deep learning to intelligently learn about threats and prevent them before a human even gets involved in the "training" process. Today's frame of mind shouldn't be "if" you get hit by a ransom campaign, but rather "when". Many CISOs today are finding that they are woefully understaffed and vastly outgunned on the endpoint front compared to the tricks the new wave of malware authors employ. So who counts as "next-gen" in the world of cybersecurity and endpoint security? Most vendors consider themselves "next-gen" if they have an offering available with some AI in it, mostly machine learning. What isn't discussed is the prevalence of false positives, which causes detection levels to be lowered to reduce the noise and workload of the SOC teams chasing down issues that turn out to be innocuous; that happens to be the biggest caveat of machine learning AI. So what really makes a next-gen solution?

First off, for a "next-gen" solution to truly be next-gen, it should leverage more than base-level machine learning (which still involves a great deal of human involvement). That human element can make or break the success of the product, and while it sounds highly advanced, there are better options that limit the need for human interaction. Artificial intelligence, and the understanding and application of deep learning to cybersecurity, have ushered in a completely new way of preventing malicious attacks, something absolutely needed considering the evolving threat landscape. For example, a large amount of fileless malware utilizes PowerShell without conducting a write action on the local machine (a write action is what a legacy solution needs to trigger a scan of a file), whereas AI/deep learning can leverage numerous other aspects to analyze such fileless attacks and prevent them from ever running in the environment. Machine learning, when applied correctly, is the first iteration of AI in the endpoint security field. Here is the most commonly accepted description of machine learning: a subset of artificial intelligence involved with the creation of algorithms that can modify themselves without human intervention to produce a desired output, by feeding on structured data. I liken this to a child learning to crawl. It's the first sign of independent movement (or in this case, autonomous detection/prevention) in the endpoint security world, but as I mentioned, it must be applied correctly. There is still a high level of risk due to the human involvement in "programming" the algorithm initially, as well as in choosing the right data sets to feed it for proper "learning". What I mean by "programming" is that the data must be structured for the algorithm to put it to use, which requires upfront labeling to help the machine make the necessary connections. Sounds advanced and helpful, right? But wait, there's more!
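To see what that labeling dependency looks like in practice, here is a toy sketch of the machine-learning approach. The features, values, and labels are all invented for illustration; real products use far larger and richer feature sets:

```python
from sklearn.ensemble import RandomForestClassifier

# Hand-engineered feature rows with human-supplied labels (all values invented):
# [file_size_kb, import_count, entropy_x100, is_packed]
X = [
    [120, 45, 620, 0],   # an analyst labeled this sample benign
    [300, 12, 790, 1],   # an analyst labeled this sample malicious
    [ 85, 60, 540, 0],   # benign
    [410,  8, 810, 1],   # malicious
]
y = [0, 1, 0, 1]         # the upfront labeling described above

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([[250, 10, 800, 1]]))  # flags inputs resembling the malicious rows
```

The model is only as good as the structured features and labels humans hand it, which is exactly the risk called out above.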

If we consider machine learning AI the crawling phase, then with deep learning AI we have entered the walking phase. How different could deep learning be from machine learning? Quite a bit. Here's the simplest way of describing deep learning: a subset of machine learning where algorithms are created and function much as they do in machine learning, but are stacked in numerous layers, each providing a different interpretation of the data it feeds on. Such a network of algorithms is called an artificial neural network, so named because its functioning is inspired by, you might even say an attempt at imitating, the neural networks present in the human brain. This means that while machine learning AI requires a strictly structured set of data for the pathways to be connected and detection/prevention to occur, deep learning AI behaves closer to how a human brain learns, taking the raw data and experiences it sees and building an understanding of the relationships between them and the underlying results. But why walk when you can run? Deep Instinct is the pioneer in applying deep learning to cybersecurity, and while others are crawling with their machine learning, we are already lapping the competition. How good is Deep Instinct?
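For contrast with the hand-labeled features above, here is a minimal sketch of that layered idea, written with PyTorch. The 1024-value input window and layer sizes are toy assumptions invented for illustration, and the model is untrained:

```python
import torch
import torch.nn as nn

# A minimal stack of layers over raw input values (toy sizes, untrained):
# each layer re-interprets the output of the layer below it, which is the
# "numerous layers" idea, with no hand-built feature set required.
model = nn.Sequential(
    nn.Linear(1024, 256), nn.ReLU(),   # raw bytes -> low-level patterns
    nn.Linear(256, 64),   nn.ReLU(),   # patterns -> higher-level structure
    nn.Linear(64, 2),                  # structure -> benign/malicious scores
)

raw_bytes = torch.rand(1, 1024)        # stand-in for the first 1024 bytes of a file
print(model(raw_bytes))                # untrained logits; training tunes every layer
```

The point of the stacked layers is that the representation is learned from the raw input itself rather than from features a human engineered and labeled up front.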

Maybe it's time for you to become one of "The Learned Few"? Reach out to learn more about how successful CISOs are using deep learning to prevent attacks from entering their enterprise every single day. At Deep Instinct, we prevent tomorrow's threats today!