Blog Post
March 23, 2023 | From the experts
The Brass Tacks of AI and Cybersecurity
By Matt Holland
With contributions from Alexander Yakub, Chris Champion, Colin Belcourt, Earl Fischl, Erik Egsgard, Jane Harwood, Jonathan Machnee, Katie Yahnke, Nicolai Moles-Benfell, Patrick Smith, Sean Alexander, Shea Cole, Shri Kalyanasundaram.
Last updated: March 13, 2024
For the foreseeable future, AI will have little to no impact on malware detectability.
Being a dinosaur in the cybersecurity industry is both powerful and incredibly maddening. Practical experience is almost everything when it comes to being effective in this industry. Much more important than academia. By far. It's why people who have worked in active cybersecurity programs (be they offensive or defensive), as part of large intelligence agencies or militaries, find so much success when transitioning to the commercial space. It's why the most successful cybersecurity companies are led by people with decades of practical cybersecurity experience. And it's how Field Effect has built the most holistic MDR platform and service stack on the planet with fewer than 100 developers.
Unfortunately, being a cyber-dino makes paying attention to the largely terrible cybersecurity industry frustrating at best. Cursing like a crazy person at one's desk is a frequent occurrence. Grandiose claims about "next generation" solutions, or how "AI detects everything", or the constant rediscovery of "game-changing" malware techniques from decades ago, do nothing to help the industry.
For years now, I've rolled my eyes and ranted at these things within the comfort of my colleagues' very patient "cone of silence". But recently, an article about Large Language Models (LLMs) and malware creation popped up in my LinkedIn feed about three times too many, and it made the artery in my forehead (I've named him Arthur) throb red with more frustration than normal.
The article, entitled AI-Powered 'BlackMamba' Keylogging Attack Evades Modern EDR Security, summarizes a research paper entitled BlackMamba: AI-Synthesized, Polymorphic Keylogger with on-the-fly Program Modification. It discusses LLM-generated malware by the name of (you guessed it) BlackMamba. The article's headline alone triggered Arthur to appear:
“Researchers warn that polymorphic malware created with ChatGPT and other LLMs will force a reinvention of security automation.”
Before I start ranting, let me be clear that:
- I am not attacking the author or questioning the author's integrity or intentions. I assume they are curious and interested in cybersecurity, and are just reporting research as provided; and
- I am not attacking the BlackMamba research team or questioning their integrity. I'm sure they have positive intentions to raise awareness of the potential impacts of LLMs (or other forms of AI) in the cybersecurity industry. In fact, their research and BlackMamba itself are actually quite cool.
However, the article borders on being factually incorrect. And the assumptions and conclusions of the BlackMamba research team are not reflective of how malware or EDRs really work, or how LLMs (or AI in general) will or will not change the cybersecurity industry. The unfortunate consequence is that anybody who reads that article, or the research paper, may be misled about new risks to their personal and professional lives, businesses they own or where they work, and their overall security. This is damaging to society and the cybersecurity industry. I am quite concerned that inaccurate reporting and conclusions regarding AI and cybersecurity will make the industry even more confusing than it already is, resulting in more fearmongering that disproportionately favors cybersecurity vendors rather than the customer.
Hopefully you are still reading. Still there?
So why should you trust my rambling opinions? I will keep it short. I started my career with CSE (a Canadian intelligence agency) and spent seven years working with the world's best as part of the Five Eyes intelligence community. People don't often acknowledge that the Five Eyes intelligence community is at least five years ahead of the public cybersecurity industry, but consider the impacts of the Edward Snowden, SHADOWBROKER, and Vault7 leaks. So, when I say that I worked with the world's best, it's not an exaggeration. In 2007 I co-founded and bootstrapped an intelligence tradecraft company called Linchpin Labs (Canada and US). Linchpin grew to become the commercial world leader at what it did (if you know, you know) and was sold in 2018 as part of a bundled 200M+ USD deal alongside our partner company Azimuth. Prior to selling Linchpin, I co-founded and bootstrapped Field Effect - a cybersecurity company which has built the most effective MDR offering on the planet (Covalence). Field Effect is going to positively change both the cybersecurity industry and the world over the next decade - it's exciting times, strap in. Most importantly, for the past 24 years, I have had the honor and opportunity to work with the world's best offensive and defensive cybersecurity professionals, and they have taught me a tremendous amount along the way.
In summary, I don't know of anybody else who has a similar background. The perspectives I've gained over the course of my career are unique - in particular, insight into what is true and real in the world of cybersecurity, and what is hype and commercial jargon, down to the lines of code.
I should also point out that I am not anywhere close to being an AI expert. I work with much smarter and more educated people than me in that realm. In fact, up until about a year ago I was a skeptic about the business value-add of AI, but I've been won over by seeing the amazing things our data scientists and analytics teams have done at Field Effect over the past few years. They have managed to utilize AI as a true business-additive tool in Covalence, and not as a misleading brochure "AI detects everything" line item. This thing called ChatGPT is also quite impressive.
What I do have, however, is an extremely deep understanding of cybersecurity (especially endpoint technology), what it takes to write almost undetectable malware (undetectable is not actually a thing), and how to detect the most basic or advanced malware. Since 2007, I have written and delivered specialized courses on these topics to 1000+ Five Eyes intelligence agency employees.
Evaluating if AI will have an impact on cybersecurity is more about understanding the problems and challenges of cybersecurity than it is about understanding what various incarnations of AI can do. If there is overlap between cybersecurity challenges and AI capabilities, then there is opportunity and likelihood of meaningful change. If you are an AI expert and consider this off-base, or perhaps confident that AI can solve most malware or cybersecurity problems, I say to you, perhaps you are prescribing a solution without fully understanding the problem. If one does not understand the problem and its associated challenges and limitations, how can one propose solutions? Most cybersecurity research around AI that I have read attempts to tailor the problem(s) to be solved with AI. The times when I have seen AI add value are when the problem is first understood, and then the strengths of AI yield a potential solution.
The bottom line is, I’m not going to blow smoke or exaggerate. Also, this blog has been peer reviewed multiple times over (check the byline) so I’m confident that I’m not entirely off base.
The Anatomy of Malware
Given that this post (the first of three) is intended to focus on offensive cybersecurity, let's dive into what malware is. Regardless of the type (virus, rootkit, spyware, etc.), or who created it (intelligence agencies, ransomware actors, etc.), malware is a series of techniques or technical approaches, glued together to achieve a particular nefarious goal. For example, ransomware combines techniques of writing data to files with encryption and remote command and control. Another example is spyware that combines techniques of persistence (runs on reboot), reading files, keylogging, and screen capturing (to name a few), with remote command and control to steal data.
These simplified examples are intended to highlight that which makes malware "mal". It is also those techniques that create distinguishable and detectable characteristics of malware. How a technique is implemented, and all of the surrounding code, is just noise that can be largely ignored. That also means that, if it is being changed as part of a polymorphic strategy, it doesn't really matter from a detectability standpoint. I'm glossing over some EDR vendors' love of using code byte patterns as a detection mechanism, but those can yield questionable or unexpected results. For example, this past February, a competing EDR falsely identified the Covalence endpoint agent as a match for the "Trojan:Win64/CawkCawk" malware variant due to a poorly crafted byte pattern matching signature. Mistakes happen. There are a finite number of malware techniques, and I dare say that the vast majority are known versus what is yet to come. To be clear, I'm talking about malware techniques, not malware types, families, or instances. I'm also not talking about exploits or exploit chains utilized to install malware. Those will be around for quite some time yet.
Additionally, from this point on I’m going to use the term EDR to describe any type of defensive endpoint agent. There are so many classes/acronyms for endpoint agent technology now that it will only clutter things if I constantly differentiate. Don’t get me started on NGAV.
Let's take a look at keylogging techniques, the example used in BlackMamba. MITRE has a list of keylogging malware, although it seems to be lacking a list of actual techniques (e.g. Windows message hook, filter driver, etc.). There are publicly known techniques (often documented APIs), other techniques that I am aware of that are not listed here, and others that I am probably unaware of. But my point is that the list of actual keylogging techniques is largely finite, and once a technique is known and a detection is built, no amount of permutation of that technique (be it really cool host-side polymorphism or manual re-codes) has any effect on detecting its usage. There are tell-tale signs in an operating system that a technique is being attempted, or actively used. Moreover, the detection targets the technique itself, not a static signature.
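To make that concrete, here is a minimal, purely illustrative sketch (Windows, Python via ctypes, and my own example rather than anything from the BlackMamba paper). However a Windows message-hook keylogger is rewritten or regenerated, it still has to install a low-level keyboard hook via SetWindowsHookExW, and that installation - the same call legitimate hotkey and accessibility software makes - is exactly the kind of tell-tale sign a behavior-based detection keys on. The sketch installs a pass-through hook and immediately removes it; no message loop runs and nothing is captured.

```python
import ctypes
from ctypes import wintypes

user32 = ctypes.windll.user32
kernel32 = ctypes.windll.kernel32

WH_KEYBOARD_LL = 13  # low-level keyboard hook type

# LowLevelKeyboardProc signature (LRESULT is pointer-sized).
HOOKPROC = ctypes.WINFUNCTYPE(ctypes.c_ssize_t, ctypes.c_int,
                              wintypes.WPARAM, wintypes.LPARAM)

user32.SetWindowsHookExW.restype = wintypes.HHOOK
user32.CallNextHookEx.restype = ctypes.c_ssize_t
user32.CallNextHookEx.argtypes = (wintypes.HHOOK, ctypes.c_int,
                                  wintypes.WPARAM, wintypes.LPARAM)
kernel32.GetModuleHandleW.restype = wintypes.HMODULE

def _pass_through(n_code, w_param, l_param):
    # A keylogger would record the keystroke here; this just passes it on,
    # the same way a legitimate hotkey or accessibility tool would.
    return user32.CallNextHookEx(None, n_code, w_param, l_param)

callback = HOOKPROC(_pass_through)

# This call is the "wing": however the surrounding code is renamed, reordered,
# or regenerated, installing the hook is unavoidable and observable to the OS
# (and therefore to an EDR). No message loop runs here, so nothing is captured.
hook = user32.SetWindowsHookExW(WH_KEYBOARD_LL, callback,
                                kernel32.GetModuleHandleW(None), 0)
print("hook installed:", bool(hook))
if hook:
    user32.UnhookWindowsHookEx(hook)
```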
To draw a comparison, pretend that an airplane is malware. The wings constitute the malware techniques, the parts that make it actual malware (i.e. the bits that do the bad stuff). The wings are the consistently detectable aspects of the malware. If we see wings, there be dragons.
Now if somebody comes along, repaints the plane, expands the size of the wings, adds new landing gear, adds a bunch of wheels - none of those things matter, because it’s still a plane and anybody looking at it will conclude as much when they see the wings. The plane may get sold to a different owner and repainted, but that doesn’t change the fact that it’s a plane. Somebody might mount a dorsal fin and a kung-fu grip, and while it might look pretty odd, it’s still a plane.
That, in a nutshell, is the effect of polymorphism on malware. Malware techniques are either detectable at runtime or make recognizable changes to a process (or processes) or the operating system. No matter how much surrounding code is modified, those techniques still stand out. While polymorphism does change the binary hashes of what constitutes the malware, and can evade static signature detection, it doesn't affect its baseline heuristic detectability. I'm ignoring timing and technique variance because those don't really matter.
Additionally, polymorphic malware is not new. Conficker malware used polymorphic agents back in 2009, and Finfisher malware did in 2018. Then there are Windows-specific approaches utilizing C# (ConfuserEx and neoConfuserEx) and PowerShell (invoke-obfuscation) which, while the implementation is typically server-side, accomplish functionally the same thing as BlackMamba. My first direct exposure to binary polymorphism was a proprietary system built by Nicolai Moles-Benfell, a colleague of mine based out of New Zealand, in 2009. His work was incredible and quite advanced at the time, and included testing with a massive provisioning and automation system for correctness verification (which ironically now powers the Field Effect Cyber Range). We learned a lot about the strengths and limitations of polymorphism via first-hand experience and years of lab time. In particular, the quality of the polymorphism does not make a difference when trying to hide malware technique usage, regardless of whether it is AI-powered or not; the active execution of malware and malware techniques is still the weak point, and you can't hide those if you want your malware to actually do something. As the researchers pointed out, polymorphism can be used to vary the timing of technique utilization, in particular from user mode. But if an EDR is fooled by timing attacks such as this, then the EDR vendor should probably rethink its approach.
Challenges with Detecting Malware
Now you may be thinking, "Hang on there, fella. If it were that easy, why doesn't every EDR in the world work perfectly and detect all malware?" Well, unfortunately, writing a highly effective EDR is very hard. There are some challenges that make detecting and confidently responding to techniques more difficult than one would think. But none of these challenges are related to polymorphism:
- Legitimate software sometimes utilizes known malware techniques. This is probably the most irritating aspect of writing an EDR. “Why on earth is that software doing that, what are they thinking!?” becomes a common phrase in the lab. EDR teams spend a lot of time and energy dealing with software that does dumb things - there is no easier way to put it. For example, some third-party software ships with vulnerable kernel drivers which unintentionally leave their customers vulnerable to attack. Some blame Microsoft for allowing this, but it’s the third-party software vendors that create this situation. It is then up to EDRs to protect hosts in these unnecessary circumstances.
- Technique detection can be more resource-intensive and asynchronous, so the experience, nuance, and OS internals knowledge of the team building the EDR plays a big part. This trifecta of attributes is rare, although we have lots of these people at Field Effect.
- Some malware relies heavily on living-off-the-land techniques, where binaries native to the operating system (such as PowerShell on Windows) are used maliciously. This muddies the waters, requiring deeper inspection or correlation with full execution history or adjacent/related events in the operating system (see the toy sketch after this list).
- Modern EDRs are a balancing act between false positive reduction, performance management, and data reduction. Contrary to what many think, the less data an EDR needs to send back to a central location for analysis, the better - at least from a performance perspective.
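On the living-off-the-land point above, here's a toy sketch of the kind of correlation involved. The indicator weights and the parent-process allowlist are invented for illustration: a PowerShell launch is unremarkable on its own, but an encoded command spawned by an unexpected parent is worth an analyst's attention.

```python
# Invented allowlist: parents from which an interactive PowerShell launch is expected.
TRUSTED_POWERSHELL_PARENTS = {"explorer.exe", "cmd.exe", "windowsterminal.exe"}

def score_powershell_launch(parent_process: str, command_line: str) -> int:
    """Toy scoring: a higher score means more worth an analyst's attention."""
    cmd = command_line.lower()
    score = 0
    if "-encodedcommand" in cmd or " -enc " in cmd:
        score += 2  # encoded payloads hide intent from casual inspection
    if parent_process.lower() not in TRUSTED_POWERSHELL_PARENTS:
        score += 2  # e.g. spawned by an Office process or an unknown binary
    if "downloadstring" in cmd or "invoke-webrequest" in cmd:
        score += 1  # fetching remote content to execute
    return score

# A document spawning encoded PowerShell scores high; a user at a prompt does not.
print(score_powershell_launch("winword.exe", "powershell -enc SQBFAFgAIAAo"))  # 4
print(score_powershell_launch("cmd.exe", "powershell Get-Process"))            # 0
```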
Considering the above, this is why static signature scanning on disk and in memory (e.g. hashes, byte pattern matching, or Yara rules) is still quite valuable. Hashes are extremely definitive but inflexible. They add value when malware has been absolutely identified, becoming a faster identification mechanism. Yara rules are a bit fuzzier. A bad Yara rule can cause a world of false positives, but a really good Yara rule can confidently identify malware based on code or technique patterns, where hashes amongst matches may not be the same. Yara rules can also be used in threat hunting by targeting the artifacts of offensive techniques, and to identify shared code in families of malware.
This is why utilizing static signature scanning (e.g. hashes or Yara rules) alongside technique or behavior detection (i.e. heuristic detection) still provides value. Dunking on these approaches with the argument of polymorphism is a bit cheap. I would point out that technique combinations, execution order, etc., don't matter - or at least shouldn't if the EDR team has an ounce of offensive experience. Not every EDR team is equal in that background, mind you. And not every EDR is equal.
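For readers who haven't worked with Yara rules, here's a minimal, hypothetical sketch using the yara-python package. The rule name and strings are invented for illustration; the point is that a single rule can match technique artifacts shared across a family of samples whose hashes are all different.

```python
import yara  # pip install yara-python

# Hypothetical rule: match on technique artifacts (imported API names and a
# shared string) rather than a hash, so renamed or rebuilt variants still match.
RULE = r'''
rule example_keylogger_family
{
    strings:
        $api1 = "SetWindowsHookExW" ascii wide
        $api2 = "GetAsyncKeyState" ascii wide
        $log  = "keystrokes.log" ascii wide nocase
    condition:
        ($api1 or $api2) and $log
}
'''

rules = yara.compile(source=RULE)

def scan(path: str) -> list[str]:
    """Return the names of the rules that match the file at `path`."""
    return [m.rule for m in rules.match(path)]

if __name__ == "__main__":
    import sys
    for target in sys.argv[1:]:
        print(target, "->", scan(target) or "no match")
```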
Let T-800 Take the Wheel?
This is where my offensive cybersecurity background is going to get a bit chippy. Exploitation, malware campaigns, and hacking in general are not like the movies. The majority of the work happens in the lab and doesn't go live until the confidence of success is very high. It can be quite mundane and it's far from glamorous.
I have read multiple articles and opinions about an impending rise in AI-driven cyberattacks that cause me to think, “This person has clearly never built or run a malware campaign.” Almost always, only a small slice of what constitutes running a malware campaign is ever discussed, let alone augmented with AI approaches. You can’t take 5% of the entire process, throw in some debatable AI value-add, and claim that it is now AI-driven. At best, it becomes an AI-assisted system, but I would argue there is a big gap between AI-driven (i.e. Skynet) and AI-assisted (i.e. better results that allow a malware campaign operator to take a lunch break).
The other aspect to consider is the world of vulnerabilities, productized exploits, and exploit chains that are built and deployed to gain access to a remote device. Finding vulnerabilities and building reliable exploit chains (99%+ reliable) is incredibly difficult, which is why they can sell for millions depending on the attack vector, targeted third-party software, and operating system. Exploits are found by various means (reverse engineering, fuzzing, pure luck, sacrificing a chicken, etc.), but the nature of their operational deployment is typically a balance of quality and replaceability for an attacker. Letting AI figure out what to do with high-value exploit chains would give most exploit researchers a series of ulcers in the shape of the Aquarius constellation. I had the pleasure of working closely with Mark Dowd (the godfather of public vulnerability research) for almost a decade, and not once did I hear him say, "Hey, you know what we need here? Some AI to run the entire thing…"
There are two types of attacks where I could see generative AI or LLMs playing a role: phishing attacks, and exploit chain delivery that utilizes content. In both, social engineering plays a big part in the attack preparation where content is derived and delivered (often via email or SMS) in hopes of convincing a target to click on a link or provide personal information such as login credentials. The more convincing the generated content, the better chance of success.
ChatGPT: "Generate content, you say…"
If an LLM could be rapidly trained based on the social media profile and other open-source information about a person or organization, specific attacks could be automatically tailored at the individual level. This would be much more effective than general, reusable content that is typically part of a broader campaign (I’m making some assumptions here). The important thing to keep in mind, however, is that while the quality of the attack could be greatly improved, that is not a transformative event in itself because of all the other security mechanisms that get in the way, such as email filters and DNS firewalls. Such an improvement would be an AI-assisted increase in probable attack success. To summarize: will there be a rise of truly top-to-bottom AI-driven malware campaigns? Absolutely not anytime soon. Will there be AI-assisted malware campaigns that have a content-enabled component? Likely.
Back to BlackMamba vs EDR
I have many questions about the EDR the BlackMamba team tested with:
- Was the EDR instance on a virtual machine or bare metal host, and did that host have a debugger running? Some EDRs will put certain capabilities in dormancy if a debugger is detected.
- Was the EDR instance licensed or a trial version? Trial versions will sometimes have a limited set of capabilities versus a fully licensed one, since vendors suspect malware authors test against trial versions of EDR software.
- Fully licensed or not, what level of capability was activated with the EDR instance? Some vendors have an upsell model that doesn’t give you the Full Monty unless you pay Full Monty price.
- Was the EDR instance fully updated with the latest EDR rules and signatures? The latter wouldn’t be relevant unless the EDR only blocks based on signatures - but then it wouldn’t be industry-leading, would it?
- Was the EDR managed or unmanaged? A big part of EDR nobody wants to acknowledge is that both human analysts and offline data analysis looking for anomalies (e.g. Least Frequency Analysis; see the toy sketch after this list) are still required to make sense of events and logs sent to a central location. Not all EDR visibility manifests itself as direct on-host feedback.
- Was the sample run from an admin command prompt that was obtained by the user giving the UAC prompt the thumbs up? Some EDRs will treat perceived actions by a legitimate user differently than unexpected execution, potentially from an exploitation source. Additionally, the EDR may have interpreted the polymorphic step as a legitimate Python developer at the console. That could be a sneaky EDR hole, but it wouldn't work with a properly configured EDR. The observed success of this attack seems like it has more to do with taking advantage of pre-established trust than it does with the success of any AI value-add.
- What was the observed EDR behavior when the original, hand-written, keylogger code was used? Was this tested?
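As a quick aside on the Least Frequency Analysis point above, here's a toy sketch of the idea with made-up data: count how many distinct hosts each executed binary has been seen on across a fleet, and surface the rarest for an analyst. Real implementations work over far richer telemetry, but the principle is the same.

```python
# Made-up telemetry: (hostname, sha256 of an executed binary) pairs.
events = [
    ("host-01", "aaa111"), ("host-02", "aaa111"), ("host-03", "aaa111"),
    ("host-01", "bbb222"), ("host-02", "bbb222"),
    ("host-07", "ccc333"),  # only ever seen on one host - worth a look
]

def least_frequent(events, max_hosts=1):
    """Return binaries seen on at most `max_hosts` distinct hosts."""
    hosts_per_binary = {}
    for host, digest in events:
        hosts_per_binary.setdefault(digest, set()).add(host)
    return {digest: sorted(hosts) for digest, hosts in hosts_per_binary.items()
            if len(hosts) <= max_hosts}

print(least_frequent(events))  # {'ccc333': ['host-07']}
```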
There are many unmentioned variables in the EDR configuration, and only one EDR was tested. That's a definitively insufficient test case to claim that "malware like BlackMamba is virtually undetectable by today's predictive security solutions", or even that this has a future in malware.
I don't have evidence to support this gut feeling, but it sounds like the EDR they tested with either doesn't recognize the keylogger technique (which has nothing to do with whether or not malware is polymorphic), was not fully updated (EDR rules, signatures, etc.), was not fully licensed, or the additional telemetry typically pushed to an EDR server was not taken into consideration.
The BlackMamba research team does not indicate which EDR they tested with, just that it was “industry leading EDR”. But what I can say is that the results of their test do not lead me to the same conclusions – not even remotely close. I think a more accurate conclusion is that they implemented host-side polymorphism that would defeat static scanning – the baseline of endpoint agent technology from 20 years ago. Even the most basic EDRs, in a truly operational case, would detect or block the following BlackMamba actions or characteristics as described by the research team:
- Network activity around the ChatGPT malicious code retrieval in general, and DNS activity around ChatGPT. It wouldn’t be a definitive indicator, but would be noteworthy.
- Network activity by the Python interpreter (if the malware is run via Python) that deviates from previously observed patterns (a toy sketch of this kind of baseline check follows this list).
- Network activity by an unsigned binary (if the malware is a binary Python executable).
- Installation of the keylogger technique that was used, assuming the EDR detects the technique.
- Generally, the proposed operational scenarios completely ignore the hard parts of malware installation and execution, all of which would be big red flags to an EDR, adding context and confidence that the software is malicious. The success found in the lab does not mirror real-life malware installation and a successful attack, and therefore should not be glossed over or ignored as a research variable.
- All aspects of the auto-py-to-exe strategy would be detected in realistic operational usage. This brings the entire thesis of polymorphism back down to earth, where normal malware challenges such as code signing and reputation remain. It's also worth highlighting that relying on other approaches, such as C# in-memory assemblies and other JIT-compiled malware execution techniques, will still be detected. Switching from auto-py-to-exe to a different framework will not help or improve the approach.
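To illustrate the "deviates from previously observed patterns" point from the list above, here's a toy sketch with invented data: keep a per-process baseline of remote destinations it has historically contacted and flag anything new. A real EDR would correlate such an event with signing status, parent process, user context, and prevalence across the fleet before deciding anything.

```python
# Invented baseline: destinations each process image has previously contacted.
baseline = {
    "python.exe": {"pypi.org", "files.pythonhosted.org"},
    "chrome.exe": {"google.com", "gstatic.com"},
}

def is_novel_destination(process: str, destination: str) -> bool:
    """Return True if this process contacting this destination is new behavior."""
    return destination not in baseline.get(process, set())

# The Python interpreter suddenly talking to an LLM API endpoint is noteworthy;
# a browser visiting a domain it always visits is not.
print(is_novel_destination("python.exe", "api.openai.com"))  # True
print(is_novel_destination("chrome.exe", "google.com"))      # False
```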
Alright, I’m done being a Debbie Downer. But LLM-driven polymorphism is not a game-changing consideration for EDR vendors, and it does not make it “difficult for EDR to intervene”. It’s just a different way to obfuscate that is no different, from an outcome perspective, than non-AI based approaches. Anyone who has written intelligence-grade malware or been part of a world-class EDR team would draw this same conclusion.
Again, I do want to point out that the BlackMamba project, as a research project, is actually pretty damn cool. I’ve given it a hard time in this post, not because of the technical aspects, but because of its conclusions, claims, and subsequent reporting. Functioning host-side polymorphism via integrated AI, from a technical perspective, is one of the most interesting things I’ve read about in a while. Significantly more interesting than the constant stream of ChatGPT outputs one encounters on Twitter. I encourage them to keep going, see what else can be accomplished, and, if they can truly push EDR detection limitations, then they will push the entire industry to be better - not just EDR vendors.
Fortunately, the concept as it is today just doesn't change the world of EDR, nor does it necessitate an EDR evolution specific to on-host binary polymorphism. There are already mounds of malware source samples on GitHub, Metasploit, CodeProject.com, SourceForge, vx-underground, and any university or college network. Malware variance, abundance, or on-host polymorphism isn't new as a concept (regardless of how it's being powered). It's a challenge that defensive endpoint agent authors have been dealing with for decades. This is why a common (and arguably best) first layer of endpoint agent defense is a zero-trust concept around what executes - i.e. is an executable file unsigned, new on the host, or never observed anywhere? In my opinion, the coolest part of BlackMamba is the LLM-driven polymorphism - however, it's also the most irrelevant when it comes to malware effectiveness and outcome.
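To make that zero-trust idea concrete, here's a rough, Windows-oriented sketch with a hypothetical local first-seen store (the file name and structure are my own invention): before letting something new run unchallenged, ask whether it carries a valid Authenticode signature and whether its hash has ever been seen on this host. A real agent adds fleet-wide prevalence, reputation lookups, and actual enforcement, but the questions are the same ones listed above.

```python
import hashlib
import json
import pathlib
import subprocess

SEEN_DB = pathlib.Path("first_seen.json")  # hypothetical local first-seen store

def sha256(path: str) -> str:
    return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

def has_valid_signature(path: str) -> bool:
    # Ask PowerShell's Get-AuthenticodeSignature; "Valid" means a trusted signature.
    out = subprocess.run(
        ["powershell", "-NoProfile", "-Command",
         f"(Get-AuthenticodeSignature -FilePath '{path}').Status"],
        capture_output=True, text=True)
    return out.stdout.strip() == "Valid"

def assess(path: str) -> dict:
    seen = set(json.loads(SEEN_DB.read_text())) if SEEN_DB.exists() else set()
    digest = sha256(path)
    verdict = {
        "sha256": digest,
        "signed": has_valid_signature(path),
        "seen_before_on_host": digest in seen,  # unsigned + never seen = challenge it
    }
    seen.add(digest)
    SEEN_DB.write_text(json.dumps(sorted(seen)))
    return verdict

print(assess(r"C:\Windows\System32\notepad.exe"))
```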
Lastly, with great power comes great responsibility. The hype and excitement around AI in the media, the VC space, and the world of entrepreneurship is completely over the top - it’s like blockchain and web3 took steroids and made a baby. The expectation that it will transform all areas of technology percolates through all industries, and there is a lack of balance or burden of proof. It is an exciting time for AI research and applications of AI, and anticipating what is possible is natural for all of us.
However, the last thing consumers, businesses, journalists, or analysts need to read is that AI will turn malware into unstoppable T-800s on our networks because existing defensive approaches are about to be rendered useless - especially when it’s not true.