Small Language Models Against Ransomware:

Ransomware as a Language Problem

When we talk about the generative AI that powers UpSight Security, we are often asked the obvious question: what is the performance impact of running a generative AI language model on an endpoint computer?  This is an entirely reasonable question, given that most people's experience with generative AI language models is with the 'Large Language Model' or "LLM", like OpenAI's ChatGPT.  UpSight is different, and really this is the 'inventive step' that makes it all work - the 'Small Language Model', or as we've started to call it, an "SLM"... and we occasionally use that in a sentence like 'we have a slim model'.

SLMs punch above their size!

So what makes our model so slim?  The MITRE ATT&CK(™) framework is the quick answer. We know that modern ransomware attacks consist of different stages/tactics, usually starting with initial access and ending with exfiltration and/or encryption for impact, and that the overall attack progression can be broken into smaller atomic steps as designated by the MITRE ATT&CK(™) Enterprise Matrix. UpSight starts with the realization that all attacks can be read as traversals across the MITRE ATT&CK(™) framework in a left-to-right fashion, which sort of resemble short stories about an attack.  In fact, some of our friends at Red Canary have taken the LLM approach - Using Gen AI to Improve SecOps - to give you a full English-language description of an attack once it is detected.

But UpSight's SLM takes this in a different direction: we ask that you squint your eyes a bit and look at each technique-level box within MITRE ATT&CK as a 'word' in a language lexicon. There is also a semblance of a grammar and syntax, in fact with some advanced spoken-language features, like words that change meaning depending on what came before them, or words that can only appear at the start, middle, or end of a sentence.  The UpSight client is in fact primarily designed to construct what we call 'attack sentences' from the firehose of raw system I/O-layer events, in a highly optimized and efficient manner.

UpSight is not strictly confined to MITRE ATT&CK(™), but it is a handy reference point, and we make an effort to relate what we define as a 'word' in our lexicon back to MITRE ATT&CK(™) whenever possible.  We also observe behaviors that are simply not tracked by MITRE ATT&CK(™) and have added those words to our internal lexicon.  We also internally introduce additional linguistic concepts akin to 'parts of speech' that can distinguish between techniques being performed by some active participant and techniques that are more past tense, i.e. the results of a technique - the difference between setting up a persistence technique and the process that results from that persistence technique, for instance.  The end result is in fact a 'small language' that can express ideas and concepts about attacks, like initial access or ransomware, in a sophisticated and accurate fashion, but that is also constrained by the size of its lexicon - a few thousand words at most - and by grammar and syntax rules, such that we can create a generative language model that is in fact 'small' and can be run comfortably on just about any computing device.
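To make the 'word in a lexicon' idea concrete, here is a minimal sketch in Python of how technique-level words could be mapped to token ids for a language model. The technique ids are real MITRE ATT&CK(™) identifiers (plus the UpSight-style T1036.100 extension discussed below), but the special tokens, the mapping, and the encode helper are purely illustrative and are not UpSight's actual implementation.

    # Minimal sketch only: a tiny "attack lexicon" mapping technique-level
    # words to integer token ids. The technique ids are real ATT&CK entries
    # (plus the vendor-style T1036.100 extension mentioned in this post),
    # but the structure here is illustrative, not UpSight's implementation.
    LEXICON = [
        "<pad>", "<start>", "<end>",   # special tokens, assumed for the sketch
        "T1547.001",   # Registry Run Keys / Startup Folder persistence
        "T1555.003",   # Credentials from Web Browsers
        "T1003.001",   # OS Credential Dumping: LSASS Memory
        "T1036.100",   # custom word: non-executable extension hiding an executable
    ]
    WORD_TO_ID = {word: i for i, word in enumerate(LEXICON)}

    def encode(attack_sentence):
        """Turn a space-separated attack sentence into a list of token ids."""
        return [WORD_TO_ID[w] for w in attack_sentence.split()]

    print(encode("T1547.001 T1555.003 T1003.001"))   # [3, 4, 5]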

Here is the attack sentence graph representation of an infostealer establishing persistence and trying to scrape credentials from web browsers and LSASS memory:

The graph above comprises the following attack sentences:

T1553.100 T1547.001 T1038.100 T1555.003 T1003.001

T1553.100 T1547.001 T1036.100 T1038.100 T1555.003 T1003.001

You can see that some of the UpSight-defined attack words, like T1036.100 (using a non-executable file extension to obscure an executable), are used in these attack sentences.  At each point where we 'extend' a sentence by adding a new attack word, the SLM is consulted and asked what it thinks the next words of the sentence might be.  This acts a bit like sentence auto-completion for MITRE ATT&CK(™), and our client is able to take action if the attacker carries out a predicted step.
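To illustrate what 'consulting the SLM' at each extension point might look like, here is a hedged sketch. The predict_next interface, the top-k logic, and the "interdict" decision are all hypothetical stand-ins meant only to show the auto-completion idea, not UpSight's real decision logic.

    import numpy as np

    # Hypothetical: attack words whose predicted arrival should trigger action.
    HIGH_RISK_WORDS = {"T1003.001"}   # e.g. LSASS credential dumping

    def on_sentence_extended(model, sentence_ids, id_to_word, top_k=3):
        """Sketch: called whenever a new attack word extends a sentence.

        `model.predict_next` is an assumed interface returning a probability
        for every word in the lexicon.
        """
        probs = model.predict_next(sentence_ids)
        top_ids = np.argsort(probs)[::-1][:top_k]       # most likely next words
        predicted = {id_to_word[i] for i in top_ids}
        if predicted & HIGH_RISK_WORDS:
            return "interdict"        # act before the predicted step happens
        return "keep_watching"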

UpSight's client is built from a sort of layer cake of models, and the SLM is the icing on top.  We only need to run the model when the lower layers, which are optimized for efficiently filtering out uninteresting events (i.e. ones that are not representative of words in our small-language lexicon), have identified a new attack word that is either starting or extending an attack sentence.  So not only is our model in fact 'slim', we only need to run it occasionally. UpSight makes use of the really great open-standard ONNX framework that is built into Windows 10+ for running AI models locally.  The ONNX runtime will dedicate the best available hardware resources towards running the model at runtime.
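As a rough illustration of what running such a model locally can look like, here is a sketch using the onnxruntime Python package. The model file name, the input tensor name, and the single-output assumption are made up for the example; on Windows the same ONNX model can instead be served by the built-in Windows ML stack.

    import numpy as np
    import onnxruntime as ort

    # Illustrative only: model path and tensor names are placeholders.
    session = ort.InferenceSession("attack_slm.onnx",
                                   providers=ort.get_available_providers())

    def next_word_probs(token_ids):
        x = np.array([token_ids], dtype=np.int64)       # batch of one sentence
        (probs,) = session.run(None, {"token_ids": x})  # assumes a single output
        return probs[0]                                 # probabilities over the lexicon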

Training

In order to train any NLP model we naturally need data, and not just any data: we need attack sentences that represent actual attacks, expressed using our language lexicon, grammar, and syntax rules. Since UpSight's thin client is capable of performing malware causal tracking in real time, collecting the data can "simply" be done by executing a lot of malware samples and collecting their corresponding causal chains as generated by our anti-ransomware client.  You can read more about this process in Part 1 of our UpScan blog series. At the time of this writing UpSight has a dataset with more than one million unique attack sequences! Needless to say, this attack dataset is one of our most valuable assets.

At UpSight we have created an (almost) automated pipeline for model training, which consists of the following steps (a rough sketch of the loop follows the list):

  • Detonate malware samples

  • Extract attack sentences and augment the existing dataset

  • Train a new model

  • Run automated True and False positive tests with the new model

  • Push the model to our “dog-food” release ring

  • Rinse & repeat
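Sketched as code, one iteration of that loop might look roughly like the following. Every callable here (detonate, extract, train, evaluate) and both thresholds are placeholders for internal stages, not a real UpSight API.

    def run_pipeline_iteration(samples, dataset, steps, release_ring):
        # `steps` bundles placeholder callables standing in for the internal
        # stages listed above; none of these names are UpSight's real API.
        for sample in samples:
            report = steps.detonate(sample)            # run the sample in a sandbox
            dataset.extend(steps.extract(report))      # augment the dataset with new attack sentences
        model = steps.train(dataset)                   # train a new candidate model
        tp_rate, fp_rate = steps.evaluate(model)       # automated true/false positive tests
        if tp_rate > 0.99 and fp_rate < 0.001:         # illustrative thresholds
            release_ring.push(model)                   # promote to the dog-food ring
        return model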

Actually, we are training what we call a zoo of models by varying different hyper-parameters across three basic model types:

  • LSTM (long short term memory)

  • Transformer (as implemented in GPT-2)

  • MLP (multi-layer perceptron)

The LSTM model has so far consistently outperformed the other two, but perhaps as our dataset keeps growing the transformers will come into their own. The model that we currently have deployed in production has fewer than 138k parameters!
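For a sense of scale, here is a PyTorch sketch of a next-word LSTM in that parameter range. The vocabulary size and layer widths are illustrative guesses chosen to land in the low hundreds of thousands of parameters; they are not UpSight's production hyper-parameters.

    import torch
    import torch.nn as nn

    class TinyAttackLM(nn.Module):
        """Illustrative next-attack-word LSTM; sizes are not UpSight's."""
        def __init__(self, vocab_size=1500, embed_dim=32, hidden_dim=48):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
            self.head = nn.Linear(hidden_dim, vocab_size)

        def forward(self, token_ids):                # (batch, seq) of word ids
            x = self.embed(token_ids)                # (batch, seq, embed_dim)
            out, _ = self.lstm(x)                    # (batch, seq, hidden_dim)
            return self.head(out[:, -1, :])          # logits for the next attack word

    model = TinyAttackLM()
    print(sum(p.numel() for p in model.parameters()))   # on the order of 10^5 parameters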

The End Result

Is UpSight’s SLM effective? (YES!)

UpSight.ai model-based predict, interdict, and evict against very active credential stealers

UpSight.ai model-based predict, interdict, and evict against very active ransomware payloads

We have verified the effectiveness of our product against the most prominent credential stealers and ransom payloads:

Credential Stealers

  • redline

  • raccoon

  • rusty

  • lummastealer

  • agent tesla

Ransom Payloads

As it turns out, our language model may be small, but it packs a really big punch!

Call to Action

If you want state-of-the-art protection from modern ransomware attacks, sign up and start using our product; it is free to use on up to 3 devices. Interested in deploying to your enterprise fleet? Contact us!

