
Angles of ATT&CK

Squid Game

Image: Netflix

 

One of the things I like most about the MITRE ATT&CK framework is how well it highlights attackers' pervasive use of living-off-the-land techniques.

Those of us who investigated nation-state APT groups in the early days saw this trend play out in real time: as defenders got better at quickly detecting custom malware, attackers did the cost/benefit analysis and realized they could accomplish the mission at lower cost and with less risk by using built-in Windows, Linux, and macOS tools. That change in approach led to an arms race around weaponizing operating system tools – living off the land – that is still underway.

The downside of the ATT&CK framework is that not all techniques are created equal. The overwhelming majority are what I sometimes call dual-use techniques: they describe tools or behaviors just as likely to be used by legitimate users with legitimate access as by attackers trying to infiltrate your enterprise. From the defender’s perspective, understanding which techniques correlate strongly with malicious activity and which do not is key to planning a log ingestion and detection strategy. This article explores how the relationship between the attacker’s desire to not get caught moving and the defender’s ability (or inability) to distinguish legitimate from illegitimate use of those techniques plays out in the defense of a network.

 

Red Light, Green Light

{Minor spoiler alert – discusses a small plot point from Squid Game}

In the popular Netflix TV series Squid Game, destitute contestants compete in deadly versions of children’s games to try to win a large sum of money. The first game - Red Light, Green Light - uses a larger-than-life porcelain doll, with Terminator-like cameras for eyes, as the caller. Players can move toward the goal while its head is turned away, but if it sees anyone move when it turns back around, that player is removed from the game in dramatic and gory fashion. The doll’s eyes use vision-based motion detection, and the players have to figure out how to abuse that fact if they want to live and move on to the next round. Sound familiar?

In the world of SecOps, defenders are the giant menacing doll. Attackers are the players. If the defender sees the attacker moving, they get eradicated from the network, albeit in much less dramatic fashion than in the TV show. So, the game from the attacker’s perspective is to get to the finish line without being seen. In Squid Game, smart players figured out that they could cover their mouths so the doll couldn’t see them coordinating with each other verbally and that a still player in the front of a line could hide moving players behind them. Likewise, in our never-ending SecOps arms race, attackers have figured out that using custom tools makes them easier to detect (and to attribute attacks to them), but living off the land lets them hide in plain sight from security tools.

 

Building a Better Doll

When MITRE ATT&CK launched in 2015, it provided a shared way of talking about the problem that was previously only understood by those who were “in the know.” I love taxonomies and ATT&CK is a fantastic one. Over time it has grown beyond the original enterprise matrix to include mobile, cloud, and industrial control systems (ICS) techniques, as well as adding detailed information about data sources, threat actor groups, malware, and attack campaigns. It might just be my favorite artifact of the security industry.

The rest of the industry seems to agree. Every security-relevant product or service now has some sort of ATT&CK tagging or mapping feature to tie its outcomes back to the model everyone understands. A side effect is that every major security tool now looks for behaviors instead of indicators of compromise (IOCs). That matters because while it is trivial for attackers to change the tools or configuration settings an IOC would match, it is much more difficult to avoid using ATT&CK behaviors and still complete the mission. It’s the Squid Game equivalent of adding infrared, X-ray vision, and voice-isolating microphones to the doll so it can catch those sneaky players.

So, for the first time in the history of security, we had a robust taxonomy of attacker behavior and an entire industry enthusiastically supporting a single model. It’s a perfect storm. What could possibly go wrong?

 

The Doll is Colorblind

A lot, as it turns out – and none of it is MITRE’s fault. Most of the problems with ATT&CK come from how we as security professionals have tried to use it to solve problems it was never intended to solve (my favorite is MITRE Bingo). The most glaring gap for me is that the model gives no guidance on how closely any given technique correlates with attack activity. In a perfect world, each technique would carry a red, yellow, or green label to denote high, medium, or low correlation with attack activity. Instead, they are all treated the same in the eyes of the model. Our SecOps doll is effectively colorblind.

 

Examples

To illustrate the problem, let’s look at a couple of examples.

 

Boot or Logon Initialization Scripts

Boot or Logon Initialization Scripts (T1037) is a collection of sub-techniques describing methods for launching scripts and applications when a system boots or a user logs on. Attackers use this technique all the time to survive a reboot of a system they have compromised, so you should definitely pay attention to what runs at boot or logon. However, system administrators use the same mechanisms for a variety of legitimate purposes, including launching kiosks in the correct state after a reboot, starting background processes critical to keeping the business running, and loading endpoint security tools as early as possible in the boot cycle.
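To make the dual-use problem concrete, here is a minimal sketch of what a defender-side check for one Windows flavor of this technique might look like. It only reads the UserInitMprLogonScript registry value associated with T1037.001 (Logon Script, Windows); the output format and the idea of reviewing each hit by hand are illustrative assumptions, not a production detection.

```python
# Minimal sketch: read the Windows logon-script registry value associated with
# T1037.001 (Boot or Logon Initialization Scripts: Logon Script (Windows)).
# Assumes it runs on a Windows host; real coverage would also include domain
# GPO logon scripts and the macOS/Linux sub-techniques.
import winreg

LOCATIONS = [
    (winreg.HKEY_CURRENT_USER, r"Environment", "UserInitMprLogonScript"),
]

def check_logon_scripts():
    findings = []
    for hive, subkey, value_name in LOCATIONS:
        try:
            with winreg.OpenKey(hive, subkey) as key:
                value, _type = winreg.QueryValueEx(key, value_name)
                findings.append((subkey, value_name, value))
        except FileNotFoundError:
            # Value not set -- nothing configured to run at logon via this mechanism.
            continue
    return findings

if __name__ == "__main__":
    for subkey, name, value in check_logon_scripts():
        # The hit alone cannot tell you whether this is an admin's kiosk script
        # or persistence -- which is exactly the dual-use problem.
        print(f"{subkey}\\{name} = {value}")
```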

 

Hide Artifacts: NTFS File Attributes

My colleagues recently published an article about how attackers are using Alternate Data Streams (ADS) to hide data, scripts, and malware from defenders. It’s part of sub-technique NTFS File Attributes (T1564.004), which is also used by Windows to ensure compatibility with Apple’s HFS and to store metadata for enterprise tools including backup systems.
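For readers who have not run into Alternate Data Streams before, the sketch below shows the mechanism in a few lines of Python. It assumes a Windows host with an NTFS volume; the file name and stream name are made up for illustration, and the point is simply that the hidden stream affects neither the apparent file size nor a normal directory listing.

```python
# Minimal sketch of the NTFS Alternate Data Stream mechanism behind T1564.004.
# Assumes a Windows host with an NTFS volume; "demo.txt" and "payload" are
# made-up names. The same colon syntax serves both hiding (attacker) and
# inspecting (defender) a named stream.
import os

path = "demo.txt"

# A normal file with ordinary content.
with open(path, "w") as f:
    f.write("quarterly report draft\n")

# Data written to a named stream ("demo.txt:payload") does not show up in the
# file's apparent size or in a default directory listing.
with open(path + ":payload", "w") as f:
    f.write("this could be a script or staged data\n")

print("apparent size:", os.path.getsize(path), "bytes")  # counts only the main stream

# Reading the stream back requires knowing (or enumerating) its name,
# e.g. via `dir /r` or PowerShell's Get-Item -Stream * on the defender side.
with open(path + ":payload") as f:
    print("hidden stream contents:", f.read())
```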

As I mentioned earlier, almost every product and service can log ATT&CK techniques, which is great. The unintended consequence is that the overwhelming majority of alerts generated for these behaviors simply flag legitimate activity; the false positive rate for these alert streams is exceptionally high. The industry has effectively created a high-tech doll that can see through solid objects but can’t pick out the greenish uniforms from the reddish dirt of the playing field. Too many players are making it to the next round. That’s great for a serial TV drama, but bad for network defenders.

 

Greenfield Solution Space

The most obvious solution is for someone to go through the entire ATT&CK taxonomy, annotating each technique and sub-technique with a score based on how strongly it correlates with attack activity. A first pass could even be done in a reasonable amount of time. However, the attacker landscape changes constantly, so this would become a maintenance nightmare for whoever takes it on.

The vendor space seems to have settled on correlation rules and risk-based analytics (RBA) as viable solutions. They do a much better job than raw alerts but can still present a high authorship and maintenance burden for SecOps teams because they are sensitive to what normal looks like in any given enterprise.
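As a rough illustration of the RBA idea, the sketch below accumulates per-host risk from individual technique observations and escalates only when the combined score crosses a threshold. The per-technique weights, threshold, and host names are assumptions made up for this example; real implementations tune these values to what normal looks like in the environment, which is exactly where the authorship and maintenance burden comes from.

```python
# Minimal sketch of risk-based alerting (RBA): instead of firing on every
# dual-use technique observation, accumulate per-entity risk and alert only
# when the combined score crosses a threshold. Weights and threshold are
# illustrative assumptions.
from collections import defaultdict

RISK_WEIGHTS = {"T1037": 10, "T1564.004": 15, "T1059.001": 5}  # assumed weights
ALERT_THRESHOLD = 25

def score_events(events):
    """events: iterable of (host, technique_id) observations from security tooling."""
    risk = defaultdict(int)
    alerts = []
    for host, technique in events:
        risk[host] += RISK_WEIGHTS.get(technique, 1)
        if risk[host] >= ALERT_THRESHOLD:
            alerts.append((host, risk[host]))
            risk[host] = 0  # reset after escalation so the host can trip again later
    return alerts

# Example: a host quietly stacking several dual-use behaviors stands out,
# while a single logon-script change on another host does not.
print(score_events([("hr-laptop-07", "T1037"),
                    ("hr-laptop-07", "T1564.004"),
                    ("hr-laptop-07", "T1059.001"),
                    ("build-server-02", "T1037")]))
```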

For our MDR clients, we use prevalence as a shortcut to that red/yellow/green tagging. As our threat researchers monitor the landscape, we calculate how frequently each technique is being used in current attack campaigns, and the risk score for a given technique rises or falls with that prevalence. This process also carries a significant maintenance burden, but it is easier than the “obvious” solution because it is a data-driven model that can be calculated through automated processes.
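A minimal sketch of that prevalence calculation might look like the following. The campaign data, cutoffs, and red/yellow/green mapping are illustrative assumptions rather than our production model; the point is that the score can be recomputed automatically as the campaign feed changes.

```python
# Minimal sketch of prevalence-based technique scoring, assuming a feed of
# recent campaigns tagged with the ATT&CK techniques they used. The cutoffs
# and example campaigns are illustrative assumptions.
from collections import Counter

def technique_prevalence(campaigns):
    """campaigns: list of sets of technique IDs observed in recent attack campaigns."""
    counts = Counter(t for techniques in campaigns for t in techniques)
    total = len(campaigns)
    return {tech: count / total for tech, count in counts.items()}

def color_for(prevalence, high=0.5, medium=0.2):
    """Map a prevalence ratio to the red/yellow/green labels discussed above."""
    if prevalence >= high:
        return "red"      # heavily used in current campaigns -> weight alerts up
    if prevalence >= medium:
        return "yellow"
    return "green"        # rarely seen in the wild right now -> weight alerts down

if __name__ == "__main__":
    recent_campaigns = [
        {"T1037", "T1564.004", "T1059.001"},
        {"T1059.001", "T1021.002"},
        {"T1059.001", "T1564.004"},
    ]
    for tech, prev in sorted(technique_prevalence(recent_campaigns).items()):
        print(f"{tech}: prevalence={prev:.2f} -> {color_for(prev)}")
```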

Generative AI will likely provide a better solution in the near future, especially with the current rise of agentic AI models capable of using emergent logical processes to make decisions and perform risk calculations. I don’t know what that solution will look like yet, but I am excited at the possibilities.

 

Moving on to Round Two

It’s clear that, despite significant progress, our industry still hasn’t solved the living-off-the-land problem. The good news? We now have a common model in ATT&CK, backed by widespread industry support. The bad news? We still have a long way to go in using that model to reliably identify and stop attackers. Too many players are still making it to the next round. Our doll has all the sensors it needs — now it needs the intelligence and reasoning to make sense of what it sees, and to keep the attackers on their toes.

 

 

 

 


Ready to strengthen your organization's security posture?

 

For more cybersecurity insights, follow Cyderes on LinkedIn and X.