What to Know, How to Prevent: menuPass
Guest: Jose Barajas
EPISODE 9: THINK BAD, DO GOOD
Jonathan Reiber, Senior Director for Cybersecurity Strategy and Policy, AttackIQ
In this episode, Jose Barajas and Jonathan Reiber discuss MITRE Engenuity’s Center for Threat-Informed Defense and AttackIQ’s emulation plan for menuPass. This plan enables defenders to replicate tactics and techniques used by menuPass, a cyber threat actor that has been active since 2006 and whose goals are aligned with the People’s Republic of China’s Five-Year Plan. Members of the group have, according to MITRE ATT&CK, worked in association with the Chinese Ministry of State Security (MSS).
What has been their impact? menuPass is responsible for global intellectual property theft in at least 12 countries. The group has targeted companies within the healthcare, defense, aerospace, and government sectors, with emphasis since 2014 on Japanese victims. As MITRE ATT&CK describes the group’s behavior, “menuPass leveraged its unauthorized access to these managed service providers’ networks to pivot into subscriber networks and steal information from organizations in banking and finance, telecommunications, healthcare, manufacturing, consulting, biotechnology, automotive, and energy.”
In this podcast, you will see and hear about how AttackIQ incorporates MITRE Engenuity’s Center for Threat-Informed Defense’s emulation plan into the Security Optimization Platform to automate the tactics, techniques and procedures used by menuPass. This allows AttackIQ customers to run the emulation plan against their existing and planned security controls to validate their effectiveness and improve their performance against the group. The Security Optimization Platform then provides detailed gap analysis and remediation reports.
About the Center for Threat-Informed Defense
The Center is a non-profit, privately funded research and development organization operated by MITRE Engenuity. The Center’s mission is to advance the state of the art and the state of the practice in threat-informed defense globally. Comprised of participant organizations from around the globe with highly sophisticated security teams, the Center builds on MITRE ATT&CK®, an important foundation for threat-informed defense used by security teams and vendors in their enterprise security operations. Because the Center operates for the public good, outputs of its research and development are available publicly and for the benefit of all.
Jose is AttackIQ’s Technical Director of North America, as well as a malware researcher with over a decade of experience. He works directly with the Center for Threat-Informed Defense to build on the foundation of MITRE ATT&CK in order to improve cyber defense.
Jonathan Reiber (00:00):
Hello, everyone, and welcome to a very short version of Think Bad, Do Good, where today we’re going to talk about the menuPass adversary emulation that we’ve built at AttackIQ, building off research done by MITRE Engenuity’s Center for Threat-Informed Defense. And we have our illustrious guest, Jose Barajas. Hey, Jose.
Jose Barajas (00:21):
Hey Jonathan. Good to be back. It’s been a while.
It has, in fact, we were just talking. It’s been almost a year since our first episode.
It’s been almost a year?
Mm-hmm (affirmative). Mm-hmm (affirmative). I thought I had been on more of these, but I guess I was wrong.
Yes. Well, in the future, maybe we’ll sign you up for once a week. How about that?
So the last time we were on, Jose, maybe your plants were like this much of the screen. But before we go into the emulation, can you tell us a little bit about your plants?
Yeah. Not much has changed. They’re still there. They got a little brown. I kind of didn’t water them as much as I should have, but I’ve been giving them more loving care lately, so they’re getting better.
Yeah, that’s good. So we have a malware researcher who obviously has the attention to keep plants alive, which, for all of us, speaks well of his attention to detail, which is good, and his commitment to preserving life. So congrats on keeping your plants alive. Good. Okay, so let’s dive in. So today we’re going to talk about menuPass. Do you want to start by telling us a little bit about the Center for Threat-Informed Defense’s research?
Yeah, absolutely. So the Center for Threat-Informed Defense has been building upon its initial adversary emulation plans. I think that was one of the first things we talked about; at that time, it was FIN6. They built on the work they’d already done for APT29 and APT3 as other projects they’ve completed. Most recently, menuPass was the target, and at this point the plan has been released, essentially emulating known attacker behavior specific to the menuPass group.
Okay, cool. And we are participants in the Center for Threat-Informed Defense and we’re sponsors of its research, but we were not directly involved in this particular project. Is that right?
That’s correct. Yeah. We shared our cyber range, so some folks could test there, but we didn’t get to directly contribute to this one. There are still other projects out there, and we’ll continue to contribute as those continue.
Cool. Can you bring up the project plan so folks who are watching can see what it looks like?
Yeah, let’s take a look.
See my screen, I take it?
Yep. We got you.
Awesome. Yeah, so here we have it: menuPass, the adversary emulation plan. Those of you who may have taken the course where we introduced the concept of emulation plans and how they’re structured will be quite familiar with this. What you’re used to is found here, from a quick description of who this actor is and who they’re targeting, to the attack flows themselves. So we can take a look at what the adversary is doing here from an emulation plan perspective.
That’s great. Now menuPass is a criminal group based in China, and they’ve targeted a whole bunch of actors and across sectors. Can you talk a little bit about some of their strategic goals? What they’re after?
Yeah. What was interesting about this actor was that they didn’t necessarily go directly for the target in this case.
What they actually did is target different organizations that were servicing their target, in this case third-party MSSPs, for example, that are typically provided access via RDP or other such means. So by compromising this third party, other organizations are compromised. Meaning once an MSSP was targeted and compromised, the RDP credentials and access it had into the environments it was supposed to protect became the way the attacker got in. In this case, that was the primary method the research found was leveraged here.
Great. I think it’s important to note here that their objectives, as you can see in the second sentence here, are aligned to the People’s Republic of China’s Five-Year Plan and the strategic objectives within it. So they may be separate and distinct from the government, but their objectives are aligned to those of the People’s Republic. Great. So why don’t we pull this down, Jose, and you can walk us through the capabilities that we’ve got. So maybe over to the AttackIQ platform. A brief description for those who aren’t familiar with AttackIQ: you’re on our website, so we hope that you are, but we build adversary assessments and adversary emulation plans to run scenarios against your cybersecurity controls to test their effectiveness. And that’s why we use the MITRE ATT&CK framework. We work in very close partnership with MITRE ATT&CK and MITRE Engenuity, as we’ve just talked about. And here we’ve got the Anatomic Engine, which is multi-stage comprehensive attack emulation. And so that’s what we’re doing here with [inaudible 00:05:16] classes. At least it’s my understanding. Jose, correct me if I’m wrong.
No, I think you explained it well, Jonathan. Essentially, we’ve given our customers the ability to take individual MITRE TTPs and include them as part of what we call an attack graph, which comes from a DAG specification, for those who are familiar with that, essentially describing what the attacker would do, in what order, and what decisions they would make or alternative procedures they would leverage at any stage of the attack.
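The attack-graph idea Jose describes, a DAG of techniques with an order of execution and alternative branches, can be sketched roughly as follows. This is an illustrative model only; the technique names and the class are hypothetical and not AttackIQ’s actual specification:

```python
# Minimal sketch of an attack graph as a DAG: each node is a technique,
# and edges define which techniques the emulated adversary attempts next.
from collections import defaultdict

class AttackGraph:
    def __init__(self):
        self.edges = defaultdict(list)  # technique -> next techniques

    def add_step(self, technique, next_technique):
        self.edges[technique].append(next_technique)

    def walk(self, start):
        """Yield techniques in execution order (depth-first)."""
        stack, seen = [start], set()
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            yield node
            # Push alternatives in reverse so the first-added branch runs first
            stack.extend(reversed(self.edges[node]))

graph = AttackGraph()
graph.add_step("discovery", "lateral_movement_paexec")
graph.add_step("discovery", "lateral_movement_rdp")   # alternative branch
graph.add_step("lateral_movement_paexec", "exfiltration")
graph.add_step("lateral_movement_rdp", "exfiltration")

print(list(graph.walk("discovery")))
```

The point of the sketch is only that alternative procedures hang off the same parent node, so the emulation can branch when one procedure is blocked.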
That’s awesome. But can you zoom in at all and show us what we have going on here?
Yeah, absolutely. So what we have going on here, essentially, is that we want to capture what the adversary emulation plans are describing. MITRE has again done a lot of work here, along with other Center participants, to put together these plans to help analysts follow through step by step and see how their environments would respond. So what we’ve done here is use our framework to replicate, step by step, exactly what they’ve described. And as you can see, for those of you following along, you can take a look at the adversary emulation plan focused on QuickBook here, scenario one, and we can see step by step how we’re going to recreate these actions, right? All the way from the initial point to the end. Let’s take a look here.
Cool. That’s awesome.
Now, in terms of these red and green lines: a red line represents when the attack, or that specific behavior, was not prevented. Green represents when it was actually stopped or prevented.
One of the first things the attacker does is initiate a bunch of different discovery behaviors. So the two lines moving forward basically mean that we’re going to try these regardless. Just like the attacker, who is going to try a variety of different actions, maybe three out of five, four out of five, or all of them will be successful, allowing the attacker to find a number of different contexts as part of that process. So we’ve gone ahead and done that all the way up until we get to a decision point from the perspective of the attacker. Now that they’ve tried to collect as much information as possible, they’re trying to set themselves up for persistence, and we capture that here and recreate it to tell you what path would have been possible, assuming this behavior occurred in your environment. We can talk a little bit more about that, but I want to see if you have any questions.
That’s awesome. So basically, you would just assume that the adversary is going to move through this chain until they can break through, or they’re just going to keep trying. And the Anatomic Engine allows you to string all the ATT&CK tactics together to run this campaign. And then the red line shows you when it was not prevented or detected, and the green line shows you when it was prevented. Is that right?
Correct. That’s exactly right. And at this point, if we follow the adversary emulation plan and look at this graph, the attacker’s intent so far has really been to collect a lot of information, as I’ve just talked about. At this point, before the attacker moves on and maybe risks getting caught as they’re exfiltrating the data, they’re going to try some form of persistence or lateral movement, and we can see those actions being captured here. Essentially, the attacker will first try lateral movement through, as you can see, PAExec, a known methodology as described by MITRE. If that’s not successful, we’re going to try it via the remote desktop protocol, as you can see here. If we’re not successful there, we can attempt these two alternative procedures to achieve some degree of persistence.
By going through this process and quantifying essentially the same steps that the attacker would have taken, the idea is that we can help our customers understand what would have been possible in their environment, what policy changes could be made, and actually showcase what that impact would be, meaning: by carrying out this policy change, the attacker now gets stopped back here, versus being allowed to attempt all these different methodologies. And that’s exactly what we want to help our customers understand.
That is awesome. And you can see how a number of different defense capabilities could be tested against this scenario, right? Anyone who has a defense capability to prevent lateral movement: we would run this scenario against your defense capability, and our purpose then is to validate whether it works. That’s what we do at AttackIQ, but the really neat thing is that this is an automated chain. So let me ask you: each one, if it fails, does it then automatically move to the next one?
Yeah. That’s how we’ve actually set it up. In this case, if there’s no way to actually move laterally, then the attack is going to get stopped at that point. Once the attacker runs out of alternative procedures, essentially we’re going to say, “Hey, these are the five ways the attacker would have attempted to accomplish this phase, and we’ve been able to successfully prevent all five of them.” If we’re able to get to that point, we will highlight to the customer that we’ve stopped it here.
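The fallback behavior Jose describes, trying each alternative procedure in order and stopping the chain only when every alternative is prevented, could be modeled like this. The procedure names and the prevented/not-prevented results are hypothetical illustrations, not output from the platform:

```python
def attempt_phase(procedures, controls):
    """Try each alternative procedure in order; return the first one the
    controls fail to prevent, or None if every attempt was stopped."""
    for proc in procedures:
        prevented = controls.get(proc, False)  # True = control blocked it
        if not prevented:
            return proc  # attacker succeeds via this procedure; chain continues
    return None  # all alternatives prevented: the chain stops here

# Hypothetical results from the environment's security controls
controls = {
    "lateral_movement_paexec": True,       # blocked (green)
    "lateral_movement_rdp": True,          # blocked (green)
    "persistence_scheduled_task": False,   # not blocked (red)
}

procedures = [
    "lateral_movement_paexec",
    "lateral_movement_rdp",
    "persistence_scheduled_task",
]

result = attempt_phase(procedures, controls)
print(result)  # the chain continues via the one unprevented procedure
```

If every entry in `controls` were `True`, the function would return `None`, which corresponds to the "we've stopped it here" outcome Jose describes.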
But as you can see, we can continue on. If we don’t stop it at the lateral movement piece, then maybe at the staging or exfiltration piece we can verify that those attempts, assuming the attacker got this far, are also being identified and managed as expected.
That is really neat. And can you talk a little bit about the capability we’ve built within the Anatomic Engine to test AI, artificial intelligence, and machine learning based cyber defense technologies? Have we seen anything in here that focuses on that particular aspect? Actually, I don’t know, so it’s a legitimate question on my part. I’m not… [crosstalk 00:11:05] I’m not just leading you here.
No, it’s definitely a legitimate question. There are a number of technologies in our space that definitely talk about ML and AI, and a very simple application of that is building reputation, right? So as we’re doing this behavior, maybe a simple discovery here, a simple discovery there, may not be something you should be alerted on. It may not be something that should be detected, because that’s just going to create the alert fatigue we’ve been dealing with year over year. But if we notice a lot of behavior, all occurring on the same machine, and especially behavior that typically doesn’t occur, we expect these solutions to have that reputation build up, identify that something suspicious is happening, and essentially ring the alarm. And by invoking it this way, we move from just running individual unit tests of our control capabilities to seeing how they perform against a larger set of behaviors, or in this case the adversary emulation plan for menuPass.
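The reputation idea Jose describes, where individually benign discovery behaviors accumulate per host until their volume becomes suspicious, can be sketched as a simple per-host counter with a threshold. The threshold value, host names, and event list are all hypothetical, and real AI/ML-based detection is far more sophisticated than this sketch:

```python
from collections import Counter

THRESHOLD = 3  # hypothetical: alert when one host accumulates this many events

def score_events(events):
    """events: list of (host, behavior) pairs. Individually benign discovery
    behaviors build up per-host 'reputation'; hosts past the threshold are
    flagged rather than alerting on every single event."""
    counts = Counter(host for host, _ in events)
    return {host: n for host, n in counts.items() if n >= THRESHOLD}

# Hypothetical telemetry: many discovery commands on host-a, one on host-b
events = [
    ("host-a", "whoami"),
    ("host-a", "net view"),
    ("host-a", "ipconfig"),
    ("host-a", "systeminfo"),
    ("host-b", "whoami"),
]

print(score_events(events))  # only host-a crosses the threshold
```

A multi-stage emulation exercises exactly this kind of logic: single scenarios stay under the threshold, while the full chain of behaviors on one machine should trip it.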
That is a wonderful statement of what our new Anatomic architecture does, right? We’ve moved from atomic testing, which is a single point-in-time test (for a long time at AttackIQ we ran these kinds of atomic tests, single ones), into the anatomic testing process. Because the adversary is behaving in this comprehensive manner and trying all these different behaviors, you can tip off the analysis within an AI/ML-based defense capability.
So it’s really interesting.
Correct. And what I will say, to clarify a little bit, is that we’ve always been able to chain behavior in this way. All of these were individual Python packages. We’re an open framework here at AttackIQ, so our customers were able to accomplish this in Python, chain these types of attacks together, and recreate this. What we’ve done is abstract our customers from that to make it easier. Easier for us and easier for them to capture these behaviors and codify them, so that we can spend our time focused on finding the gaps and addressing those, versus building out additional things. Really, speeding up the ability for our customers and ourselves to more quickly and rapidly emulate these behaviors in very fine detail is the ideal.
So in other words, we’re automating the work now for them in a way that they used to have to do it themselves previously.
Yeah. That’s awesome. This will help the customers quite a bit. That’s terrific. And this is obviously just one emulation plan that we have.
Can you name some other multi-stage emulation plans that we have?
Yeah, absolutely. So definitely the work that we did: I was directly involved in the FIN6 emulation plans, for example. We’ve codified that in the product. We’ve done the same with the work that the Center has published around APT29 as well. And of course we’re developing other threat actors as we see fit and as we observe them in the wild.
Yeah, that’s awesome. Well, that’s very interesting Jose, and you can stop sharing your screen so we can see your face.
Yeah. I learned an immense amount from you every time I get to talk to you. It’s awesome.
And we got to do more of this. This is really, really edifying. So for anyone who has questions for Jose, you can email us at [email protected] and just say question for Jose and ask him there and he’ll field it. And we’ll have you on again here very soon I think. Let’s do another one to walk through more of the anatomic engine and see how it emulates the adversary. Really appreciate it.
Thanks, Jonathan. Appreciate it. Thanks for having me.
Anything else you want to add before we go?
No, not at this time. I think Academy hit one year this week; I think that’s something notable to mention. I really appreciate all the folks that have been attending those [inaudible 00:15:01]. It’s been great to hear the feedback, and I’m glad that as a company we’re giving back, so I just want to call that out, I guess.
Yeah, that’s true. Thank you, Keith. Keith Wilson is our director for Academy; remarkable job. Jose, actually, that’s a good point. The first podcast you and I did was about a year ago, and then we did FIN6, which was about, I don’t know, five months ago, so it’s not been a whole year.
Good point. Yeah. Good. Thanks man. Really appreciate it. Tune in again soon, everyone. Bye.
Take care, everyone. Bye.