
June 27, 2024

Recovering from a MITRE hangover

By Matt Holland

With contributions from Patrick Smith.

Don’t sleep on Field Effect.  We know what we are doing, and the best is yet to come.


Data from Google & PitchBook as of June 21, 2024.

Field Effect recently participated in Round 2 of the MITRE Engenuity ATT&CK Managed Services Evaluation.  There has been a considerable amount of bickering and debate on social media about the results, among both participants and skeptical cybersecurity experts.  One of our board members described it as Game of Thrones – pretty accurate, I think.

It doesn’t help that some participants clearly manipulate MITRE data to suit their marketing messages.  Some of the write-ups I’ve read are straight out of a movie: the manipulation is so obvious that you expect a plot twist halfway through.  Hats off to marketing creativity!

Considering this, I thought a different perspective might be helpful.  This post is a raw and transparent recap of our experience, the challenges we faced as an up-and-coming company, and an honest appraisal of our results.  I hope it will be useful to other cybersecurity companies thinking of participating in future evaluations, and to external viewers of the results who want to understand how those results can point to different characteristics of an MDR company.

Before I go any further, you may not have heard of us yet.  Field Effect is an MDR company focused on SMEs through partnerships (MSPs, VARs, etc.).  We are unique in that we have built a significant proprietary technology stack (endpoint, network, cloud, AI-powered analytics, etc.) over the past decade via bootstrapped funding.  We built this specifically to provide MDR, not to sell as a technology stack, so our DNA has been MDR service delivery since day one.  We are only 185 people, yet we have thousands of customers and hundreds of partners, and we only recently closed a Series A round to accelerate growth.  But fundamentally, we are passionate cybersecurity practitioners who have been doing this for decades.

So when I saw this on the CrowdStrike webpage this week, I can say, as founder and CEO, that it brought significant joy to my relatively wooden heart:


Screen capture from June 21, 2024.

To say that I am proud of our team here at Field Effect would be an understatement.  I could blather on for pages about how much respect and appreciation I have for our team’s hard work, commitment to protecting our customers, and consistent professionalism.

Tip of the hat to participants

Firstly, I’d like to congratulate all the participants of the MDR evaluation.  If a cybersecurity company chooses to expose itself to such an in-depth third-party evaluation and be a competitor in the arena, they should be commended.  We’ve all won MITRE. 😉

Big congratulations to CrowdStrike for cranking out the lowest Mean Time to Detect (MTTD) value, highest percentage of “Actionable Reporting”, and highest number of “Reported Events”.  This result shouldn’t be a surprise to anybody given that CrowdStrike builds a great endpoint agent, has over 8000 employees, and has been in market for a decade.  I actually find this outcome very motivating and look forward to taking the top spot in at least one category in Round 3 with under 200 employees and a shit-ton of grit.  Here you go. 🍿

Special shout out to the other company or two that participated in this round and are not publicly traded or multibillion-dollar behemoths - they likely had an experience similar to ours.  We should grab a beer sometime and trade stories.

Why did Field Effect participate?

Any company participating in the MITRE evaluation has both upside and downside.  The process extends well beyond the one-week evaluation period and has no shortage of risk and commitment.  Lions, tigers and bears – oh my.

For a large multibillion-dollar company with thousands of employees, the upside is frankly validation: they justify their market cap and technical chops to existing customers, especially if they’re focused on the enterprise.  The downside, however, is significant embarrassment if they don’t perform well and need to rely on the marketing team to manufacture a compelling message to save face.  We’re seeing this happen in real time.

For much smaller companies like Field Effect, the upside is a phase of legitimization in an increasingly crowded market polluted with hyperbolic marketing messaging.  However, the downside is that poor performance can relegate a small company trying to grow to the back of the pack until the next round of testing.

Given Field Effect’s size and relative newness in the market, at least in perception, we often hear the following:

  • “Your product can’t be as good as it sounds if you are such a small company.”
  • “I’ve never heard of you, so I can’t really take the risk on an unknown name.”
  • “If you had participated in a MITRE evaluation, I’d consider you more legitimate.”

We participated because we needed to show the market that not only are we punching well above our weight with our technology and services team, but that we can outperform multibillion-dollar companies.  Our time in market does not reflect our quality or ability to deliver phenomenal MDR to SMEs.  Our mission as a company was to quietly build (for many years) until a proper MDR solution for SMEs was ready.  This is a testament to our commitment and dedication to protecting our customers and partners – providing the best possible cybersecurity experience for them is what matters (which includes not getting hacked!).

Additionally, it is worth noting that when we decided to participate in the evaluation, we also decided that no matter the outcome, we would conduct ourselves with the following approaches and philosophies during and after the evaluation:

  • We would treat the MITRE evaluation like we would a customer experience – i.e. we didn’t throw a team of 100 people at the evaluation just to game a great score.  Rather, we did the evaluation in line with the rest of our operations;
  • We would use our standard operational configuration to give a true reflection of what a customer could expect with Field Effect (minus being allowed to block anything); and
  • We would honor the results of the evaluation and the metrics that MITRE released (i.e. we wouldn’t twist MITRE’s data or our internal metrics to suit our marketing message, none of that “we won MITRE” crap that has been observed over the years).  Two of Field Effect’s marketing tenets are honesty and clarity, and we refuse to deviate from this commitment to the market, our customers, and our partners.

What was our MITRE experience like?

First and foremost, it was a learning experience.  We had never done a MITRE Engenuity Evaluation before, and we didn’t know what to expect ahead of time, which of course was a bit hard on people’s nerves in the days leading up to it (including mine!).  Unlike the Enterprise Evaluation, the Managed Services Evaluation is closed book, meaning the participants had no idea what attack they’d be facing.  But we were confident because we do this every minute of every day and see highly effective results.

Operationally, we block and contain attacks early.  That’s our jam.  We went into MITRE knowing that we could not block any aspect of the simulation, which was definitely an uncharacteristic approach for us.  Monitoring and reporting on an attack, but not being able to do anything about it, certainly felt…odd.  But it provided a learning experience as to how we can improve our reporting to the “right of boom”, rather than focusing on preventing or blocking “boom”.

We came out of the experience with a long list of improvements to our MDR service, several of which we have already put into place.  So the end result is that participating in MITRE has further strengthened our product and service.  Internally, we’re already observing that if we did the same evaluation today (four months later), the results would be even better.  We’d definitely be raining on the parade of some other vendor’s marketing messages. 😉

I’d like to highlight MITRE’s commitment to developing an evaluation process that produces useful and accurate measurement and reporting to help the industry at large.  I have read a lot of skepticism on social media about MITRE, but we can say that MITRE’s motivations align with what the cybersecurity industry wants to see.  I would not hold against MITRE the gamesmanship that vendors have historically employed during the evaluation, scoring, or post-evaluation marketing – they can’t control that.

In summary, our experience was positive and it’s made Field Effect a better company.  Huge thank you to the MITRE team.

How did Field Effect do?

Three months before the evaluation, our Senior Vice President of Service Delivery, Patrick Smith, basically called our shot as to how we’d do: we’d have very rapid detection; we’d miss some sub-steps because we intentionally don’t focus on detecting certain things in the MITRE ATT&CK Framework for key operational reasons; and we’d provide effective reporting via our proprietary alerting and reporting, called AROs, which wouldn’t line up directly with MITRE matrix scoring.  He was bang on.

I won’t go over the entire set of results and what they say about our offering; you can find those here.  These results are definitely worth a read if you have a few spare minutes.  It’s our breakdown of our actual MITRE results data (not spun or remixed) and how it serves as direct evidence that the key characteristics of our MDR service meet the needs of SMEs.

There are a few things I’d like to highlight in the three main metrics of the evaluation.  Unlike the standard formula for endpoint agent companies, Field Effect’s technology stack has been purpose-built to provide very rapid and early detection (which normally results in immediate block and containment) with high-fidelity reporting that helps SMEs learn and evolve their cybersecurity posture.  This approach is accurately reflected in our MITRE results.  And we’re proud of that.

Firstly, rapid and early detection.  I think this picture sums things up perfectly:

[Figure: Stage of detection]

Quick-fire stats from this:

  • We detected and reported each measured step in the MITRE ATT&CK Framework, which is what many other participants refer to as “100% detection”.
  • We detected and reported in the first or second sub-step, regardless of whether MITRE was scoring it or not.  This is key.
  • We were one of only three participants who provided actionable reporting in all measured steps (the other two were CrowdStrike and Bitdefender).
  • Some of our initial detections were as quick as one or two minutes, and in each of the steps, our standard operational EDR configuration for our customers would have blocked the attack.  Gravy.

Second, let’s talk about our detection coverage.  For us, this was probably the biggest surprise from a scoring perspective (i.e. based on our evaluation process, we thought we would score higher).  For what it’s worth, a lower-than-expected coverage outcome is common with first-time participants.  Our coverage of 61% stems from a few things:

  • We intentionally do not monitor for certain aspects measured by MITRE during the evaluation, because they don’t make operational sense and aren’t good for the customer.  For example, we do not inject into third-party processes to monitor API usage because it is fundamentally a terrible practice and introduces stability issues for those processes.  In fact, we have seen our endpoint agent process crash due to bugs in some multibillion-dollar competitor’s DLLs that they inject into our process.  Plus, the output is next to useless for SMEs and generally easy for an attacker to subvert.  In conclusion, the cost/benefit does not favor an effective MDR outcome for customers, so we do not do it.
  • We did extract malware from memory and identified it through reverse engineering, but we wouldn't report on the Windows APIs used as it's not relevant information for our audience.  Our ARO reporting always includes lots of great technical details, but the primary driver is to guide client action.  What happened, but more importantly, what should you do about it?
  • There are several examples of activities we detected where our reporting didn’t meet the very specific detection criteria used by MITRE.  In one case, we reported that domain credentials had been compromised by the threat actor using a particular technique, guiding the client to reset credentials and block the domain/IP/port used for C2 by the malware.  Our reporting used the word "compromised" instead of "exfiltrated", and didn’t specifically mention "ntds.dit", so this was scored as not reported.  Our initial reaction was very much, “WTF?!”, but after taking a deep breath and walking around the block, we accepted that MITRE had their scoring criteria and that was that.  So be it; as I said above, it’s a learning experience.

Thirdly, the fidelity of our reporting.  Operationally, issuing a false positive is an MDR service sin for Field Effect.  We were scored as issuing 100 alerts, which was actually 50, as MITRE double-counted each one: once for the email and once for the associated dashboard alert.  Either way, if you take the time to review the MITRE data, you will observe that our patent-pending approach to reporting is informative, succinct, and noise-free.  Ideal for an MSP or SME.

Finally, I think it’s worth highlighting one very important aspect of our MDR offering: threat surface reduction and risk reduction.  This continuous improvement has always been important to us, and because it’s a priority, we do it extremely well.  So well, in fact, that upon initial deployment of our endpoint agents in the MITRE simulation environment, we detected the vulnerabilities and risks and immediately issued reports to that effect.  Ultimately, that reporting actually predicted the attack vector of the entire simulation, and no aspect of this value-add was part of the scoring.


In summary, participating in the MITRE Engenuity Managed Services Evaluation was a great experience, which has helped us to evolve our product.  One more “thank you” to MITRE for being great to work with, and also a huge thank you to the team at Field Effect for all of their hard work.  It is mind-blowing what we are accomplishing with fewer than 200 people in the company.