AirMDR
The Lean Security Team Index · 2026

Most security benchmarks aren't built for you.

Gartner reports on Fortune 500 SOCs. SANS data assumes 24/7 staffing. Vendor whitepapers profile teams 10× the size of yours. If you're running security with 1–10 people inside an IT org under 2,000 employees, you've never had a credible peer benchmark — until now.

Take the 2-min assessment → See key findings
N=141 lean security leaders · Fielded: April 2026 · Run by: Gather for AirMDR
Peer Median: 8.25 / 12
73% of lean teams sit between 7 and 9
12% Leading (10–12) · 73% Developing (7–9) · 14% Lagging (4–6) · 1% Struggling (0–3)
Why this exists

The lean team gap in industry research

When the only available benchmarks assume you have a 24/7 SOC, a Director of Security, and a $5M budget, defending headcount and tooling decisions becomes a fight against irrelevant comparisons. The Lean Security Team Index measures what's actually true for security leaders running 1–10 person programs at 100–2,000-employee companies — and gives you a number to point at.

"How does our triage time compare?" Until now: a Mandiant whitepaper for Fortune 500s.
"What % of alerts is normal to investigate?" Until now: a SANS report assuming a 12-person SOC.
"Are we under-staffed, or is everyone?" Until now: anecdote from a peer Slack.
"Should we audit our MDR's investigation quality, or is the SLA report enough?" Until now: vibes.
The Self-Assessment

How does your maturity compare to your peers?

Eight questions. Two minutes. Get your Lean Security Team Index score (0–12), your maturity Level (1–4), and a side-by-side comparison against the 141 lean security leaders in this benchmark.

Get Started

First, a quick read on your context.

This calibrates your peer comparison so you're being scored against teams your size — not against the whole sample.


Your score is ready.

Drop your work email below to see your Lean Security Team Index score, your Level, your dimension breakdown, and how you compare to peers your size. We'll also email you the 30-page benchmark report.


By submitting, you agree to receive your benchmark results and occasional research updates from AirMDR. We don't share emails. Privacy policy.

Level 02 · Developing
"The Capable Strainer"
Doing the work well. Running out of week to do it in.
Your LSTI Score: 8.25 out of 12
Peer Median (your size): 8.25 · based on N=70 in your size band
Headline Findings

Three things the lean security data tells us — that no enterprise benchmark would catch

88%
are under-resourced — and trying to hire is no longer the answer.

Only 7% feel sized appropriately. The median ask is for 2–3 more FTEs that, in this hiring market, aren't coming. The capacity bottleneck pulls down every other dimension.

65%
would already trust AI to close low-severity alerts under human oversight.

Trust is no longer the blocker. Capacity is. The lean-team market has moved past "should we let AI do this" and arrived at "we can't hire fast enough not to."

50%
audit their MDR's investigation quality — not just the SLA report.

The Leading tier is loud about this: SLAs are a vanity metric; the investigation quality is the actual product. Half the benchmark agrees.

The Full Benchmark

Get the 2026 Lean Security Team Index report.

The full benchmark — 141 respondents, four maturity tiers, ten charts, and the complete profile of every level — delivered to your inbox.

Read the Full Report →
AirMDR

The Lean Security Team Index · 2026 · Research conducted by Gather on behalf of AirMDR.

© 2026 AirMDR. All rights reserved.

AirMDR
The Lean Security Team Index · 2026

A benchmark for the teams nobody benchmarks

141 IT and security leaders at companies between 100 and 2,000 employees on what's actually working in their security operations — and where the lean-team reality diverges from the enterprise-SOC playbook everyone else writes about.

N=141 cleaned responses · Roles: IT Director · CTO · CISO · Fielded: April 2026
The Findings, in Brief

Lean security teams (1–10 people, IT-led) score well on rigor — half tested an IR plan in the last 90 days, half audit their MDR's investigation quality directly — and respectably on speed: 77% triage new alerts inside an hour, 50% have an MDR or MSSP that would respond at 2 a.m. on a Saturday.

But the picture changes when you look at capacity. 88% say they're under-staffed for the work in front of them, with the median ask being 2–3 more FTEs. The same teams ranked Leading on rigor and speed are still drowning in alert volume — and the AI trust to do something about it is already there: 65% would let an AI close low-severity alerts under human oversight. The gap between what these teams need and what they can hire is the most important number in this report.

12%
are Leading. The other 88% sit somewhere between Developing and actively struggling against alert volume and headcount.
88%
are under-resourced. Only 7% feel sized appropriately. The median lean-team security leader is asking for 2–3 more FTEs they can't hire.
65%
would trust AI to close low-severity alerts with human oversight. Trust isn't the bottleneck. Capacity is.
The Benchmark

A maturity model for lean security teams

We scored each respondent on four dimensions (12 points total) and grouped them into four maturity tiers. The scoring is built directly from the survey: documentation quality and percentage of alerts investigated drive Investigation Quality; triage time and after-hours coverage drive Response Speed; IR-test recency and how the team evaluates its MDR drive Program Rigor; FTE gap and after-hours alert burden drive Capacity Posture.

Tier 01 · Leading
12%
Score 10–12 of 12. Regulator-ready documentation, <15 min triage, audit their MDR's investigations, and feel close to right-sized.
Tier 02 · Developing
73%
Score 7–9. Solid program rigor and speed, but stuck in the messy middle on documentation and badly under-resourced on capacity.
Tier 03 · Lagging
14%
Score 4–6. Multiple weaknesses across dimensions — thin documentation, slow triage, untested IR plans, growing alert backlog.
Tier 04 · Struggling
1%
Score 0–3. Severe deficiencies — minimal documentation, no after-hours coverage, no formal MDR evaluation, and no IR test on record.

What each tier looks like in real life

Built from the actual modal answers given by respondents in each tier — not invented archetypes. If two or more of these descriptions feel uncomfortably specific, that's the bucket.

Level 01 Leading
12%

"The Quiet Operator"

Runs a tight ship. Has actual headroom. Probably your benchmark.

Typical role: IT Director · 3–5 person team
Documentation: Regulator-ready write-ups (65%)
Triage time: <15 minutes (76%)
Weekend response: MDR or on-call (95%)
After-hours volume: Monthly or less (53%)
FTE gap: 35% sized right · 41% want 2–3 more
What they do well

Investigation quality and program rigor are real, not performative. 82% meaningfully investigate over 80% of alerts. 88% tested IR in the last quarter. Three in four audit their MDR's investigation quality directly — they don't take "no incident, good job" for an answer.

Where it could break

Single-vendor concentration. 71% lean on one MDR for after-hours coverage. The program is good because the partner is good — change the partner and the score moves.

Next move

Add a redundant detection layer (AI Analyst or second opinion) so weekend coverage isn't a single point of failure. Stay loud about quality measurement — this tier's discipline is what the rest of the market is trying to copy.

Level 02 Developing
73%

"The Capable Strainer"

Doing the work well. Running out of week to do it in.

Typical role: IT Director · 6–10 person team
Documentation: Mix of full write-ups and ticket notes
Triage time: 15–60 minutes (57%)
Weekend response: MDR (51%) or someone gets paged (39%)
After-hours volume: A few times per month (38%)
FTE gap: 57% want 2–3 more · 26% want 4–5
What they do well

Half audit their MDR on investigation quality. 84% have tested IR within six months. Tooling is integrated, the SOAR debate is settled, and most of the team can speak to detection logic in plain English.

Where it could break

The middle is where capacity quietly compounds. Two-thirds want 2–5 more headcount they can't get approved. As alert volume grows, the first thing to slip is documentation — which is why a quarter of this tier is already at "solid notes" or worse on the docs question.

Next move

Stop trying to hire your way out. The 88% asking for more FTEs in this benchmark are not getting them. Move the L1 triage and routine investigation work to AI so the existing team can take the L2 cases all the way through documentation. This is where most of the productivity is hiding.

Level 03 Lagging
14%

"The Visible Backlog"

Multiple things are wobbling. Everyone on the team knows which ones.

Typical role: IT Director · 3–5 person team
Documentation: "Closed" / "resolved" or status field (58%)
Triage time: A few hours to next biz day (53%)
Weekend response: On-call (37%) or personal pager (21%)
After-hours volume: Several times per week (37%)
FTE gap: 37% want 4–5 more · 16% don't formally evaluate MDR
What's still working

The team is technical and engaged — almost a third tested IR in the last 90 days, and the people in this tier still personally answer 2 a.m. pages. The will is there.

Where it's breaking

The combination is the problem. Thin documentation (one in three at "just closed"), slow triage, several after-hours hits per week, and over a third asking for 4–5 more FTEs. Investigation depth has dropped to where 26% can't say more than "we look at maybe a quarter of alerts." The rigor is there in spirit but not in evidence — and a small fraction admit they don't formally evaluate their MDR at all.

Next move

Stop the bleeding before scaling anything. Get an AI layer on L1 triage so after-hours alerts get a documented response without a human pager. Re-instate quarterly IR drills. Begin auditing investigation quality directly — the SLA report is not a substitute.

Level 04 Struggling
1%

"The Silent Risk"

Severe gaps across multiple dimensions. Often invisible until something happens.

Typical role: IT Director or CIO · 3–5 person team
Documentation: Status field, "closed," or none
Triage time: A few hours or longer
Weekend response: EDR vendor or unclear
After-hours volume: Several times per week
FTE gap: 4–5 more, no formal MDR evaluation
A small-sample tier

Only 2 of 141 respondents scored in this band. Real but rare. The shape is consistent: heavy after-hours volume, no documentation discipline, no formal evaluation of whoever is supposed to be helping, and a 4–5 person FTE gap that's never going to close on the current trajectory.

What "Struggling" actually means

It's not that the team is incompetent — it's that the volume of work has so badly outpaced the capacity that the program has degraded into pure ticket-closing. Documentation, IR testing, and MDR oversight are casualties of triage fatigue.

Next move

The investment case writes itself: every other tier looks like a possible future. Start with AI triage on the highest-volume alert sources to give the team back the hours they need to rebuild documentation, evaluation, and IR practice — in that order.

Profiles built from modal answers within each tier (n=17 Leading · 103 Developing · 19 Lagging · 2 Struggling).

Where the average team scores across each dimension

Of the four dimensions, capacity is the weakest — and it's the one that compounds. Speed and rigor are good but get stretched thin when the same 3–10 people are absorbing every alert. Investigation quality starts to slip when documentation is the first thing that gets cut.

Investigation Quality · 66
How well alerts are documented, and how many actually get a meaningful look

Response Speed · 73
Time-to-triage on a new alert and what happens at 2 a.m. on a Saturday

Program Rigor · 76
IR-test recency and whether the team evaluates its MDR on quality, not optics

Capacity Posture · 57
Whether the team is sized for its workload and after-hours alert burden

How to read this benchmark

If your scores are at or above the dimension averages on this page, you're inside the 73% Developing band — neither leading nor in trouble. Most lean-team security programs land here. The way out of the messy middle isn't more rigor (rigor is fine); it's solving the capacity bottleneck without growing headcount.

Finding 01

Lean security teams are more rigorous than the stereotype suggests

The cliché is that 3–10 person security teams cut corners on process. The data does not support that. Lean teams test, they document, they audit their MDR — at rates a Fortune 500 SOC would not be embarrassed by. The discipline is real. What lean teams lack isn't rigor — it's hours.

Q: When was the last time you actually tested your incident response plan — a tabletop, a drill, anything that wasn't a real incident?
Half tested IR within the last 90 days
In the last 3 months
49.6%
3–6 months ago
33.3%
In the last year
12.8%
Over a year ago
2.8%
Never / Other
1.4%
Q: When you evaluate whether your MDR is actually doing a good job, which best describes your approach?
Half audit the work, not just the SLA
We audit investigation quality directly
50.4%
We watch what they escalate
21.3%
We track SLA reports they send
21.3%
"No incident = good job"
3.5%
We don't formally evaluate
3.5%
Q: When your team investigates an alert and closes it out, which best describes the documentation you actually end up with?
Documentation quality is bimodal — you're either writing a regulator-ready packet or barely flagging "closed"
Full write-up: regulator-ready
44.0%
Solid notes in a ticket
29.1%
Status field + minimal comment
10.6%
Just "closed" or "resolved"
9.2%
Don't really document at all
2.8%
Other
4.3%

What the rigor numbers tell you

The nearly three-quarters of teams writing solid notes or full investigation packets are signaling that — when the time exists — they take the work seriously. The 22% sitting at "status field" or just "closed" aren't lazy operators; they're triaging because they're outnumbered by alerts. Documentation is the first thing to fall when capacity tightens, which is why it's a lagging indicator of the capacity problem in Finding 03.


"We audit the quality of their investigations — sample cases and review the work. SLAs are a vanity metric. The investigation quality is the actual product."

— IT Director · Tech / SaaS · 250–999 employees

"Full write-up with plan, evidence, rationale, and actions taken. We could hand it to a regulator."

— CTO · Tech / SaaS · 1,000–1,999 employees
Finding 02

Coverage looks fine on paper. The fragility is in the wiring.

77% of teams triage a new alert inside an hour and 99% have some after-hours response mechanism. But scratch the surface and most of that coverage is leaning on either a single MDR vendor or one human's pager — neither of which scales.

Q: When a new alert fires, how long does it typically take before someone has actually triaged it?
Triage time
15–60 minutes
46.8%
Under 15 minutes
30.5%
A few hours
14.2%
Same day
5.0%
Next business day or longer
3.5%
Q: If something bad kicks off at 2 a.m. on a Saturday, what actually happens?
After-hours response
MDR / MSSP would respond
49.6%
Team on-call rotation
22.7%
I personally get paged
17.7%
EDR vendor pages us
8.5%
Nothing — waits til Monday
1.4%
Q: Of the alerts that come in, what portion actually get meaningfully investigated — meaning a human looks at context, not just clicks acknowledge?
22% of teams meaningfully investigate fewer than half their alerts
Over 80%
27.7%
50–80%
50.4%
25–50%
14.2%
10–25%
2.1%
Under 10% / Other
5.7%

The single-vendor concentration risk

Half of lean-team after-hours coverage rests on a single external MDR vendor. Another 18% rests on one human getting paged. If either fails — vendor outage, vendor missing the alert, person on vacation — there is no second layer. The absence of redundancy is the real gap, not the speed of the first response.

Finding 03

The capacity gap is the story under the story

Speed and rigor look fine on paper. Capacity does not. 88% of lean-team security leaders say they need more FTEs than they have, and the median ask — 2–3 more — is a 30–60% headcount increase on a team that already can't get budget approved. This is the bottleneck that pulls down every other dimension.

88%
of lean-team security leaders say they're under-resourced. Only 7.1% feel sized appropriately. The most common ask — among IT Directors leading 3–10 person teams — is for 2–3 additional FTEs, an addition most of these companies have not been able to fund.
Q: If you could wave a magic wand and add the right number of FTEs to actually cover everything on your plate, how many more would that be?
FTE gap
2–3 more
50.4%
4–5 more
27.7%
More than 5 more
6.4%
1 more
3.5%
Zero — sized appropriately
7.1%
Other
5.0%
Q: In a typical month, how often does an alert fire outside business hours that requires someone to act on it?
After-hours alert burden
Rarely (monthly or less)
34.0%
A few times per month
33.3%
Several times per week
19.1%
Almost never / don't know
13.5%
Q: When you think about having enterprise-grade security capabilities — 24/7 SOC, proper investigation, full coverage — how does that feel for a team your size?
Enterprise-grade feels out of reach for 1 in 5
Realistic, already investing
44.7%
"Depends what enterprise means"
19.9%
Realistic, but under-invested
12.1%
A stretch — financially painful
11.3%
Genuinely out of reach
9.9%
Q: How would you feel about AI making security decisions for your environment? If an AI tool said "this alert is benign, I've closed it out," which best describes your reaction?
65% would already trust AI to close low-severity alerts
Trust w/ human oversight on escalations
44.7%
Want to review everything before close
26.2%
Would fully trust AI to make the call
19.9%
Not enough exposure to AI yet
5.0%
Don't trust AI for this
4.3%

Capacity, not trust, is the bottleneck

The story buried in this finding is that AI trust is no longer the blocker — staffing is. Combined, 65% of lean-team leaders are willing to let AI close low-severity alerts (with or without oversight). Another 26% want to review everything but aren't categorically opposed. Only 4% reject AI involvement outright. Meanwhile 88% are short on humans. The market has moved past the "do we trust AI to do this" debate and arrived at "we can't hire fast enough not to."


"Realistic and we're investing there already — but every dollar I spend on enterprise-grade is a dollar I don't have for two analysts I actually need."

— IT Director · Tech / SaaS · 1,000–1,999 employees

"I'd trust AI with human oversight on escalations. Honestly, the alternative is me getting paged at 2 a.m. — that's already not working."

— CTO · Manufacturing · 250–999 employees
Finding 04

The tooling story is more settled than the headlines suggest

Lean-team stacks are not the sprawling 25-tool monsters of enterprise lore. Most run a tight 5–10 tool stack with most of it integrated — and the SOAR debate has effectively concluded. AI-native automation is gaining ground without much fanfare.

Q: Roughly how many distinct security tools are you running today?
Tools deployed
5–10 tools
71.6%
Under 5
12.1%
15–25
12.1%
Over 25
2.8%
10–15 / Other
1.4%
Q: What portion of those tools are actually integrated — sharing data, correlating alerts, feeding a central view?
Integration is high
Most connected
69.5%
About half are integrated
27.7%
A few connected, most aren't
1.4%
Everything is a silo
1.4%
Q: Which best describes your experience with SOAR, automation platforms, or custom playbook engineering?
SOAR landscape
Using SOAR successfully
41.1%
Using it but constant care/tuning
27.7%
Using AI-native tools
18.4%
Never tried — too complex
5.7%
Tried and abandoned
4.3%

The hidden majority

Thirty-two percent of lean security teams are running automation that "needs constant care" or have tried and abandoned traditional SOAR entirely. Add the 18% already using AI-native tools and you get a clear directional read: the next purchase decision in this segment is not whether to automate, but which automation layer scales without an automation engineer attached.

From the Sponsor

The capacity gap doesn't close with another hire. It closes with a different SOC architecture.

AirMDR is AI-native MDR built for the exact teams in this benchmark — IT-led security organizations of 3–10 people who can't get the next two FTE headcount approved and can't keep absorbing alert volume manually. Our AI Analyst triages, investigates, and documents every alert at L2-level quality, with a median time-to-verdict measured in minutes — supervised by a human SOC team that holds the AI accountable. 95% of cases investigated in under five minutes, full audit trail, no SOAR engineering required.

See How AirMDR Works →
Methodology

How this benchmark was built

Conducted in April 2026 by Gather on behalf of AirMDR. 794 conversational interviews were initiated; 157 reached completion. After fraud screening — attention check failures, position-token answers, off-topic responses — 141 valid responses formed the analysis base. The conversational format makes single-source data unusually rich: each response is paired with the open-ended context that led to it.

141
Cleaned Responses
10.2%
Fraud Rate
11.7 min
Median Duration
Apr 2026
Field Period

Benchmark scoring

Each respondent was scored on four dimensions, each worth up to three points (12 total). Investigation Quality blends documentation depth and percentage of alerts meaningfully investigated. Response Speed blends triage time and after-hours coverage mechanism. Program Rigor blends IR-test recency and how the team evaluates its MDR. Capacity Posture blends FTE gap and after-hours alert burden — penalizing teams short on people or absorbing weekly after-hours hits. Tier thresholds: 10–12 Leading · 7–9 Developing · 4–6 Lagging · 0–3 Struggling.
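For readers who want to sanity-check their own score, the tiering logic above can be sketched in a few lines of Python. This is an illustrative reconstruction, not AirMDR's actual scoring code: the function name, signature, and the assumption that each dimension arrives as a precomputed 0–3 subscore are ours; only the dimension names and the published tier thresholds come from the methodology.

```python
def lsti_tier(investigation_quality: int, response_speed: int,
              program_rigor: int, capacity_posture: int) -> tuple[int, str]:
    """Sum four 0-3 dimension subscores and map the 0-12 total to a tier.

    Tier thresholds per the methodology:
    10-12 Leading, 7-9 Developing, 4-6 Lagging, 0-3 Struggling.
    """
    subscores = (investigation_quality, response_speed,
                 program_rigor, capacity_posture)
    for s in subscores:
        if not 0 <= s <= 3:
            raise ValueError("each dimension subscore must be between 0 and 3")
    total = sum(subscores)
    if total >= 10:
        return total, "Leading"
    if total >= 7:
        return total, "Developing"
    if total >= 4:
        return total, "Lagging"
    return total, "Struggling"

# A team with strong rigor and speed but a capacity gap lands mid-pack:
print(lsti_tier(2, 3, 3, 1))  # → (9, 'Developing')
```

Note how the sketch mirrors the report's central finding: a single weak Capacity Posture subscore is enough to keep an otherwise rigorous team out of the Leading band.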

Respondent roles
IT Director / IT Manager
70.9%
CIO / CTO / VP Engineering
23.4%
CISO / Head of Security
2.8%
Other
2.8%
Company size
250–999
49.6%
1,000–1,999
29.1%
100–249
17.0%
2,000+ / Other
4.3%
Industry mix
Technology / SaaS
80.1%
Financial Services / Insurance
9.2%
Manufacturing / Industrial
7.8%
Other
2.8%
Security team size
6–10 people
48.2%
3–5 people
45.4%
2 people
4.3%
Dedicated SOC / Other
2.1%

All percentages represent unique respondents (N=141). Multi-select questions in the appendix may sum above 100%. Conversational responses were normalized into option buckets where the respondent typed a near-match rather than selecting verbatim. Internal demographic strata were not weighted; the population skews toward Tech / SaaS by recruitment.

