SecurityBrief UK - Technology news for CISOs & cybersecurity decision-makers

Intezer expands AI SOC to probe every security alert

Fri, 20th Mar 2026

Intezer has expanded its AI SOC platform for security teams that have outgrown managed detection and response services. The update is intended to help internal security operations centres investigate every alert.

The expansion centres on three additions: AI-driven detection engineering, on-demand access to security experts, and continuous feedback to tune investigation models. Together, these changes aim to reduce reliance on outsourced MDR providers by shifting more investigative work into automated workflows, with human review when needed.

Security operations teams have long used MDR services to manage alert volumes that exceed in-house staffing capacity. But Intezer argues that MDR providers face the same challenge as internal teams: too many alerts for analysts to review consistently, especially lower-severity alerts that are often pushed down the queue.

According to Intezer, analysis across enterprise SOC environments found that about 60% of alerts go unreviewed because teams cannot investigate every signal. It also said nearly 1% of genuine threats stem from low-severity alerts, which it equated to an average of 54 true threat alerts a year for a large enterprise.
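The back-of-envelope arithmetic behind those figures can be sketched as follows. The 60% unreviewed share and the 54 true low-severity threats a year are the figures Intezer cited; the total alert volume used here is a purely hypothetical input.

```python
# Illustrative arithmetic only: the 60% and 54 figures come from Intezer's
# cited analysis; the 1,000,000-alert volume is a hypothetical example.

def unreviewed_alerts(total_alerts: int, unreviewed_share: float = 0.60) -> int:
    """Alerts that never receive analyst review at the cited 60% rate."""
    return round(total_alerts * unreviewed_share)

def missed_threats_per_week(true_threats_per_year: int = 54) -> float:
    """The cited 54 true low-severity threats a year, expressed per week."""
    return true_threats_per_year / 52

# A hypothetical enterprise generating one million alerts a year:
print(unreviewed_alerts(1_000_000))          # 600000 alerts unreviewed
print(round(missed_threats_per_week(), 1))   # roughly 1.0 per week
```

On those assumptions, a genuine threat would surface in the unreviewed low-severity queue about once a week.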

That claim is central to the company's pitch. Rather than relying on analysts to work through vast queues of notifications, Intezer says its platform can autonomously triage and conduct full forensic investigations across all alerts, leaving human teams to supervise outcomes and respond to incidents that require judgement.

Detection rules

One addition is a closed-loop approach to detection engineering. Investigation outcomes are fed back into SIEM and EDR rule tuning, with the aim of updating detections using real verdicts, threat intelligence, and observed attacker behaviour.

This is meant to address a common gap in outsourced security operations, where the people reviewing alerts are often separate from those maintaining detection rules. In that model, lessons from investigations may not be reflected in the settings that determine what gets flagged next.

Intezer says its platform can identify noisy or broken detections and close gaps over time by adjusting or creating rules based on what investigations uncover. That would bring detection engineering closer to day-to-day alert handling rather than treating it as a separate function.
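Intezer has not published implementation details, but the closed-loop idea can be pictured in simplified form as a precision check over investigation verdicts. The names, data shapes, and thresholds below are illustrative assumptions, not the company's design.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Verdict:
    rule_id: str          # detection rule that fired the alert
    true_positive: bool   # outcome of the automated investigation

def flag_noisy_rules(verdicts: list[Verdict], min_fires: int = 20,
                     min_precision: float = 0.05) -> set[str]:
    """Return rule IDs whose alerts almost never turn out to be real threats.

    A real platform would feed these back into SIEM/EDR rule tuning; this
    sketch only identifies candidates for suppression or rewriting.
    """
    fires = Counter(v.rule_id for v in verdicts)
    hits = Counter(v.rule_id for v in verdicts if v.true_positive)
    return {
        rule for rule, n in fires.items()
        if n >= min_fires and hits[rule] / n < min_precision
    }

# Example: a rule that fired 30 times with no confirmed threats is flagged.
verdicts = ([Verdict("noisy_rule", False)] * 30
            + [Verdict("good_rule", True)] * 25)
print(flag_noisy_rules(verdicts))  # {'noisy_rule'}
```

The same verdict stream could, in principle, drive the opposite adjustment: rules whose hits cluster around behaviour no existing detection covers would suggest new rules to create.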

Expert access

A second addition gives customers direct access to Intezer researchers and analysts for complex cases. Users can request expert analysis of artefacts, alerts, and logs, and get support during active incidents or when suspicious activity needs validation.

That hybrid structure reflects a broader trend in cybersecurity tools, where suppliers increasingly pair automation with specialist escalation paths rather than claiming software alone can resolve every incident. For larger organisations, the challenge is often not just detection, but deciding when a signal merits deeper review by an experienced analyst.

Feedback loop

The third addition focuses on model tuning. Human review of edge cases, along with customer feedback, is used to improve investigation accuracy and align results with an organisation's environment and risk profile.

This matters because security teams often operate in highly varied environments, with different systems, data flows, and risk tolerances. A model that performs well in one enterprise may need adjustment in another to avoid false positives or missed activity.
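As with detection tuning, Intezer has not described how this feedback is applied internally. One simplified way to picture per-environment tuning is adjusting an escalation threshold against reviewed outcomes; the function and parameters below are a hypothetical sketch, not the platform's mechanism.

```python
def tune_escalation_threshold(scored: list[tuple[float, bool]],
                              max_benign_escalated: float = 0.10) -> float:
    """Pick a risk-score cutoff for escalating alerts to human review.

    `scored` pairs a model risk score in [0, 1] with the reviewed outcome
    (True = confirmed threat). The cutoff is chosen so that at most
    `max_benign_escalated` of this environment's benign alerts would still
    be escalated. Approximate: score ties and a zero allowance are not handled.
    """
    benign = sorted(score for score, is_threat in scored if not is_threat)
    n = len(benign)
    if n == 0:
        return 0.0  # no benign history yet; escalate everything
    allowed = int(n * max_benign_escalated)   # benign escalations tolerated
    idx = min(max(n - allowed, 0), n - 1)
    return benign[idx]                        # escalate scores >= this cutoff

# Feedback from one environment: ten benign alerts, one confirmed threat.
history = [(i / 10, False) for i in range(10)] + [(0.95, True)]
print(tune_escalation_threshold(history))  # 0.9
```

Run against another environment's feedback, the same routine would land on a different cutoff, which is the point of tuning per deployment rather than shipping one global setting.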

Intezer's announcement comes as cybersecurity buyers continue to weigh the role of AI in security operations. Vendors increasingly frame automation as a way to address persistent staffing shortages and the growing volume of alert data from endpoint, cloud, and network tools. At the same time, buyers remain wary of overstating what automation can achieve without oversight.

In arguing for wider automation in enterprise SOCs, Intezer cited analysis of more than 25 million alerts. It said that volume shows why low- and medium-severity alerts can become a blind spot for both internal teams and MDR services.

One industry executive quoted by the company pointed to the scale of the problem for large organisations.

"Many organizations handle millions of security events per year. There's no possible way you can go through 100% of your alerts, and resolve them completely, unless you rely on an AI platform," said Cecil Pineda, a four-time CISO and security leader in the healthcare industry.

Intezer's chief executive also argued that the current operating model is no longer sufficient for large-scale security operations.

"Security operations have reached a structural limit. Human teams, whether internal or outsourced to MDR providers, cannot realistically investigate the volume of alerts enterprises now face. Our analysis of more than 25 million alerts makes the risk clear: real threats are often buried in the low-severity signals that never get investigated," said Itai Tevet, CEO and co-founder of Intezer.

"AI SOC changes the model by making full forensic investigation possible across every alert, continuously improving detection based on real outcomes, and allowing human experts to focus on the incidents that truly require judgment and response," Tevet added.