SecurityBrief UK - Technology news for CISOs & cybersecurity decision-makers

Energy boards warned of AI risks, gaps in oversight

Thu, 26th Feb 2026

Software Improvement Group (SIG) has published research linking the energy sector's growing use of artificial intelligence to gaps in software quality, security and governance that boards may struggle to oversee.

The report, The AI Boardroom Gap in Energy 2026, examines how energy organisations are introducing AI into operational and IT environments. It argues that AI ambitions are outpacing day-to-day readiness in engineering practices and risk controls.

Energy companies are using AI for grid optimisation, predictive maintenance, renewable forecasting and control-room operations. These deployments make software a core operational dependency, with direct implications for resilience and security.

AI still represents a small share of production environments. Across production systems analysed in 2025, SIG found about 1.5% qualified as AI systems. The figure suggests business-critical AI remains in an early phase, even as experimentation and pilots expand.

The report also flags software quality concerns in the AI systems already in production. SIG found that 72% scored below its recommended build-quality threshold, increasing exposure to maintainability problems and operational risk in environments where downtime can have wide impacts.

Security exposure

The research highlights security as a distinct challenge for AI adoption in energy. AI adds attack surfaces through data pipelines, model behaviour, prompts and external dependencies, on top of existing vulnerabilities in many operational systems.

SIG's analysis also concludes that many energy systems already sit at or below industry-average security benchmarks, making uncontrolled AI adoption a material cyber and operational risk.

AI-assisted software development is another area the report examines, with outcomes varying significantly depending on implementation and oversight. SIG reported productivity results ranging from a 19% slowdown to a 26% speed-up.

In SIG's experiments, AI-generated code also raised security concerns, showing roughly double the security risk violations of comparable human-written projects.

Luc Brandts, chief executive officer of SIG, said boards face a visibility problem as AI moves into operational workflows.

“In energy, AI is turning up the speed and the stakes. Software decisions increasingly shape reliability, security, and operational continuity, yet leadership often has limited visibility into what systems exist, how they behave, and where the risks sit.”

Governance and oversight

The report argues that many barriers to safe scaling are organisational rather than technical. It cites weak software governance, uneven engineering practices and limited board-level oversight as recurring issues.

It also frames software as critical infrastructure in its own right, particularly as AI tools and models become embedded in operational processes. That shift moves technology decisions closer to core risk management and compliance functions.

Brandts said boards need a clearer view of their technology estate and measurable indicators they can track over time.

“What the enterprise needs in 2026 is a clear and unified view of the IT landscape, with measurable KPIs that leaders can question, and act on. With the clarity that comes with continuous software portfolio governance, organisations don't just react to AI's pace; they can steer it with strategic control.”

Regulatory complexity

The report maps regulatory pressures on energy operators across regions and describes divergent approaches in the EU, US, UK and APAC. It also notes that the EU classifies AI used in electricity, gas, heating and water as high-risk, raising expectations for governance, testing and ongoing monitoring.

SIG argues boards will face a shifting compliance landscape, with requirements varying by jurisdiction and evolving over time. This increases the need for governance models that can be applied consistently across business units and regions.

The report also points to international standards as a way to align internal controls with external assurance. SIG says standards can provide a shared language for how AI systems are developed, deployed, monitored and corrected.

Beyond AI governance, the report warns about operational readiness more broadly. It argues that misalignment between business, IT and risk teams becomes more costly as AI moves from experimentation into core operations, particularly in safety- and reliability-sensitive environments such as energy generation and distribution.

SIG recommends software portfolio governance, measurable quality metrics and shared visibility to reduce the risk of operational surprises as AI use expands. It positions these measures as tools for boards overseeing technology risk alongside traditional operational and financial risk.

SIG said it will continue research into AI software quality and security as energy organisations expand deployments from pilots into production systems closer to real-time operations.