SecurityBrief UK - Technology news for CISOs & cybersecurity decision-makers
Illena Armstrong

Safe AI needs all voices: Celebrating the women who help drive CSA's AI safety initiative

Thu, 5th Mar 2026

International Women's Day invites us to take a moment to both celebrate and assess the progress we're making in the interdependent realms of AI, cloud, zero trust, and all things cybersecurity. It lets us pause to examine where gaps remain and recommit to closing them. 

In the world of artificial intelligence and cloud security, unvarnished scrutiny has never been more urgent. As AI systems grow more powerful and more embedded in daily life, the question of who builds these systems - and whose safety they prioritize - isn't just a matter of equity, but a matter of security, safety, and trustworthiness. 

The Cloud Security Alliance's AI Safety Initiative exists precisely because secure, safe, trustworthy, and reliable AI doesn't happen by accident. It requires rigorous frameworks, cross-disciplinary collaboration, and governance structures that account for the full spectrum of human experience. This International Women's Day, it's worth acknowledging both the many women with diverse backgrounds and areas of expertise who are helping to advance that mission and why their collective experiences are a technical imperative, not just a moral one.

When Bias Becomes a Security Risk

AI systems learn from the world as it is - meaning they can encode its existing inequities. Research has repeatedly shown that machine learning models, when trained without careful attention to equity and representation, can produce outputs that disadvantage women and other marginalized groups. We've seen reports of algorithms that penalize résumé language associated with female candidates, and many of us have watched generative AI models default to gender stereotypes when describing leadership roles.

These aren't merely social failures. They are meaningful system vulnerabilities: AI behaving in ways its designers didn't intend and its users cannot trust. The CSA's AI Safety Initiative directly confronts these risks through vendor-neutral frameworks, educational programs, research, guidance, playbooks, events, and more.

For example, our vendor-neutral AI Controls Matrix - a public framework that enables organizations to develop, implement, and operate AI technologies securely and responsibly - includes technical AI security controls that provide guidance on areas like bias monitoring and mitigation, fairness checks, and data lineage. The same is true of our Trusted AI Safety Expert education and certificate program, which covers critical areas like generative AI architecture and design, ethics and transparency, and continuous learning and adoption. Both initiatives were built by academic and industry experts drawn from a diverse group of nonprofits, private companies, service providers, government agencies, and related organizations.
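To make the idea of a fairness check concrete, here is a minimal sketch of one common metric such controls point to: the demographic parity difference, the gap in positive-outcome rates between two groups in a model's decisions. This is illustrative only - the function names and sample data are hypothetical, and this is not CSA tooling or an official Controls Matrix implementation.

```python
# Illustrative fairness check: demographic parity difference.
# (Hypothetical example; not CSA's official tooling.)

def positive_rate(decisions):
    """Fraction of decisions that are positive (e.g., 'hire' = 1)."""
    return sum(decisions) / len(decisions)

def demographic_parity_diff(decisions_a, decisions_b):
    """Absolute gap in positive-outcome rates between group A and group B.
    Values near 0 suggest parity; large gaps warrant investigation."""
    return abs(positive_rate(decisions_a) - positive_rate(decisions_b))

# Hypothetical model decisions for two demographic groups.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]  # 6/8 = 75% positive
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # 3/8 = 37.5% positive

gap = demographic_parity_diff(group_a, group_b)
print(f"Demographic parity difference: {gap:.3f}")  # 0.375
```

In practice a monitoring control would run a check like this continuously over production decisions and alert when the gap exceeds a policy threshold, feeding the mitigation and data-lineage controls described above.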

Confronting these risks effectively requires diversity in the teams doing the work. Homogeneous teams share blind spots: they ask similar questions, make similar assumptions, and stress-test systems against similar threat models. These echo chambers reinforce existing beliefs, entrench intellectual complacency, and strengthen biases, resulting in unexamined decisions and bad outcomes. Diverse teams are demonstrably better at identifying edge cases, surfacing unintended consequences, and building AI that performs equitably across populations. Representation isn't a soft value bolted onto AI safety work; it is what makes that work rigorous and its results effective, impactful, and long-lasting.

Women Advancing AI Safety

Across CSA and its many working groups, women are contributing expertise spanning governance, technical research, risk management, and policy. Whether co-authoring guidance on AI organizational responsibilities, shaping frameworks for large language model (LLM) safety, or helping define standards for AI vulnerability assessment, they are building the scaffolding on which trustworthy AI can stand.

The work is expansive and encompasses myriad projects (e.g., CSA LabSpace, the RiskRubric.ai offerings, and numerous research releases). Women working at this intersection of AI ethics, cloud security, and policy and governance aren't operating at the margins of the field - they're central to it. We're grateful to all the women executives and professionals supporting and championing this vendor-agnostic work.

It's worth naming that contribution explicitly, because visibility matters. When young women considering careers in cybersecurity or AI see themselves reflected in the people shaping the field's most critical safety frameworks, the pipeline grows, the community strengthens, and the work improves.

The Road Ahead: Inclusion as Infrastructure

Celebrating progress and acknowledging gaps aren't mutually exclusive. The cybersecurity industry has made meaningful strides in gender diversity, but women still represent a minority of the workforce - and an even smaller share of senior technical and leadership roles. That gap has consequences for the quality and equity of the systems we build.

CSA's commitment to open, collaborative working groups and the many C-level advisory councils that underpin our initiatives offer a genuine on-ramp. Practitioners at every career stage can contribute. Indeed, actively recruiting women into those working groups is one of the most direct equalizers the community has for improving both representation and outcomes. 

And that becomes ever more important as we evolve our mission to address the agentic control plane, where AI agents and systems are beginning to take autonomous action within business processes. While this shift boosts productivity and efficiency and helps businesses pursue their goals more strategically and rapidly, it also introduces risk at a scale we've not seen before.

All organizations have a role in ensuring that safety, security, trust, equity, and representation remain top of mind as we continue to build AI systems. Conducting honest audits of AI development teams, sponsoring women into AI safety and cloud security roles, and embedding equity review into AI governance processes are practical steps that compound over time. They make organizations safer. They make AI systems safer. And they make the field more reflective of the world it's meant to serve.

A Shared Mission

International Women's Day is about recognizing our collective human potential, and inclusion of all voices is vital to realizing it. Undeniably, the creation of safe AI needs all of those voices. This isn't just a progressive sentiment for a single moment in time; it's a vital design principle for building secure, reliable, and trustworthy AI systems now and into the future.