SecurityBrief UK - Technology news for CISOs & cybersecurity decision-makers
Bugcrowd launches AI bias assessment for secure LLM application use
Thu, 18th Apr 2024

San Francisco-based Bugcrowd, known for its crowdsourced security model, has launched an AI Bias Assessment as part of its AI Safety and Security Solutions portfolio. Hosted on the Bugcrowd Platform, the new offering draws on the platform's community of security researchers to help companies and government agencies adopt Large Language Model (LLM) applications securely, efficiently and with confidence.

As more businesses adopt LLM applications, which depend on large data sets and algorithmic models, concerns about data bias are growing. Biases may be inherent in the training data, whether from pre-existing societal prejudices, inappropriate language, or representation imbalances, and these flaws can lead to harmful behaviour from the resulting models. Effective, comprehensive bias assessment has therefore become a pressing requirement, particularly for public sector organisations. The US Government, for instance, has required all federal agencies to comply with AI safety guidelines, including the detection of data bias, since March 2024; the requirement extends to federal contractors later in 2024.
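Representation bias of the kind described above can be illustrated with a simple check of group proportions in a training corpus. This is a hypothetical sketch for illustration only; Bugcrowd has not published its detection methodology, and real-world bias detection is considerably more involved:

```python
from collections import Counter

def representation_gaps(samples, expected, threshold=0.5):
    """Flag groups whose share of a training corpus falls well below
    the share expected in the target population.

    samples   -- iterable of group labels observed in the data
    expected  -- dict mapping group label -> expected population share
    threshold -- flag a group if observed share < threshold * expected share
    """
    counts = Counter(samples)
    total = sum(counts.values())
    gaps = {}
    for group, exp_share in expected.items():
        obs_share = counts.get(group, 0) / total
        if obs_share < threshold * exp_share:
            gaps[group] = (obs_share, exp_share)
    return gaps

# Toy corpus: group "b" is heavily under-represented relative to expectation.
data = ["a"] * 90 + ["b"] * 10
print(representation_gaps(data, {"a": 0.5, "b": 0.5}))  # {'b': (0.1, 0.5)}
```

A model trained on such a corpus would see group "b" far less often than the deployment population suggests it should, which is one common source of the downstream behavioural flaws the article describes.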

Bugcrowd's answer to this challenge is the AI Bias Assessment: a reward-for-results programme hosted privately on the Bugcrowd Platform, in which trusted third-party security researchers are incentivised to identify and prioritise data bias flaws in LLM applications. Higher-impact findings earn higher payouts, a pay-for-results model that complements traditional security testing, whose scans and penetration tests can overlook bias.
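The reward-for-results mechanic can be sketched as a tiered payout table in which each unique flaw is paid once, at a rate set by its priority. The tier names and dollar amounts below are hypothetical; Bugcrowd has not published the programme's actual reward structure:

```python
# Hypothetical priority tiers and payout amounts, for illustration only.
PAYOUTS = {"P1": 10_000, "P2": 5_000, "P3": 1_500, "P4": 500}

def total_reward(findings):
    """Pay each unique flaw once (first valid report wins), at its tier rate.

    findings -- list of (flaw_id, tier) tuples in submission order
    """
    paid = set()
    total = 0
    for flaw_id, tier in findings:
        if flaw_id not in paid:
            paid.add(flaw_id)
            total += PAYOUTS[tier]
    return total

# Duplicate report of flaw "x" is not paid twice.
print(total_reward([("x", "P1"), ("x", "P1"), ("y", "P3")]))  # 11500
```

Weighting payouts by priority is what makes the scheme "results-based": researchers are steered toward the highest-impact bias flaws rather than toward volume of reports.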

"Bugcrowd's work with customers like the US DoD's Chief Digital and Artificial Intelligence Office (CDAO), alongside our partner, ConductorAI, provides a significant testing ground for AI detection by utilising the crowd to identify data bias flaws," said Dave Gerry, Bugcrowd's CEO. "We are excited about the opportunity to share the lessons we've gleaned with other customers facing similar trials."

ConductorAI founder Zach Long praised the partnership with Bugcrowd on the AI Bias Assessment programme, calling it a resounding success. "This collaboration allows us to lead the way in public adversarial testing of LLM systems for bias on behalf of the DoD. This has prepared a sturdy base for the potential of future bias bounty projects, demonstrating a firm commitment to the ethics of AI," Long said.

Bugcrowd has built a strong reputation for its 'skills-as-a-service' approach to security, which consistently uncovers more far-reaching vulnerabilities than traditional methods. Over the past decade the model has proven its worth, with almost 1,000 customers reporting tangible benefits and measurable ROI. In 2023 alone, customers identified nearly 23,000 high-impact vulnerabilities on the Bugcrowd Platform, potentially forestalling up to $100 billion in breach-related costs.

As Casey Ellis, Founder and Chief Strategy Officer of Bugcrowd, observed, "With a leading reputation as a provider of crowdsourced security platforms, Bugcrowd is uniquely equipped to meet the novel challenges of AI Bias Assessment. It adds to our list of previous technology triumphs like mobile, automotive, cloud computing, crypto, and APIs."