
Nearly half of security pros see AI as top threat, survey finds


HackerOne has disclosed preliminary findings from its survey of 500 security professionals ahead of the release of its Annual Hacker-Powered Security Report.

The data reveals that 48% of respondents consider artificial intelligence (AI) to be the most significant security risk facing their organisations.

According to the survey, concerns about AI security centre on a few key areas. The most pressing issues identified were the leaking of training data (35%), unauthorised usage of AI within organisations (33%), and the hacking of AI models by outsiders (32%).

In addressing these AI security concerns, 68% of the surveyed professionals indicated that an external and unbiased review of AI implementations is the most effective method for identifying safety and security issues. This external review, often termed 'AI red teaming', involves the global security researcher community examining AI models to identify and mitigate risks, biases, malicious exploits, and harmful outputs.

Michiel Prins, co-founder of HackerOne, commented on the industry's approach to AI security, stating: "While we're still reaching industry consensus around AI security and safety best practices, there are some clear tactics where organisations have found success. Anthropic, Adobe, Snap, and other leading organisations all trust the global security researcher community to give expert third-party perspective on their AI deployments."

In addition to the HackerOne survey, a report sponsored by HackerOne and conducted by the SANS Institute explored the broader impact of AI on cybersecurity. It highlighted that over half (58%) of respondents predict AI could contribute to an "arms race" between the tactics and techniques used by security teams and cybercriminals. The report also noted optimism regarding AI's role in enhancing security team productivity, with 71% of participants expressing satisfaction with the implementation of AI to automate routine tasks.

Matt Bromiley, Analyst at The SANS Institute, elaborated on AI's potential upsides and challenges: "Security teams must find the best applications for AI to keep up with adversaries while also considering its existing limitations — or risk creating more work for themselves. Our research suggests AI should be viewed as an enabler, rather than a threat to jobs. Automating routine tasks empowers security teams to focus on more strategic activities."

HackerOne's own AI-powered co-pilot, Hai, aims to alleviate the workload of security teams by automating tasks and providing deeper vulnerability insights. According to the company, adoption of Hai has grown 150% since its launch, and the tool reportedly saves security teams an average of five hours of work per week. Additionally, AI-focused offerings such as AI Red Teaming have seen substantial growth, with a reported 200% quarter-over-quarter increase in Q2 and a 171% rise in the number of security programmes integrating AI assets.

HackerOne's full Hacker-Powered Security Report, which will include comprehensive details from the survey and additional analysis, is set to be released later this year. The survey covered security professionals from both the US and Europe and was conducted by Opinion Matters between 31 July and 6 August 2024.
