Experts stress urgent need for AI safety amid rising cyber threats
The rise of artificial intelligence (AI) has introduced a host of new challenges and threats in the cyber domain. Experts have underscored the urgency of addressing AI safety, particularly given its burgeoning use in cyber warfare. Alongside these discussions, and ahead of the AI Seoul Summit, the UK government has launched the AI Safety Institute's new evaluation platform, Inspect, aimed at ensuring the safe innovation of AI models.
Andy Norton, European Cyber Risk Officer at Armis, highlighted the critical nature of AI safety. "AI safety is crucial, particularly as AI is already a driving force behind cyberattacks. Organisations can't wait for guidance from these safety summits. AI-powered cyberattacks are already supercharging cyber warfare, and this demands an immediate response."
Norton is adamant that the threat posed by weaponised AI cannot be taken lightly. "Threat actors are weaponising AI to not only manipulate information and opinion on a massive scale but also erode public trust, attack the media and sow discord. Disinformation campaigns that disrupt and destabilise economies will escalate with the rise of Large Language Models (LLMs), deep fakes, sophisticated voice replication and social media technologies."
One survey found that 45% of UK IT leaders believe cyber warfare could lead to cyberattacks on the media, and 37% think it could undermine the integrity of elections. "Organisations must move towards a more proactive stance and future-proof their defences. Security leaders need to fight fire with fire, using AI-powered solutions that empower them with actionable intelligence to stop AI-fuelled threats in their tracks. With the right foresight, organisations can detect the dangerous misuse of AI, stopping threats before any harm is done. Forewarned is forearmed," Norton concluded.
In parallel with these discussions, the UK government has continued its drive toward greater global collaboration on AI safety by launching the AI Safety Institute's new evaluation platform, known as Inspect. The platform, an open-source toolset made available to the global AI community, is intended to support the safe innovation of AI models.
Veera Siivonen, Chief Commercial Officer and Partner at Saidot, an AI governance platform, praised this move. "The launch of the AI Safety Institute's new testing platform, Inspect, reveals a regularly overlooked aspect of AI safety: evaluations. Much like complex biological systems, generative AI models can be difficult, if not impossible, to understand by looking solely at their component parts. Even if you went line-by-line through the code, you would emerge none the wiser."
Siivonen emphasised that evaluations are indispensable for mitigating the risks of 'black box' AI models, with their opaque decision-making processes and hidden biases in training data. "Enterprises need evaluations to understand risks, biases, and safety implications. This should form a critical part of a business's AI strategy, enabling them to implement AI safely and effectively."
Siivonen also highlighted the platform's ability to evaluate a wide range of AI capabilities and provide a safety score. "This empowers organisations, big and small, to not only harness AI's potential but also ensure it is used responsibly and safely. This is a step towards democratising AI safety, a move that will undoubtedly drive innovation while safeguarding against the risks associated with advanced AI systems. Risk management is not just about prevention; it's about proactively taking steps to address risks before they have an impact, and allowing businesses to take advantage of the transformative potential of generative AI," said Siivonen.
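For readers curious about what such an evaluation looks like in practice, Inspect is distributed as an open-source Python framework, and its published documentation shows evaluations defined as small Python tasks that pair a dataset of prompts with a scoring method. The sketch below is a minimal example modelled on that documented pattern; the task name, sample data, and model name are illustrative, and parameter names may differ between releases.

```python
# Minimal Inspect evaluation sketch, modelled on the framework's
# published examples. Task name, sample data, and model are illustrative.
from inspect_ai import Task, task
from inspect_ai.dataset import Sample
from inspect_ai.scorer import includes
from inspect_ai.solver import generate, system_message

@task
def security_awareness():
    return Task(
        # Each Sample pairs an input prompt with the expected answer.
        dataset=[
            Sample(
                input="A caller claiming to be IT support asks for your "
                      "password to 'fix' your account. What kind of attack "
                      "is this?",
                target="social engineering",
            ),
        ],
        # The plan controls how the model is prompted before scoring.
        plan=[
            system_message("Answer with a single security term."),
            generate(),
        ],
        # includes() checks whether the target string appears in the
        # model's output, yielding a per-sample score.
        scorer=includes(),
    )
```

A task file like this is then run from the command line against a chosen model, for example `inspect eval security_awareness.py --model openai/gpt-4`, with the framework recording transcripts and scores for review.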
Amanda Brock, CEO of OpenUK, a non-profit representing open technology in the UK, commented on what the platform means for the UK's strategic position in the AI safety space. "The positioning of the UK in the AI safety space is key to the current strategy, and the success of the Safety Institute's new platform can now only be measured by how many of the companies that have already committed to having their AI platforms tested actually go through this process. Given the UK's slow approach to regulating, which I agree with, this platform simply has to be successful for the UK to have a place in the future of AI. All eyes will now be on South Korea and the next safety summit to see how this is received by the world."
As the world looks ahead to the discussions in Seoul and watches the rollout of platforms like Inspect, the importance of AI safety is becoming increasingly evident. Organisations and governments alike have a pivotal role to play in ensuring that AI is integrated into society with caution, foresight, and a comprehensive understanding of the potential risks and benefits.