AI tools drive rapid surge in sophisticated phishing attacks, says Darktrace
Darktrace, the cyber security AI company, has released new data alongside its half-year financial results, detailing the changing face of phishing and raising questions about how ready organisations are to tackle AI-augmented cyber threats. The data shows that the sophistication of phishing attacks is growing rapidly, driven by the increasing availability and adoption of generative AI tools among cyber attackers.
Between September and December 2023, novel social engineering attacks - phishing emails characterised by more sophisticated language and punctuation than typical phishing messages - grew by 35%. The trend builds on data Darktrace published previously, which showed a sharp 135% average increase in these attacks between January and February 2023, around the time ChatGPT reached widespread adoption. The continued rise of these refined techniques suggests that cyber attackers are increasingly relying on generative AI tools to optimise their attacks.
The scale of phishing attacks is also rising. In December alone, Darktrace customers received 2,867,000 phishing emails, a 14% increase on September. The growing scale and sophistication of these attacks are pressing concerns for security teams, raising the question of how prepared their organisations are to defend against AI-driven threats.
That question of readiness was a focal point of a recent Darktrace survey of more than 1,700 security professionals worldwide. The survey found that 89% of IT security professionals believe AI-augmented cyber threats could have a significant impact on their organisation within the next two years, while 60% believe they are currently unequipped to defend against such attacks.
Respondents rated two concerns highest, each scoring 3.84 on a risk scale of 1 to 5: the increased volume and sophistication of malware attacks - such as those delivered by phishing emails - targeting known software vulnerabilities, and the potential for sensitive data leaks arising from employees' use of generative AI tools.
The rise of AI-enhanced cyber threats compounds the impact that automation and 'as-a-service' attacks already have on organisations. Darktrace's January threat report found that 'as-a-service' attacks, which give cybercriminals ready-made toolkits for their campaigns, already constitute the majority of attacks.
Darktrace's CEO, Poppy Gustafsson, commented on the rapidly evolving cyber-crime environment: "We continue to see the cyber-crime landscape evolve rapidly in a challenging geopolitical environment and as the availability of generative AI tools lowers the barrier to entry for hostile actors. Against this backdrop and in the period ahead, we are preparing to roll out enhanced market and product positioning to better demonstrate how our unique AI can help organisations to address novel threats across their entire technology footprint."