
Cyber attackers use AI to automate exploits & sell deepfakes
New analysis from ReliaQuest has found that cyber attackers are increasingly commercialising and refining the use of artificial intelligence (AI) in their operations, with vulnerability exploitation, much of it driven by automated discovery and SQL injection scanning, accounting for 45% of initial access in the firm's customer incidents over the past quarter.
AI-skewed threat landscape
The report, based on ReliaQuest's research and threat detection data, details how AI-powered bots and frameworks now automate much of the early-stage attack process. These tools not only accelerate the pace of exploitation but also lower the technical barrier to entry, putting advanced tactics within reach of less-skilled attackers.
Automation has progressed to the point that attackers now leverage AI as the "brain" behind malware campaigns. Where earlier use of large language models (LLMs) and deepfake technology merely amplified existing strategies, ReliaQuest has seen these techniques grow more widespread and sophisticated across both criminal and nation-state operations.
Malware adapts to AI defences
The report observes that while LLM-generated scripts often include distinctive markers, such as verbose code comments or generic variable names, attackers are adapting quickly. The 'Skynet' malware, for example, not only integrates sandbox evasion and Tor-encrypted communications but also loads prompt injection into memory to manipulate AI-based security tools.
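For a sense of what those markers look like in practice, the short heuristic below (an illustrative sketch, not taken from the report) scores a script on traits commonly attributed to LLM output, such as a high density of comments and generic variable names; the name list and weights are assumptions made for illustration.

    import re

    # Illustrative heuristic (not from the ReliaQuest report): score a script
    # on traits often attributed to LLM-generated code, such as a high ratio
    # of comment lines and generic variable names. Weights are arbitrary.
    GENERIC_NAMES = {"data", "result", "response", "output", "temp", "value"}

    def llm_style_score(source: str) -> float:
        lines = [ln.strip() for ln in source.splitlines() if ln.strip()]
        if not lines:
            return 0.0
        comment_ratio = sum(ln.startswith("#") for ln in lines) / len(lines)
        assigned = re.findall(r"\b([a-z_][a-z0-9_]*)\s*=[^=]", source)
        generic_ratio = (sum(name in GENERIC_NAMES for name in assigned) / len(assigned)
                         if assigned else 0.0)
        return 0.5 * comment_ratio + 0.5 * generic_ratio

    # Scripts scoring above a tuned threshold would be queued for human review.

Signals this weak are easy for attackers to scrub, which is precisely the adaptation the report describes.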
ReliaQuest's analysis cautions that "Relying solely on NGAV [next-generation antivirus] or other single-layer defences is no longer enough. Enterprises must embrace continuous innovation, combining defence-in-depth strategies with advanced detection capabilities to stay ahead."
AI upgrades to existing malware
Attackers continue to retrofit existing malware variants with new AI-backed features. The report highlights the evolution of the 'Rhadamanthys' infostealer into an AI-powered toolkit whose features include AI-driven password recovery, optical character recognition (OCR) for data extraction, and AI analytics for data tagging and campaign tracking.
These developments enable even inexperienced cybercriminals to conduct sophisticated campaigns: "Its integrated AI features enable even rookie criminals to conduct large-scale theft campaigns. The latest iteration automatically tags and filters stolen data based on perceived value and provides a dashboard to track campaign statistics."
Commercialisation of deepfakes
"Groups now position themselves as professional 'Deepfake-as-a-Service' operators, blending slick marketing with the shadowy ambiguity of deepfake technology that's dangerous in the wrong hands," the report says.
Services such as CREO Deepfakes and VHQ Deepfake sell highly realistic video content for applications ranging from impersonation scams to cryptocurrency marketing. Deepfake operators advertise advanced features, including geographic targeting and optimised traffic alignment, and the number of service providers is growing. The report notes, "Attacks are becoming smarter, more frequent, and tougher to detect."
Malicious GPTs and jailbreaking trends
ReliaQuest's research finds a growing trend of jailbreaking mainstream LLMs such as OpenAI's GPT-4o, Anthropic's Claude, and xAI's Grok. Jailbreak-as-a-service marketplaces now offer pre-built malicious prompts for phishing campaigns, malware scripts, and utilities for credit card validation and cryptocurrency laundering.
Many new malicious GPT offerings are simply repackaged public models sold at inflated prices. "Investigations revealed that many of these models simply utilised open APIs, added bypass instructions, and repackaged tools at significantly inflated prices - sometimes costing three times more than their original versions."
The report adds that, "Jailbroken versions remove ethical boundaries, content restrictions, and security filters, turning regulated tools into unregulated engines of cybercrime." This commoditisation is also lowering the technical threshold for less experienced criminals.
Automating vulnerability discovery at scale
ReliaQuest's latest data shows that 45% of initial access in customer incidents over the past quarter involved vulnerability exploitation, highlighting the impact of AI-driven automation. Autonomous AI frameworks and bots can now handle tasks such as asset scanning, vulnerability confirmation, and exploitation with little human oversight.
The report's findings state, "AI-powered bots are transforming the way weaknesses are identified, excelling at tasks like scanning for open ports, detecting misconfigurations, and pinpointing outdated software with unmatched speed and precision. These bots often outpace defenders' ability to patch vulnerabilities, creating new challenges for security operations teams."
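The speed advantage is easy to illustrate: even a few lines of code can sweep a host's ports concurrently, and the frameworks ReliaQuest describes chain many such steps together with little human input. The sketch below shows the scanning step alone (illustrative only; run it solely against systems you control).

    import socket
    from concurrent.futures import ThreadPoolExecutor

    # Illustrative only: a concurrent TCP port sweep, the kind of single step
    # that autonomous frameworks chain with vulnerability checks and exploitation.
    def check_port(host: str, port: int, timeout: float = 0.5):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            return port if s.connect_ex((host, port)) == 0 else None

    def sweep(host: str, ports: range) -> list:
        with ThreadPoolExecutor(max_workers=100) as pool:
            return sorted(p for p in pool.map(lambda p: check_port(host, p), ports) if p)

    print(sweep("127.0.0.1", range(1, 1025)))  # scan only hosts you own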
SQL injection automation
Automation is also reshaping SQL injection (SQLi) attacks, enabling attackers to discover and exploit web application vulnerabilities with minimal manual effort. The tool 'bsqlbf', for example, specialises in automating blind SQLi, letting attackers test payloads and confirm vulnerabilities without directly viewing the underlying data.
"Automation has transformed SQLi attacks, dramatically reducing the time, effort, and expertise needed. By streamlining discovery and exploitation, automated tools allow attackers to exploit vulnerabilities at scale, amplifying the risks posed by insecure applications and databases."
Defensive measures and key recommendations
ReliaQuest advises organisations to adopt a multi-layered, proactive security stance. Key recommendations include prioritising threat hunting, ensuring comprehensive system logging, training employees to spot AI-generated attacks, deploying advanced detection tools, and reviewing the use of AI within sensitive operational environments.
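On the detection front, even a simple rate-based heuristic over web server logs can surface the automated scanning described earlier. The sketch below is illustrative rather than ReliaQuest's tooling; the record format and thresholds are assumptions.

    from collections import Counter

    # Hypothetical log records: (source_ip, path, http_status). Automated
    # scanners typically produce bursts of 401/403/404 responses while
    # probing for known-vulnerable paths; thresholds here are illustrative.
    def flag_scanners(records, min_requests=50, min_error_ratio=0.8):
        totals, errors = Counter(), Counter()
        for ip, _path, status in records:
            totals[ip] += 1
            if status in (401, 403, 404):
                errors[ip] += 1
        return [ip for ip, total in totals.items()
                if total >= min_requests and errors[ip] / total >= min_error_ratio]

    # Flagged IPs would feed a blocklist or a threat-hunting queue.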
The report emphasises, "As AI-powered threats evolve, defenders must stay ahead by focusing on detecting malicious techniques, restructuring security processes, and addressing AI-related risks."