AI crime matures as jailbreak tools, malware & deepfakes grow
Trend Micro has reported that criminal use of artificial intelligence has moved into a more stable and professional phase, with repeatable services emerging across jailbreak tools, malware development and deepfake-enabled fraud.
The company's Forward Threat Research team based its latest assessment on analysis of underground services, malware samples and active attack campaigns in the closing months of 2025. It said criminal groups now focus less on experimentation and more on improving reliability and lowering costs.
Trend Micro described the shift as similar to the maturation of other underground markets, where offerings become standardised and easier to buy. It said earlier examples of jailbroken chatbots and proof-of-concept deepfake scams have evolved into services that criminals can rent and reuse.
"The 2026 outlook is not apocalyptic, but it is incremental", said David Sancho, Senior Threat Researcher at Trend Micro. "We won't be seeing a sudden explosion of AI-driven chaos. Instead, we will witness the steady, professional refinement of the toolkit we already see today. This kind of quiet progress is exactly how criminal ecosystems become harder to disrupt."
Jailbreak services
Trend Micro said one of the most visible developments involves consolidation around jailbreak and prompt-abuse services. It said underground forums continue to advertise "uncensored" AI tools, but many of these offerings disappear quickly.
The providers that persist tend to focus on bypassing restrictions in mainstream AI platforms rather than building independent models. Trend Micro said this approach allows criminals to make use of the investment and scale of commercial AI providers.
The company characterised this as a "Jailbreak as a Service" market. It said the trend reduces the need for specialist expertise among buyers and lowers the barrier for criminals who want to use generative AI for fraud or for content creation linked to cybercrime.
Adaptive malware
Trend Micro also reported early signs of malware families that query large language models and use the responses to generate or modify malicious code in real time. It said this marked a shift towards more adaptive designs that can vary between infections.
The company said these samples remain limited by practical constraints and stopped short of describing them as widely deployed at scale. Even so, it said the development signals a change in how some malware authors think about automation and variation across campaigns.
Trend Micro linked this trend to a broader pattern in which criminals treat AI systems as external services rather than building everything into malware binaries. It said embedded or remote AI queries allow malware to change its behaviour without carrying as much pre-packaged code.
Deepfakes mainstream
Trend Micro said deepfake tools have become cheap and widely accessible. It pointed to face swapping, voice cloning and image manipulation products available at little or no cost. It also pointed to the spread of "nudification" applications.
The company said synthetic media now features in a wider set of crimes. It cited impersonation scams, corporate infiltration and espionage, and virtual kidnapping schemes. It also flagged non-consensual synthetic imagery as a growing area of harm.
Trend Micro said these developments raise questions for identity verification, corporate security and individual safety. It described the implications as significant for trust in digital identity and for how organisations verify communications and transactions.
Defenders' position
Trend Micro said defenders still retain an advantage in many areas. It pointed to AI-enabled detection, threat intelligence and automated investigation tools in security operations. It also said the gap is narrowing as criminals become more systematic in how they apply the same technologies.
The company said safeguards may lag attacker adoption in some cases. It framed the wider issue as the normalisation of AI-assisted crime rather than a sudden jump in attacker sophistication.
"For organisations, this shift means preparing for AI-assisted attacks as a baseline condition rather than an emerging anomaly", added David Sancho. "Deepfake-driven fraud, identity abuse and AI-supported malware are no longer fringe threats, but issues that must be factored into security strategy, verification processes and incident response planning."