AI deepfakes to drive rise in mobile cyber threats by 2026
Jamf expects artificial intelligence to fuel more sophisticated social engineering attacks and deepfake scams by 2026, as criminals exploit public events, mobile devices and executive profiles on social media.
The security software company also predicts a shift in how organisations manage mobile risk. It expects mobility teams to play a larger role in incident response as regulation tightens around mobile cyber security.
Michael Covington, VP of Strategy at Jamf, said AI will change the way attackers target people rather than systems.
"In 2026, the most effective malicious application of AI will be its use in crafting attacks that exploit humans. This will prompt an evolution of the spray-and-pray social-engineering techniques we currently see, but with more compelling multi-channel tactics that will chip away at the target's defensive instincts with carefully assembled misinformation.
Thanks to AI, criminals will need only basic information to develop highly targeted attacks. Attackers will link campaigns to events in the news and use social media to identify who is connected to or impacted by those events. These campaigns won't just target high-profile individuals, but anyone associated with them. Businesses are also at risk, as compromised users are the weakest link in the perimeter defences they rely upon."
Cyber security experts have warned that generative AI lowers the barrier for language-accurate phishing and impersonation at scale. It can also draw on large data sets from social platforms and public sources.
Jamf expects attackers to time their activity around news cycles and to tailor lures that reflect current events and personal connections visible online.
Mobility role shift
Covington said mobile and endpoint teams inside organisations will change how they work with central security functions.
"Teams responsible for security operations often send a list of potential risks to their mobile counterparts, seeking assurances that the appropriate actions have been taken to minimise the organisation's exposure. This happens because security teams lack real-time visibility into the overall risk posture of mobile devices. To avoid such headaches, mobility teams aim to become more proactive and faster in responding to security issues.
As a result, by 2026, mobility teams will begin to assume responsibility for mitigating mobile-related security risks. They will resolve issues as they are encountered, rather than waiting for sporadic check-ins with security, minimising friction for users and reducing the organisation's overall exposure to risk."
Many organisations have increased mobile use in critical workflows. Security tools have not always given operations teams a unified view of risk across laptops, smartphones and tablets.
Jamf's prediction suggests that device management teams will address threats at the point of use. It also suggests shorter feedback cycles between user behaviour and security controls.
Rise in CEO deepfakes
Adam Boynton, Senior Security Strategy Manager, EMEIA at Jamf, expects more use of synthetic media in executive fraud.
"In 2026, we will see more cybercriminals using deepfakes of high-profile CEOs and executives of major brands. These criminals won't just focus on transferring money but also on stealing data. As a result, information may be exposed in ways never seen before.
We have now reached a point where deepfakes are commercialised and freely available to anyone. Combined with CEOs who are highly active on social platforms such as LinkedIn, this will enable criminals to mimic their mannerisms and create highly accurate representations.
CEO deepfake attacks are likely to cause C-suite executives to rethink how active they are on social platforms and what they share. Employees may also become more cautious when asked to join a call with senior executives, for fear it could be a deepfake attack."
Deepfake tools now exist as consumer applications and web services. Many executives publish regular video content and commentary, which provides training material for synthetic voice and face models.
Security incidents involving fraudulent voice calls and video meetings have already led to internal reviews of finance approval flows in some firms. Jamf expects concerns over impersonation to spread into wider communication practices.
Regulation and mobile hygiene
Boynton said he expects regulatory pressure to change how organisations manage mobile devices that handle sensitive information.
"With mobile devices now being used in critical operations and accessing sensitive data, they are increasingly falling under regulations and frameworks such as NIS2 and Cyber Essentials Plus. In 2026, organisations will place greater focus on improving the cyber hygiene of their mobile devices, something they have not previously prioritised.
Businesses are often motivated by the fear of regulatory penalties, and this will be the primary driver behind improved mobile security. Security teams will place greater emphasis on implementing policies that monitor which OS versions mobile devices are running and ensure that users do not download apps from unofficial app stores."
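The policy checks described above can be sketched in a few lines. The following is a minimal illustration, not any vendor's real API: the device fields, platform names and minimum-version baselines are all hypothetical assumptions.

```python
# Minimal sketch of a mobile compliance check of the kind described above.
# Device records, field names and version baselines are hypothetical
# examples, not a real MDM schema or recommended policy values.

MIN_OS = {"ios": (17, 0), "android": (14, 0)}  # assumed minimum versions

def parse_version(v: str) -> tuple:
    """Turn a dotted version string like '17.4' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def check_device(device: dict) -> list:
    """Return the list of policy violations for one device record."""
    issues = []
    minimum = MIN_OS.get(device["platform"])
    if minimum and parse_version(device["os_version"]) < minimum:
        issues.append("outdated_os")
    if device.get("unofficial_app_store"):
        issues.append("unofficial_app_source")
    return issues

fleet = [
    {"id": "A1", "platform": "ios", "os_version": "17.4",
     "unofficial_app_store": False},
    {"id": "B2", "platform": "android", "os_version": "12.1",
     "unofficial_app_store": True},
]

report = {d["id"]: check_device(d) for d in fleet}
# report == {"A1": [], "B2": ["outdated_os", "unofficial_app_source"]}
```

In practice an MDM platform would supply the device inventory and enforce remediation; the point here is only that both checks the quote mentions, OS version and app source, reduce to simple attributes once that inventory exists.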
Regulators in several regions have expanded rules around incident reporting, software updates and third-party risk. These rules now cover a wider range of endpoints, including smartphones and tablets used by staff and contractors.
Jamf expects organisations to introduce stricter policies on operating system patching and app sources on mobile platforms. It also anticipates more structured reporting on mobile posture as part of compliance audits.
The company forecasts that by 2026 these trends in AI-enabled threats, deepfake fraud and regulatory oversight will change how businesses govern both user behaviour and device security.