Norton adds AI-powered deepfake detection to Genie assistant
Norton has introduced deepfake detection capabilities in its Norton Genie AI assistant, now available via early access in the Norton 360 mobile app.
The new feature enables users to analyse audio and visual content for indications of manipulation, such as AI-generated voices or altered images. It is currently accessible on mobile devices in the US, UK, Australia, and New Zealand, with support for desktop environments expected in the near future.
Detection features
Norton Deepfake Protection in the Genie AI Assistant allows users to review videos and audio files for signs that they may have been tampered with using artificial intelligence techniques. The detection system works by evaluating video content for inconsistencies or subtle deformations in the physical characteristics of the individuals shown.
The tool extends beyond identifying deepfake voices to cover a broader range of manipulation, including the changes to facial features and movements that typically occur in AI-generated multimedia scams. If the Genie AI Assistant recognises potential deepfake material, it immediately provides users with relevant cybersecurity guidance and suggested next steps.
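Norton has not published how its detector works, but the idea of checking video for "inconsistencies or subtle deformations" can be illustrated with a toy example. The sketch below (all names and the threshold are invented for illustration, not Norton's method) flags frame transitions where tracked facial landmarks jump abruptly, since frame-by-frame synthesis can introduce discontinuities that natural footage lacks:

```python
import numpy as np

def landmark_jitter_scores(landmarks: np.ndarray) -> np.ndarray:
    """Mean per-frame displacement of facial landmarks between consecutive
    frames. landmarks has shape (frames, points, 2) holding (x, y) positions."""
    deltas = np.diff(landmarks, axis=0)            # (frames-1, points, 2)
    return np.linalg.norm(deltas, axis=2).mean(axis=1)

def flag_suspect_transitions(landmarks: np.ndarray, threshold: float = 3.0) -> list[int]:
    """Return indices of frame transitions whose jitter exceeds the threshold
    (threshold chosen arbitrarily for this toy example)."""
    scores = landmark_jitter_scores(landmarks)
    return [i for i, s in enumerate(scores) if s > threshold]

# Toy data: five landmarks drifting smoothly over ten frames,
# with one injected discontinuity at frame 6.
rng = np.random.default_rng(0)
frames = np.cumsum(rng.normal(0, 0.5, size=(10, 5, 2)), axis=0)
frames[6] += 10.0  # sudden deformation affecting transitions 5->6 and 6->7
print(flag_suspect_transitions(frames))  # -> [5, 6]
```

A production detector would combine many such signals (lighting, blink rate, audio-visual sync) inside a trained model rather than a fixed threshold, but the principle of scoring physical consistency across frames is the same.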
Leena Elias, Chief Product Officer at Gen, commented on the significance of the new feature. She stated:
"As AI-generated voices and faces become harder to distinguish from the real thing, trust is rapidly becoming one of the most fragile elements of our digital lives. The line between truth and deception is blurring, especially when malicious actors can abuse AI to create scams that replicate voices and imagery with startling realism. This is why we've made our deepfake protection accessible to people who don't have AI hardware, so they can confidently navigate and consume digital content without second-guessing what they see or hear."
The early access version focuses initially on English-language YouTube videos. Users can submit YouTube links to the Genie AI Assistant, which analyses the video for manipulation and provides real-time feedback on its authenticity. According to Norton, if potentially malicious AI-generated content is found, the Assistant will flag it and supply practical advice on how to proceed.
Planned improvements and rollout
The deepfake detection system is currently available in Norton 360 mobile products on both Android and iOS platforms, with expansion to desktop environments in progress. The company has also shared intentions to widen the tool's reach, with future updates planned to add more languages and platforms for deepfake analysis.
In addition, later in the year, deepfake protection will extend to AI PCs powered by Intel chipsets. Norton stated that these devices will benefit from more advanced detection capabilities in both desktop and mobile applications.
The launch reflects the ongoing shift in cybersecurity priorities driven by the increasing sophistication of AI-generated scams. Cybercriminals have turned to deepfakes as a tool to impersonate people and spread misinformation, leading to greater concerns for digital identity verification and safety. The new protection methods aim to address these risks by giving individuals the ability to check questionable audio or video content before it has a real-world impact.
Norton's development of new deepfake monitoring capabilities forms part of broader efforts to protect consumers from a range of AI-based scams. The company has indicated it will continue to expand its scam protection offerings across its suite of digital safety products.