SecurityBrief UK - Technology news for CISOs & cybersecurity decision-makers
Organisations ramp up AI tool blocks to counter shadow AI risks

Research from DNSFilter reveals an increase in organisations blocking access to generative AI tools in efforts to mitigate risks associated with shadow AI.

DNSFilter has reported that it blocked over 60 million generative AI queries during March, representing approximately 12% of all such queries processed by its DNS security platform that month.

Notion accounted for the vast majority of these blocked requests, making up 93% of all generative AI queries stopped by DNSFilter and surpassing the combined total of queries for Microsoft Copilot, SwishApps, Quillbot and OpenAI.

According to the company's analysis, the average monthly number of generative AI-related queries processed has been over 330 million since January 2024, indicating growing usage and interest in these tools within professional environments.

Alongside increasing usage, organisations are developing policies to manage and regulate the adoption of generative AI technologies among their employees. Many have opted to block specific domains via DNS filtering, aiming to exert greater control and comply with internal policies designed to reduce the prevalence of shadow AI - the use of AI tools that operates outside the awareness or control of IT and security teams.

While the presence of malicious and fake generative AI domains such as those impersonating ChatGPT has seen a significant decrease - down 92% from April 2024 to April 2025 - DNSFilter's data shows a notable shift by threat actors towards domains containing "openai" in their names. There has been a 2,000% increase in such malicious sites during the same period, highlighting the evolving threat landscape related to generative AI.

The use of DNS-based filtering allows businesses to manage not just cyber threats but also the internal adoption of AI tools, ensuring that only approved solutions are accessible. This helps mitigate risks associated with unsanctioned adoption of generative AI, particularly when such adoption takes place without oversight from IT or security professionals.
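At its core, this kind of DNS-based filtering resolves each lookup against a policy blocklist before answering, typically matching the queried hostname and its parent domains against blocked entries. A minimal sketch of that suffix-matching check is shown below; the domain list and function name are illustrative only, not DNSFilter's actual categories or implementation:

```python
# Illustrative blocklist of generative AI domains an organisation might
# choose to block (hypothetical entries, not DNSFilter's real policy data).
BLOCKED_AI_DOMAINS = {
    "chat.openai.com",
    "copilot.microsoft.com",
    "quillbot.com",
    "notion.so",
}

def is_blocked(hostname: str) -> bool:
    """Return True if the hostname or any parent domain is on the blocklist."""
    labels = hostname.lower().rstrip(".").split(".")
    # Check the hostname itself, then each parent domain (suffix match),
    # so "app.notion.so" is caught by a "notion.so" blocklist entry.
    for i in range(len(labels) - 1):
        candidate = ".".join(labels[i:])
        if candidate in BLOCKED_AI_DOMAINS:
            return True
    return False

print(is_blocked("app.notion.so"))  # subdomain of a blocked domain -> True
print(is_blocked("example.com"))    # not on the list -> False
```

A real deployment would apply this check inside the resolver and return a block page or NXDOMAIN instead of the genuine record, which is what prevents unsanctioned tools from working on the corporate network.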

Ken Carnesi, Chief Executive Officer and co-founder of DNSFilter, commented: "Companies know the benefits that generative AI offers, but they also know the potential cybersecurity risks. More organisations are now proactively choosing which tools to block and which to allow. That way, employees can't 'sneak' AI tools into the corporate network or inadvertently use a malicious one. In this way, a DNS filtering solution helps companies enforce policies, block possible threats and enable greater productivity all at the same time."

DNSFilter's data underscores the tension between the drive to leverage generative AI for workplace productivity and the imperative to maintain robust security controls against emerging threats linked to shadow AI and domain-based impersonation.
