Shadow AI use surges as staff trade security for speed
BlackFog has published research that links widespread workplace use of unapproved artificial intelligence tools with rising risks of data leakage and weak oversight inside large organisations.
The company's survey of 2,000 employees in the UK and US found that 86% use AI tools at least weekly for work-related tasks. The findings point to frequent use of tools outside formal IT controls and routine sharing of business information through those services.
Shadow AI use
The research focused on "Shadow AI", which refers to employees using AI tools that their employer has not sanctioned. BlackFog said 49% of respondents use AI tools not approved by their employer at work.
Among those users, 58% said they rely on free versions of tools. BlackFog linked that trend to uncertainty about where corporate information is stored and processed.
The survey also found that 34% of all respondents use free versions of company-approved AI tools. That pattern suggests employees may bypass paid tiers, even when an employer has selected an approved product.
Speed over security
The research indicates a willingness to accept risk when deadlines loom. BlackFog said 63% of respondents believe it is acceptable to use AI tools without IT oversight if no company-approved option is provided, and 60% agree that using unsanctioned AI tools is worth the security risks if it helps them work faster or meet deadlines.
The study also reported that 21% believe their employer would "turn a blind eye" to the use of unapproved AI tools as long as work is completed on time.
Leaders' attitudes
BlackFog broke down responses by seniority and found higher risk acceptance among senior staff. The company said 69% of respondents at President or C-level and 66% of those at Director or Senior VP level believe speed trumps privacy or security.
That compared with 37% in administrative roles and 38% in junior executive positions, according to the research.
BlackFog linked the findings to organisational pressure around productivity and delivery. It said these pressures can lead to staff adopting tools outside standard procurement and security processes.
Data sharing
The research suggests that employees have already submitted business information to unsanctioned AI tools. BlackFog said one-third of employees have shared research or data sets through these tools.
It also found that more than a quarter have shared employee data such as staff names, payroll, or performance information. The study reported that 23% have shared financial statements or sales data.
BlackFog also pointed to increased risk from plug-ins and connections between tools. The survey found 51% of employees have connected or integrated AI tools with other work systems or apps without IT department approval or oversight.
That can widen the routes by which data moves between systems. It also makes it harder for security teams to track where data goes and which third parties can access it.
Security response
BlackFog sells cybersecurity products and describes its focus as AI security and anti data exfiltration. It said its approach centres on preventing unauthorised data movement, including movement driven by "ungoverned AI tools".
Dr. Darren Williams, CEO and Founder, BlackFog, said, "This research is a stark indication not only of how widely unapproved AI tools are being used, but also the level of risk tolerance amongst employees and senior leaders. This should raise red flags for security teams and highlights the need for greater oversight and visibility into these security blind spots. AI is already embedded in our working world, but this cannot come at the expense of the security and privacy of the datasets on which these AI models are trained."
The findings add to a wider debate in large organisations about the balance between rapid adoption of generative AI tools and control over data handling. Many companies now face questions about which AI services staff can use, what information they can submit, and how to govern connections between those services and internal systems.
BlackFog said the survey covered employees working in companies with more than 500 employees in the UK and US.