Enterprises confront data gaps in scaling GenAI to 2026
K2view has published a benchmark report arguing that many large enterprises are preparing to scale generative AI while relying on data architectures built for analytics, not production AI workloads.
Titled The 2026 State of Enterprise Data Readiness for GenAI, the report draws on a survey of 300 senior IT and data executives at US and UK companies with more than 1,000 employees. It examines readiness for generative AI and agentic AI in production, where systems run on live business data as part of operational processes.
K2view frames the results as evidence of a gap between GenAI ambitions and the data foundations needed for dependable deployment. It links that gap to governance, reliability and security risks, and to the challenge of supplying up-to-date information to models and agents during real-time interactions.
Production pressure
A significant share of respondents reported near-term production plans. The survey found that 45% of organisations plan early production GenAI deployments in 2026, up from the 2% that reported production deployments in 2024.
Despite this acceleration, deployment concerns remain prominent. Responsible-use guardrails were cited by 76% of respondents as a top concern, followed by workforce skills at 66%.
Enterprise data readiness emerged as a major technical barrier: 62% cited it as a top concern for production GenAI. Reliability of large language model responses ranked next at 52%.
Data constraints
The report identifies common obstacles to using enterprise data for GenAI in production. The most frequently cited issue was data quality and consistency (59%).
Fragmentation across systems also featured heavily. Half of respondents cited fragmented data across systems, and the same share cited data security and privacy.
Real-time data integration and access was flagged by 33% of respondents, a smaller share than the other obstacles.
Overall, the responses suggest many organisations see issues spanning governance, architecture and operational practice rather than model selection alone. In production, latency, permissions and traceability can matter as much as pilot performance.
Architecture mismatch
The report argues that many enterprises are building GenAI programmes on platforms not designed for inference-time operational workloads. Respondents most commonly cited data warehouses as foundational sources for GenAI (78%), followed by operational systems of record (66%).
Lakehouses and vector databases were also widely referenced: 58% cited lakehouses and 57% cited vector databases for knowledge bases and unstructured data.
The report describes these technologies as primarily designed for analytics, point-to-point integration and document retrieval. It contrasts that with production GenAI use cases that require governed access to current enterprise data in the flow of work, such as agentic workflow automation, claims processing and real-time customer service.
Agentic outlook
More advanced agentic AI use cases appear to be at an early stage despite rising expectations. Only 13% of respondents said they plan to deploy agentic AI applications to production in 2026.
The report also indicates limited adoption of the Model Context Protocol (MCP). While 53% of organisations are assessing vendors and approaches, just 1% reported MCP as operational or in production.
Cost is also emerging as an executive consideration. The report estimates that retrieved data context can account for roughly 50% to 65% of query token costs, linking data strategy to the economics of serving AI-driven interactions at scale.
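To illustrate how retrieved context can dominate query economics, here is a minimal sketch. The token counts and the assumption of a uniform per-token price are hypothetical, not drawn from the report; real pricing usually differs for input and output tokens.

```python
# Illustrative only: estimate the share of a query's token cost that is
# attributable to retrieved data context, assuming hypothetical token
# counts and a flat per-token price.

def context_cost_share(context_tokens: int, prompt_tokens: int,
                       output_tokens: int) -> float:
    """Fraction of total query token cost spent on retrieved context.

    Assumes a uniform per-token price, so the cost share reduces to a
    simple token-count ratio.
    """
    total = context_tokens + prompt_tokens + output_tokens
    return context_tokens / total

# Assumed numbers: 3,000 tokens of retrieved context per query,
# 500 tokens of instructions/user prompt, 1,500 tokens of model output.
share = context_cost_share(3_000, 500, 1_500)
print(f"{share:.0%}")  # 60%, within the report's estimated 50-65% range
```

Under these assumed figures, trimming retrieved context directly reduces the largest single component of per-query cost, which is the link the report draws between data strategy and serving economics.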
Ronen Schwartz, Chief Executive Officer of K2view, said the pattern reflects a familiar shift from pilots to production.
"The industry is trying to operationalize GenAI on top of data architectures built for analytics. That may be enough for pilots, but it breaks down in production, where AI systems need trusted, governed, real-time access to enterprise data in the flow of work. APIs, lakes, and vector stores each play a role, but on their own, they are not enough to support production-scale enterprise GenAI."
The findings come as many large organisations in the US and UK move from experimentation to operationalising GenAI across customer-facing and internal workflows, while data governance, security and architecture work continues in parallel.