March 18, 2026 · 12 min read

We Deployed Context Infrastructure at 3 Fortune 100 Companies. Here's What We Learned.

Every enterprise AI team we talk to has the same complaint: their agents have access to mountains of data but can't actually make good decisions. The problem isn't the models. It's the missing layer underneath.

Tags: Context Infrastructure · Enterprise AI · Decision Intelligence · AI Agents

Over the past eighteen months, our team at Intelligence Warehouse (part of Questt AI) has deployed what we call "context infrastructure" at three Fortune 100 enterprises — a telecom, a CPG company, and a global healthcare organization. Each had spent millions on AI. Each had agents in production. And each was stuck at roughly the same place: 60-70% accuracy on decisions that mattered.

This post is about what we learned. Not the polished version. The real version — including the things that surprised us, the assumptions we got wrong, and the framework that emerged.

The Pattern We Kept Seeing

When Foundation Capital published their thesis on context graphs as AI's trillion-dollar opportunity, it crystallized something we'd been observing in the field. Their argument is that context — not just data — is the binding constraint on AI's enterprise value. We'd been living that reality for months.

Here's what the pattern looks like from inside these organizations:

A revenue number is just a number. Without knowing which entities roll up into it, which formula calculates it, which thresholds trigger action, and which exceptions apply on Tuesdays in Q4 — an agent can't do anything useful with it.

We started calling this context fragmentation. And it turns out it's a category-level problem.

Context Fragmentation Is Not a Data Problem

This is the thing that took us the longest to articulate. Every enterprise we worked with assumed their AI accuracy problem was a data problem. More data, better data, cleaner data. They'd already invested in Snowflake or Databricks. They had vector databases. Some had knowledge graphs.

None of it was enough.

The issue is that there's a layer between raw data and good decisions — a layer made up of business definitions, metric formulas, decision rules, and the institutional knowledge that experienced operators carry in their heads. This layer doesn't live in any database. It lives in spreadsheets, Confluence pages, Slack threads, and most critically, in the minds of your best people.

Data warehouses solved data fragmentation. What we needed was something that solved context fragmentation — a structured layer where AI agents could query understanding, not just data.

That realization is what led us to build the Intelligence Warehouse.

What We Mean by "Context Infrastructure"

There's been a useful proliferation of tools in adjacent spaces. Gartner is writing about the "context mesh." Several startups are building agent memory layers, context management platforms, and enterprise graphs. These are all real and valuable efforts.

But there's an important distinction we had to learn the hard way: agent memory is not enterprise context.

Developer-oriented memory tools — systems like session stores, conversation memory, and retrieval caches — solve a real problem for individual agent interactions. They help an agent remember what happened in the last conversation or retrieve relevant chunks from a document.

Enterprise context infrastructure is a different thing. It's the structured, governed, queryable representation of how a business actually works. It's not what the agent remembers. It's what the agent needs to know before it can even begin reasoning.

The three layers that emerged

Across all three deployments, we converged on the same three-layer architecture:

Architecture

L1: Business Ontology — The entities, relationships, and hierarchies that define the business domain. What is a "customer"? How does a "region" relate to a "market"? What rolls up into what?

L2: Metrics & Formulas — The precise calculations that define KPIs. Not just "revenue" but the exact formula, the data sources, the edge cases, the adjustments.

L3: Decision Rules — The conditional logic that governs action. If churn risk exceeds X and lifetime value exceeds Y, then escalate. The rules that sit in your best operator's head.
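To make the three layers concrete, here is a minimal sketch of what each one might look like in code. Everything below is illustrative: the entity names, the revenue formula, and the thresholds are hypothetical examples, not the Intelligence Warehouse's actual schema.

```python
# L1: Business Ontology -- entities and the roll-up relationships
# between them. (Hypothetical hierarchy for illustration.)
ONTOLOGY = {
    "store_042": {"type": "store", "rolls_up_to": "region_west"},
    "region_west": {"type": "region", "rolls_up_to": "market_na"},
    "market_na": {"type": "market", "rolls_up_to": None},
}

def roll_up(entity: str) -> list[str]:
    """Walk the ontology from an entity to the top of its hierarchy."""
    path = [entity]
    while ONTOLOGY[entity]["rolls_up_to"]:
        entity = ONTOLOGY[entity]["rolls_up_to"]
        path.append(entity)
    return path

# L2: Metrics & Formulas -- the exact calculation, not a vague label.
def net_revenue(gross: float, returns: float, promo_adjust: float) -> float:
    """Example formula: net revenue = gross - returns - promo adjustments."""
    return gross - returns - promo_adjust

# L3: Decision Rules -- conditional logic an experienced operator applies.
def should_escalate(churn_risk: float, lifetime_value: float,
                    risk_threshold: float = 0.7,
                    ltv_threshold: float = 10_000) -> bool:
    """Escalate when churn risk exceeds X and lifetime value exceeds Y."""
    return churn_risk > risk_threshold and lifetime_value > ltv_threshold
```

The point of structuring each layer this way is that an agent can evaluate it directly (resolve a roll-up, run the formula, check a rule) instead of guessing at business logic from retrieved text.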

When agents can query all three layers, something changes qualitatively in their output. They stop hallucinating business logic. They stop confusing metrics. They start making decisions that look like they came from a 10-year veteran of the organization.

Lessons from Three Deployments

Lesson 1: The hardest knowledge to capture is the most valuable

At the telecom company, we spent the first two weeks trying to extract business rules from documentation. We found maybe 30% of what we needed. The rest lived entirely in the heads of senior network planners — people who'd been at the company for 15 or 20 years and had never been asked to articulate how they actually made decisions.

We built a structured interview process (we call it MORRIE internally) specifically designed to extract this tribal knowledge. It asks operators to walk through real decisions, probing not just what they decided but why — what thresholds they were watching, what exceptions they knew about, what patterns they'd learned to recognize.

The output isn't a document. It's structured context that agents can query at decision time.

Deployment Result

Fortune 100 Telecom: Decision accuracy went from ~65% to 96% in six weeks. The delta wasn't a better model. It was giving the existing model the context it was missing.

Lesson 2: Context has to be computable, not just retrievable

At the CPG company, they had a sophisticated forecasting pipeline and good data infrastructure. The problem was that their demand planning agents couldn't reason about promotions, seasonality adjustments, and regional exceptions the way their best planners could.

The critical insight: it wasn't enough to store context as text that agents could retrieve via RAG. Context needs to be structured so agents can compute with it. When an agent needs to forecast demand for a SKU, it needs to resolve the entity (which product family, which region), compute the relevant metrics (using the exact formula the business uses, not a hallucinated approximation), and evaluate the decision rules (what promotional lifts apply, what safety stock thresholds are in effect).

This is why we built the Intelligence Warehouse as an MCP server with discrete operations — resolve_entity, compute_metric, evaluate_rule, traverse_path — rather than as a document store. Agents don't retrieve context. They query it programmatically.
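A sketch of what that query sequence might look like from the agent's side. The four operation names come from the post; the signatures, payloads, and the toy in-memory "server" below are our assumptions for illustration, not the actual Intelligence Warehouse API.

```python
class ContextServer:
    """Toy stand-in for a context server exposing discrete operations."""

    def __init__(self, ontology: dict, metrics: dict, rules: dict):
        self.ontology, self.metrics, self.rules = ontology, metrics, rules

    def resolve_entity(self, name: str) -> dict:
        """Map an entity name to its canonical record."""
        return self.ontology[name]

    def compute_metric(self, metric: str, **inputs) -> float:
        """Evaluate a governed formula, not a hallucinated approximation."""
        return self.metrics[metric](**inputs)

    def evaluate_rule(self, rule: str, **facts) -> bool:
        """Run a codified decision rule against current facts."""
        return self.rules[rule](**facts)

    def traverse_path(self, start: str) -> list[str]:
        """Follow roll-up relationships upward from an entity."""
        path, node = [start], self.ontology[start].get("rolls_up_to")
        while node:
            path.append(node)
            node = self.ontology[node].get("rolls_up_to")
        return path

# A toy instance: one SKU, one demand formula, one reorder rule.
server = ContextServer(
    ontology={
        "sku_123": {"family": "snacks", "rolls_up_to": "region_west"},
        "region_west": {"rolls_up_to": None},
    },
    metrics={"baseline_demand": lambda units, promo_lift: units * (1 + promo_lift)},
    rules={"reorder": lambda on_hand, safety_stock: on_hand < safety_stock},
)

# The agent's three steps for a demand-planning decision:
entity = server.resolve_entity("sku_123")
demand = server.compute_metric("baseline_demand", units=1000, promo_lift=0.15)
reorder = server.evaluate_rule("reorder", on_hand=300, safety_stock=500)
```

The design choice this sketch illustrates: each call returns structured, computable values the agent can reason over, rather than text chunks it has to interpret.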

Deployment Result

Fortune 50 CPG: 96% forecast accuracy with a 34% reduction in stockouts. Six-week deployment. The forecasting model didn't change. The context layer underneath it did.

Lesson 3: Speed of context assembly determines speed of insight

The healthcare deployment was different. The clinical teams weren't struggling with accuracy on individual decisions. They were struggling with the time it took to assemble enough context to make any decision at all. A diagnostic recommendation that requires pulling from five different systems, cross-referencing three different protocols, and checking two regulatory constraints takes hours of manual work.

We learned that context infrastructure isn't just about accuracy. It's about the latency of understanding. When the Intelligence Warehouse provides pre-structured context — with the ontology already mapped, the metrics already defined, and the decision rules already codified — agents can go from signal to insight in seconds instead of hours.

Deployment Result

Global Healthcare Organization: 96% diagnostic accuracy with same-day signal-to-insight. The clinical staff had the same data before. What changed was the speed at which AI could assemble the context around it.

What This Means for the Market

We're watching the AI infrastructure stack mature in real time. The data layer is solved. The model layer is commoditizing. The orchestration layer is crowded. But the context layer — the part that makes AI actually understand a business — is still early.

Foundation Capital's framing of this as a trillion-dollar opportunity resonates with what we see in the field. Every enterprise running AI agents will need some form of context infrastructure. The question is what that looks like.

Our bet — and it is a bet, informed by three hard deployments — is that it looks like a warehouse. Not a warehouse for data (that exists), but a warehouse for intelligence. A structured, governed, queryable system that encodes how a business actually works and exposes it to any agent, any workflow, any decision point.

We think the analogy is precise: just as data warehouses emerged to solve data fragmentation in the analytics era, intelligence warehouses will emerge to solve context fragmentation in the agentic era. That's the category we're building.

Where We Go from Here

We're still early. Three deployments have taught us an enormous amount, but the surface area of this problem is vast. Every industry vertical has different context structures. Every enterprise has different tribal knowledge. The tooling for extracting, structuring, and serving context at enterprise scale is still being invented.

What we know for certain: the enterprises that invest in context infrastructure now will have a durable advantage as AI agents become the primary interface for business operations. Models will keep getting better. Data will keep getting cheaper. But the structured understanding of how your specific business works — that's a moat.


If you're running AI agents in production and hitting the context wall, we'd like to compare notes. We're working with a small number of enterprises on context infrastructure deployments.

Start a Conversation