Case Study
Telecommunications Infrastructure

A Fortune 100 Telecom Provider Went from a Failing AI Pilot to 96% Decision Accuracy in 6 Weeks

One of the world's largest network modernization programs needed AI that understood how the business thinks, not just where the data lives.

96.1%
Decision accuracy
first use case
6 weeks
Kickoff to
production
~11 days
Avg. time for each
subsequent use case
<10%
Relative cost per
subsequent use case
01

The data existed everywhere.
The understanding existed nowhere.

A multi-year radio access network modernization across thousands of live cell sites. Four parallel operational tracks. And the knowledge that connected it all was trapped in the heads of a handful of senior program managers.

4,200+
Live sites under modernization
23
Workflow steps across 6 stages
5+
Conflicting site identity systems
4
Parallel tracks per site

Field execution, HSE compliance, outage management, and formal closeout ran simultaneously at every site. Vendor teams, general contractors, program managers, and safety inspectors each held a different fragment of the operational picture.

No single person or system could answer the question that mattered: "What should happen next at this site, and why?" The tribal knowledge that connected these fragments lived entirely in people's heads. When those people were unavailable, the program slowed down.

02

The "talk to your data" pilot produced
if-else answers for a graph-shaped problem.

An initial AI pilot took the standard route: connect an LLM to operational databases, let teams query in natural language. Text-to-SQL. RAG over documentation.

~55%
Decision Accuracy

The system could retrieve data and apply basic conditional logic, but it produced flat, if-else style outputs that missed the relational complexity of the domain. A site could be "Ready" in the tracker and effectively blocked in reality.

×

No relational reasoning. The LLM had access to tables, not to a business model. It could not connect a "Ready" site to its expired maintenance window and a vendor whose HSE certification lapsed last week.

×

Tribal knowledge cannot be prompted. The rules governing vendor eligibility, escalation paths, and exception handling were never written down. No amount of prompt engineering could surface them.

×

Confidence without correctness. Simple lookups worked. Multi-step reasoning across scheduling, compliance, and vendor history produced answers that were articulate, specific, and wrong.
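The relational gap described above can be made concrete. The sketch below is illustrative only (the data model and names are hypothetical, chosen to mirror the entities in this case study, not the provider's actual schema): a flat status lookup reports "Ready" while a traversal across related records surfaces the real blockers.

```python
# Hypothetical sketch: why a flat status lookup misses relational blockers.
from datetime import datetime

sites = {"TX-0412": {"status": "Ready", "mw": "MW-88", "vendor": "V-07"}}
windows = {"MW-88": {"expires": datetime(2024, 3, 1)}}  # already expired
vendors = {"V-07": {"hse_clearance_valid": False}}      # lapsed clearance

def flat_lookup(site_id):
    # What the text-to-SQL pilot effectively did: read one table.
    return sites[site_id]["status"]

def relational_check(site_id, now):
    # What relational reasoning requires: follow links to adjacent blockers.
    site = sites[site_id]
    blockers = []
    if windows[site["mw"]]["expires"] < now:
        blockers.append("maintenance window expired")
    if not vendors[site["vendor"]]["hse_clearance_valid"]:
        blockers.append("vendor HSE clearance lapsed")
    return ("Blocked" if blockers else site["status"]), blockers

print(flat_lookup("TX-0412"))                            # prints Ready
print(relational_check("TX-0412", datetime(2024, 4, 1)))
```

The same site answers "Ready" and "Blocked" depending on whether the query can see beyond its own table, which is the failure mode the pilot hit.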

03

The Intelligence Warehouse approach:
encode how the business thinks.

Instead of pointing an LLM at databases, we built a Business Knowledge Graph that encodes the entities, relationships, metrics, and decision rules that experienced operators carry in their heads.

[Diagram: Business Knowledge Graph with typed Entity, Metric, and Decision nodes (42 entities, 18 metrics, 12 decision rules, 6 domain clusters), populated via Knowledge Elicitation with Morrie: adaptive conversational sessions with domain experts.]

The graph wasn't built by reading documentation or reverse-engineering schemas. It was populated through structured conversational sessions with stakeholders using Morrie, an adaptive system that conducts Socratic-style interviews, asks progressively sharper domain questions, and constructs graph nodes in real-time as experts describe how the business actually works.

14 sessions across program managers, field leads, and HSE supervisors. Each session elicited entities, relationships, thresholds, and exception logic that no documentation captured.
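The elicitation-to-graph flow can be sketched minimally: typed nodes, directed relationships, and per-node provenance so every node stays traceable to the session that produced it. The API below is hypothetical; the production graph schema is not described in this case study.

```python
# Illustrative sketch, not the production system: a knowledge graph
# where every node records which elicitation session produced it.
graph = {"nodes": {}, "edges": []}

def add_node(name, kind, session_id):
    # kind is one of the node types in the graph: entity, metric, decision.
    graph["nodes"][name] = {"kind": kind, "source_session": session_id}

def add_edge(src, relation, dst):
    graph["edges"].append((src, relation, dst))

# Knowledge elicited in one session, recorded with its provenance
# (session number is hypothetical).
add_node("MW_Approval_Status", "entity", session_id=3)
add_node("Site_Readiness", "metric", session_id=3)
add_edge("MW_Approval_Status", "gates", "Site_Readiness")

# Provenance lookup: which session contributed this node?
print(graph["nodes"]["MW_Approval_Status"]["source_session"])  # prints 3
```

The `source_session` field is what makes the claim "every node traceable to a specific conversation" operational: any rule the agent fires can be walked back to the expert statement that justified it.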

Morrie: When a site shows "Ready" in the tracker but field work hasn't started, what's the most common reason? Walk me through how you'd diagnose that.

Expert: Nine times out of ten it's a maintenance window issue. The site is ready from our side, but the MW either expired or was never approved for that week. Second most common is the vendor doesn't have HSE-cleared crew available for that market.

[3 nodes created: MW_Approval_Status → Site_Readiness, Crew_HSE_Clearance → Market_Availability, Blocker_Taxonomy updated]

Morrie: So a site can be "Ready" in the system but effectively blocked. Is the threshold for MW expiry a hard cutoff, or is there a grace period? And does the PM get notified automatically?

Expert: Hard cutoff. 72 hours. No grace. And no, there's no automatic notification. PMs check a separate portal. That's where things slip through.

[Decision rule created: MW_Expiry_72hr → triggers Site_Block. Notification gap flagged as process risk.]

Morrie: Got it. So the 72-hour expiry is effectively a hidden blocker. Let me add that to escalation logic. Now, when a vendor's HSE clearance lapses mid-program, does that block just their sites, or all sites in that market?

14 sessions · 42 entities, 18 metrics, 12 decision rules captured · Every node traceable to a specific conversation
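The 72-hour expiry captured in the session above is representative of the decision rules the graph encodes. A minimal sketch, assuming the rule fires on hours elapsed since window approval (the rule name comes from the transcript; the evaluation API is illustrative):

```python
# Illustrative encoding of the elicited rule MW_Expiry_72hr -> Site_Block.
from datetime import datetime, timedelta

MW_EXPIRY_CUTOFF = timedelta(hours=72)  # hard cutoff, no grace period

def mw_expiry_rule(mw_approved_at, now):
    """Return the triggered action, or None while the window is valid."""
    if now - mw_approved_at > MW_EXPIRY_CUTOFF:
        return "Site_Block"
    return None

approved = datetime(2024, 5, 1, 9, 0)  # hypothetical approval timestamp
print(mw_expiry_rule(approved, approved + timedelta(hours=71)))  # None
print(mw_expiry_rule(approved, approved + timedelta(hours=73)))  # Site_Block
```

Once encoded, the rule also closes the notification gap the expert flagged: the block is computed from the graph rather than waiting for a PM to check a separate portal.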

04

First use case: Site Execution Intelligence

Which sites need attention now? What's blocking them? What should we do? The first agent deployed answered the hardest question in the program.

Decision Accuracy
96.1%
Validated against decisions made by senior PMs on the same site set.
Previous pilot: ~55%
Time to Production
6 weeks
From kickoff to production, including knowledge elicitation, graph build, data mapping, and agent deployment.
Blocker Identification
--
Blocked sites correctly identified with root cause and recommended next action.
Existing dashboards: ~60%
Daily Time Saved
--
Per program manager. Previously spent cross-referencing systems and calling field teams.
05

Build the graph once.
Deploy each subsequent use case at a fraction of the cost.

The Intelligence Warehouse is the compounding asset. The first use case is the investment. Everything after rides on the knowledge already encoded.

Use Case                                             BKG Reuse   Accuracy   Time to Live   Cost
Site Execution Intelligence                          Baseline    96.1%      6 weeks        100%
  Blocker detection, prioritization, next-action
HSE Compliance Prediction                            76%         95.4%      13 days        9%
  Predict likely audit failures by site and vendor
Maintenance Window Optimization                      81%         95.8%      10 days        8%
  Scheduling, conflict detection, expiry alerts
Vendor Performance & Assignment                      84%         96.2%      9 days         7%
  Throughput, SLA breach prediction, crew matching
Relative Implementation Cost
Site Execution
100%
HSE Compliance
9%
MW Optimization
8%
Vendor Performance
7%

Why It Compounds

The cost collapse is structural. Each subsequent use case reuses the same ontology, the same metrics layer, and most of the same decision logic. Only net-new decision rules and additional entity relationships need to be built.

6
New entities
for use cases 2-4
7
New metrics
for use cases 2-4
9
New decision rules
for use cases 2-4

Versus the 42 entities, 18 metrics, and 12 decision rules already in the graph from use case 1.
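The compounding arithmetic can be checked directly from the figures quoted in this section: use cases 2 through 4 together added 6 entities, 7 metrics, and 9 decision rules on top of the 42, 18, and 12 already captured for use case 1.

```python
# Reuse arithmetic using only the figures quoted in this section.
existing = {"entities": 42, "metrics": 18, "decision rules": 12}
added    = {"entities": 6,  "metrics": 7,  "decision rules": 9}

reuse = {}
for kind, have in existing.items():
    total = have + added[kind]
    reuse[kind] = have / total
    print(f"{kind}: {have} of {total} already in the graph ({reuse[kind]:.0%})")
```

Even decision rules, the category with the most net-new work, were majority-reused, which is why incremental cost lands under 10% per use case.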

Summary

The Intelligence Warehouse is the compounding asset.

Four use cases. One knowledge foundation. Each one faster and cheaper to ship, with accuracy holding above 95%.

4
Use cases
in production
95.9%
Average decision
accuracy
~11d
Avg. time for
subsequent use cases
~8%
Avg. cost per
additional use case