Week 1 · Days 1–5

Build the base graph from public data and industry knowledge.

IW already carries a deep map of how a CPG company works — SKU pyramids, channels, trade promo, distributor economics, festivals, monsoon. NorthCo binds to that map. We then layer in everything that's already public about NorthCo — annual report, investor deck, org chart, product catalog, twelve months of press — so the graph reshapes into NorthCo specifically. The output is a base graph: structurally complete, no internal data yet.

depends on
Nothing. This is the start.
produces
Base graph · CPG supergraph bound to iw://northco/ + enterprise V0: 14 BUs, 11 categories, 2,140 SKUs, 4 zones, 4 named competitors. ~45 nodes total. Carried forward into Week 2 as the thing SMEs work on.

What happens

  1. Pick the industry, sub-vertical, geography. CDO selects CPG → Foods + Personal Care → India. Takes 15 minutes.
  2. The CPG brain activates. 8,400 standard things every CPG has — SKU hierarchy, channel types, promo mechanics, festival calendar, weather effects on demand — all become available as templates.
  3. IW reads NorthCo's public footprint. Annual report (320 pages), Q3 investor deck, public org chart, product catalog scrape, 12 months of press releases.
  4. The CPG template reshapes into NorthCo. Generic "Brand → Pack → SKU" becomes NorthCo's actual brands. Generic "geographic zones" becomes NorthCo's actual North/South/East/West. Competitors get named: Haldiram, PepsiCo, ITC Foods, Britannia.
  5. 30-min CDO review. The CDO flags and corrects anything the public sources got wrong (typically 2–4 corrections).
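The template-binding step can be pictured as overlaying extracted public facts onto generic slots. A minimal Python sketch; the slot names and structures are illustrative, not IW's actual representation:

```python
# Hypothetical sketch of binding a generic CPG template to an enterprise.
# Slots keep their generic defaults until a public source fills them.
GENERIC_CPG_TEMPLATE = {
    "sku_hierarchy": ["Brand", "Pack", "SKU"],
    "zones": ["Zone A", "Zone B"],   # placeholder geography
    "competitors": [],               # filled from public sources
}

def bind_template(template: dict, public_facts: dict) -> dict:
    """Overlay extracted public facts onto the generic template."""
    bound = dict(template)
    for slot, values in public_facts.items():
        if values:                   # keep the generic default otherwise
            bound[slot] = values
    return bound

northco_v0 = bind_template(GENERIC_CPG_TEMPLATE, {
    "zones": ["North", "South", "East", "West"],
    "competitors": ["Haldiram", "PepsiCo", "ITC Foods", "Britannia"],
})
```

The generic SKU hierarchy survives untouched here; Week 2 is where SMEs extend it.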

Who does what

IW does the loading and extraction (4 days wall-clock, runs in background). The CDO spends 1 hour total — 15 min to pick the industry, 30 min to review the V0, 15 min to confirm the four corrections.

Public sources used

PDF · Annual Report 2024 · 320 pp
PDF · Investor Day Q3'25
WEB · Org chart · 14 BUs
WEB · Product catalog · 2,140 SKUs
RSS · Press releases · 12 mo
JSON · LinkedIn org · 4,800 employees
NorthCo · Week 1 outcome
45 nodes in the standing graph (24 industry patterns + 21 enterprise nodes). CPG supergraph bound at iw://northco/. Foods + Personal Care sub-verticals active. 14 BUs, 11 categories, 2,140 SKUs, 4 zones, 4 named competitors. CDO time spent: 1 hour. IW wall-clock: 4 days, 6 hours.
Week 2 · Days 6–10

SMEs and consultants thrash out the graph with NorthCo's stakeholders.

The base graph from Week 1 is roughly right but generic. Now SMEs and consultants thrash it out with NorthCo's actual stakeholders — Anuj (demand), Vinod (supply chain), Ritu (category) — fixing what's wrong, adding what's missing, and confirming what's specific to how NorthCo operates. IW asks only the questions that move the graph; the SMEs answer in plain conversation. Three stakeholders × two sessions × ~80 min each. No code, no schema design.

depends on
Week 1's base graph — there has to be something concrete to thrash out. The CPG industry knowledge tells IW which questions are worth asking.
produces
Refined graph · 5-level SKU hierarchy (added Variant + Pack-Config), 4 zones × 7 states, 6 named competitors, 6 distribution centres, owner mappings on every node. ~70 nodes total. Carried forward into Week 3 as the structure that needs data.

How IW asks only useful questions

  1. It scores every part of the V0 for uncertainty. Where is the binding weak? Where do public sources contradict? Where does the CPG template offer multiple options?
  2. It generates only the highest-value questions. Each question is scored by how much the graph would change based on the answer. Top 12 get asked.
  3. It routes each question to the right person. Demand questions go to the demand lead. Supply questions to supply chain. Categories to category managers.
  4. It schedules around their day jobs. 80-min sessions, two per SME, spread across the week.
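The scoring-and-routing loop above can be sketched in a few lines. Everything here is hypothetical shape: the score (expected graph change weighted by uncertainty) and the owner map are assumptions, not IW internals:

```python
# Illustrative value-scored question routing; all names hypothetical.
from dataclasses import dataclass

@dataclass
class CandidateQuestion:
    text: str
    expected_graph_delta: float   # estimated nodes/edges changed by an answer
    uncertainty: float            # 0..1, how weak the current binding is
    domain: str                   # "demand", "supply", "category", ...

OWNER_BY_DOMAIN = {"demand": "anuj.k", "supply": "vinod", "category": "ritu"}

def top_questions(candidates, k=12):
    """Rank by uncertainty-weighted expected change; route to the domain owner."""
    ranked = sorted(candidates,
                    key=lambda q: q.expected_graph_delta * q.uncertainty,
                    reverse=True)
    return [(q.text, OWNER_BY_DOMAIN[q.domain]) for q in ranked[:k]]
```

Under this scoring, a question with a huge potential delta but near-zero uncertainty ranks below a moderate-delta question about a genuinely weak binding.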

Real session — Anuj K., Demand Lead

IW: Most CPG firms in your sub-vertical add a Variant level for trade promotion accounting. Add to NorthCo?
Anuj K.: Yes — and we also need Pack-Configuration below Pack, used only for modern trade.
IW: Logged. Apply to all 14 BUs or only Foods + Personal Care?
Anuj K.: Foods + Personal Care only. Hotels and Paperboards run different.

What got added

  • Two new SKU hierarchy levels — Variant and Pack-Configuration
  • Six named distribution centres — Mumbai, Pune, Delhi, Chennai, Bengaluru, Kolkata
  • Seven state-level geographies under the four zones
  • Two more named competitors — Parle, Mondelez
  • Owner mappings — who in NorthCo owns each part of the graph
NorthCo · Week 2 outcome
+25 new nodes (graph now 70). 12 candidate questions generated; Anuj resolved 7, Vinod 3, Ritu 2. SKU hierarchy extended from 3 to 5 levels. SME time across 3 people: 7 hr 48 min. Zero code written.
Week 3 · Days 11–15

Connect data to every node — IW writes its own connectors and resolves conflicts. (Hydration.)

The refined graph from Week 2 has structure but no real numbers in it yet. This week IW hydrates every node — plugs into all 38 of NorthCo's systems (SAP, Oracle, Snowflake, Excel, Tableau, PowerBI, MCP servers), writes the connector code itself, and stops when two systems disagree on the same number. The SME picks which source is canonical. Then the ontology compiles: 184 derived metrics, full lineage on every one, no recomputation at query time.

depends on
Week 2's refined graph — IW has to know what nodes exist before it can fill them with data, and which SME owns each node before it can route a conflict for resolution.
produces
Hydrated graph + compiled ontology · 47k populated object instances, 312k edges, 184 derived metrics, 100% lineage coverage, 23 source-of-truth decisions logged. ~150 nodes total. Carried forward into Week 4 as the data Morrie reads anomalies from.

Sources connected (all 38)

ERP · SAP S/4HANA
ERP · Oracle EBS (legacy plants)
DWH · Snowflake (sales + promo)
XLS · Excel · Plant Controller (×4)
BI · Tableau · 32 dashboards
BI · PowerBI · 18 reports
MCP · Promo server
MCP · Weather feed
API · Nielsen · monthly
API · Kantar · syndicated panel

28 more sources abbreviated · full list at iw://northco/sources/
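One plausible shape for the connector registry consulted during hydration; the entry structure is an assumption, though the table and field names match the worked example below:

```python
# Hypothetical connector-registry entries: which system field feeds
# which graph node. Structure assumed; table/field names from the text.
REGISTRY = [
    {"node": "cogs_per_sku", "system": "SAP S/4HANA",
     "table": "/BIC/COGS_M", "field": "STD_COST"},
    {"node": "cogs_per_sku", "system": "Oracle EBS",
     "table": "CST_ITEM_COSTS", "field": "ITEM_COST"},
    {"node": "cogs_per_sku", "system": "Excel",
     "table": "cogs_master.xlsx", "field": "cost"},
]

def candidate_sources(node: str):
    """Every registered system field that maps to the given graph node."""
    return [entry for entry in REGISTRY if entry["node"] == node]
```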

Worked example: how one node — cogs_per_sku — gets populated

To make the data ingestion concrete, here is exactly how IW populates a single node from three disagreeing systems.

Node: cogs_per_sku · why it matters

Cost of goods per SKU per period. Feeds into contribution_margin, ebitda_per_sku, promo_roi, the demand forecast, and three OTP agents. If this is wrong, everything downstream is wrong.

1. IW finds three candidate sources. Looking at the connector registry, three systems have fields that map to "cost per SKU":
  • SAP S/4HANA · table /BIC/COGS_M · field STD_COST — finance's standard cost (monthly close)
  • Oracle EBS · table CST_ITEM_COSTS · field ITEM_COST — the legacy plants that haven't migrated to SAP
  • Excel · plant ctrl · workbook cogs_master.xlsx — the plant controller's reconciliation sheet, manually maintained
2. IW pulls a sample from each. For SKU NCF·SNK·CRSP·150g in period 2026-03:
SAP S/4HANA    ₹148.20
Oracle EBS    ₹152.80
Excel · plant ctrl    ₹147.60
The three sources disagree by up to 3.5%. Default tolerance is ±5%, so this almost passes silently — but cost feeds margin, and margin feeds 17 other places, so IW raises a conflict.
3. IW asks Anuj which is the source of truth, rather than silently picking one. The question goes to the SME mapped to cogs_per_sku in Week 2: Anuj K. (Demand Lead, also accountable for margin reporting).
IW: Three sources disagree on cogs_per_sku by 3.5%. SAP=₹148.20, Oracle=₹152.80, Excel=₹147.60. Which is canonical?
Anuj K.: SAP. Finance owns the monthly close — that's the auditable number. Oracle is the old plants and they're rolling onto SAP this year. Excel is reconciliation, useful for cross-check but not source of truth.
4. IW writes the connector code itself. Based on Anuj's decision, IW emits a Python connector that pulls from SAP, cross-checks Oracle and Excel within ±5% tolerance, and reopens the conflict if any future sync goes outside that.

    # auto-written · iw://northco/connectors/cogs_per_sku.py
    def populate_cogs_per_sku():
        sap = sap_client.query(
            "SELECT MATNR sku_id, STD_COST cost_value, "
            "CURR_KEY currency, FISCPER period "
            "FROM /BIC/COGS_M WHERE FISCPER = LATEST_CLOSED()"
        )
        oracle = oracle_client.query(
            "SELECT INVENTORY_ITEM_ID, ITEM_COST FROM CST_ITEM_COSTS"
        )
        excel = excel_source.read("cogs_master.xlsx")
        # SAP is canonical (sot-014, anuj.k 2026-04-09)
        # Oracle + Excel kept as cross-checks at ±5%
        return reconcile(
            primary=sap,
            crosschecks=[oracle, excel],
            tolerance=0.05,
            on_breach="reopen_conflict",
        )
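The reconcile helper the generated connector calls is not shown in the snippet. A minimal sketch of the semantics described (canonical primary, cross-checks at a tolerance, reopen on breach), with the function shapes assumed:

```python
# Sketch of reconcile(): keep the canonical primary, compare every
# cross-check against it, and reopen the conflict on any breach.
# All signatures assumed; values are {sku_id: cost} mappings.
def reconcile(primary, crosschecks, tolerance, on_breach):
    breaches = []
    for sku, value in primary.items():
        for other in crosschecks:
            if sku in other and value and abs(other[sku] - value) / value > tolerance:
                breaches.append((sku, other[sku], value))
    if breaches and on_breach == "reopen_conflict":
        reopen_conflict("cogs_per_sku", breaches)   # pages the node owner
    return primary                                   # canonical values win

def reopen_conflict(node, breaches):
    print(f"conflict reopened on {node}: {len(breaches)} breach(es)")
```

With the sample values above, SAP vs Oracle differ by about 3.1%, inside the ±5% band, so the sync passes; tighten the tolerance and the conflict reopens.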
5. Anuj verifies the generated code. One click on "verify" in the IW UI. The connector is committed to version control and the node populates with SAP's value: cogs_per_sku = ₹148.20 · verified by anuj.k · 2026-04-09 14:22.
6. It now powers the rest of the graph. Every downstream node gets the canonical value with full lineage:
  • contribution_margin = list_price − cogs_per_sku − promo_spend
  • ebitda_per_sku uses it via the margin chain
  • promo_roi uses it to compute incremental margin per ₹ promo spent
  • The TradePromoPlanner agent (built in Week 5) reads it via SkuHealthTraversal
If SAP ever drifts beyond 5% from Oracle, the conflict reopens and Anuj is paged. Decision logged at iw://northco/decisions/sot/sot-014.

Same pattern, 22 more times

The same flow runs for the other 22 conflicts that exceeded tolerance — supplier lead time (SAP MM vs Oracle), DC capacity (WMS vs planning sheet), promo uplift (Tableau Nielsen-backed vs trade marketing's sheet), customer OTIF (PowerBI golden record vs SAP SD), distributor margin (trade terms doc vs Snowflake), and 17 more. Mean time-to-resolution: 18 minutes per conflict.

Then the ontology compiles

Once all base nodes are populated, IW computes the 184 derived metrics — forecast, fill rate, OTIF, days-on-hand, contribution margin, breakage rate, and so on. Each derived value stores its formula, the source nodes it came from, and the version of each source binding. Forward and reverse lineage is materialised — no recomputation at query time.
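A compiled derived metric can be pictured as a value materialised together with its formula and source bindings, so nothing recomputes at query time. This sketch reuses the contribution_margin formula from the worked example; the list_price and promo_spend figures and the storage shape are hypothetical:

```python
# Sketch of compiling one derived metric with lineage. The storage
# format is illustrative, not IW's actual representation.
def compile_metric(name, formula, sources):
    value = formula(**{k: v["value"] for k, v in sources.items()})
    return {
        "name": name,
        "value": value,                                   # materialised once
        "formula": formula.__name__,                      # how it was computed
        "lineage": {k: v["binding"] for k, v in sources.items()},
    }

def contribution_margin(list_price, cogs_per_sku, promo_spend):
    return list_price - cogs_per_sku - promo_spend

node = compile_metric(
    "contribution_margin",
    contribution_margin,
    {
        "list_price":   {"value": 210.00, "binding": "snowflake@v3"},    # assumed
        "cogs_per_sku": {"value": 148.20, "binding": "sap@sot-014"},
        "promo_spend":  {"value": 12.50,  "binding": "promo-mcp@v1"},    # assumed
    },
)
```

Reverse lineage falls out of the same record: ask which compiled nodes carry "sap@sot-014" in their lineage and you have everything cogs_per_sku feeds.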

NorthCo · Week 3 outcome
38 sources bound. 47,021 object instances populated. +80 new nodes in the standing graph (sources + base metrics). 184 reconciliation events; 23 exceeded tolerance and required SME resolution. Mean time-to-resolution: 18 minutes. 312,408 edges materialised. 184 derived metrics with 100% lineage coverage. Anuj's total time across the week: 16 hours.
Week 4 · Days 16–20

Morrie elicits tacit knowledge from practitioners — the rules they use that no system records.

The hydrated graph shows Mumbai breakage at 18% versus a 4% network average. The data doesn't say why. Morrie — IW's elicitation agent — asks Sneha, the practitioner who owns that node. She tells him: monsoon humidity warps the cardboard divider, the new ASRS aisles cause forklift hits on turn-3. That becomes a typed rule on the graph: +14% safety buffer for Mumbai monsoon glass. Twenty-three such sessions across NorthCo's practitioners. The graph absorbs the operating wisdom that lives in people, not systems. Then IW goes live.

depends on
Week 3's hydrated data — Morrie needs real anomalies to know what's worth asking about. Owner mappings from Week 2 tell Morrie who to ask for each node.
produces
Tacit-rule layer + live IW · 2,840 captured insights, 25 typed TacitRules, IW released as iw://northco/v1.0.0 with read access to 23 named users. ~175 nodes total. Carried forward into Week 5 as the rule set agents must honour.

How Morrie picks who to ask, what to ask

  1. Detect anomalies. For every metric, flag values that deviate from peer-group, supergraph priors, or historical baseline beyond a configured threshold.
  2. Generate one question per anomaly. Plain language. Includes the data context — so the practitioner knows what triggered the question.
  3. Route to the right practitioner. Based on owner mappings (from Week 2), org chart proximity, and prior session history. Morrie balances workload — never asks one person more than 4 questions in a week.
  4. Ask for a 15-minute window. No mandatory meeting. Async (typed) or live (voice transcribed). Practitioner chooses.
  5. Adapt as it goes. Morrie's next question depends on the previous answer. Graph state visible alongside the chat — practitioner can correct it directly.
  6. Commit as a typed rule. Each insight becomes a TacitRule with a trigger predicate, an action, an owner, and explicit links to the nodes it affects.
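Step 6's typed rule might be rendered like this. The field names are assumptions; tacit/418 from the session below is used as the example:

```python
# Hypothetical shape of a committed TacitRule: trigger predicate,
# action, owner, and explicit links to the nodes it affects.
from dataclasses import dataclass, field

@dataclass
class TacitRule:
    rule_id: str
    trigger: str                  # predicate over graph state
    action: str                   # adjustment applied when the trigger fires
    owner: str                    # practitioner accountable for the rule
    affects: list = field(default_factory=list)

rule_418 = TacitRule(
    rule_id="tacit/418",
    trigger="dc == 'mumbai' and season == 'monsoon' and material == 'glass'",
    action="safety_buffer += 0.14",
    owner="sneha.k",
    affects=["forecast_safety_buffer", "breakage_rate"],
)
```

Because the rule is typed rather than free text, Week 5's agents can honour it mechanically: any agent touching a node in `affects` checks the trigger before acting.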

Real session — Sneha K., Mumbai planner · 14 min

Anomaly: breakage_rate(dc=mumbai, sku∈glass_bottles) = 18.4% vs network mean 4.1% (z = 4.7)

Morrie: Mumbai DC shows 18% breakage on glass-bottle SKUs, vs 4% network average. The data doesn't tell me why. What's happening on the ground?
Sneha K.: Two things. New ASRS — tighter aisles, forklift hits pallets on turn-3. And monsoon humidity warps the cardboard divider, so bottles shift in transit.
Morrie: Logging. So forecast safety buffer +14% for Mumbai monsoon glass — apply to Pune too since same DC ships?
Sneha K.: Yes. Confirmed.

Some of the 25 typed rules captured

  • tacit/418 · Mumbai monsoon glass +14% safety buffer (Sneha)
  • tacit/422 · Festival demand 8-week pre-build pattern (Anuj)
  • tacit/491 · Demand planner override after long-monsoon weeks (Vinod)
  • tacit/507 · Personal Care humidity-sensitive SKU substitution (Ritu)
  • tacit/623 · Cricket schedule beverage spike, IPL window (marketing)
  • tacit/641 · South India filter-coffee seasonality (regional sales)
  • tacit/688 · Modern trade Q4 negotiation pattern (key accounts)
  • tacit/712 · Distributor channel-stuffing detection (sales ops)
  • tacit/734 · Diwali credit terms exception window (finance)
  • tacit/751 · ASRS Mumbai turn-3 forklift hit pattern (Sneha)
  • 15 more · full list at iw://northco/tacit/

Then IW goes live

End of week 4: validation passes confirm structural integrity, lineage completeness, and tacit-data linkage. IW is released as iw://northco/v1.0.0. Read access to 23 named users across demand, supply, and category. Write access only via SME-vetted action types. CDO + Demand Lead sign-off in 52 minutes.

NorthCo · Week 4 outcome
Morrie ran 23 practitioner sessions, mean session length 15.6 min. Captured 2,840 tacit insights committed as 25 typed rules. +25 new nodes in the graph (one per rule, linked to the metrics they affect). Total practitioner time: 5 hr 58 min. IW released as v1.0.0. Four weeks elapsed since kickoff.
Week 5 · Days 21–25

IW writes its own orchestrator, traversal, and planner agents. Your team builds custom agents on top.

With the live IW in hand, IW generates its own standing set of orchestrators (route a task into sub-tasks), traversals (walk a sub-graph and return a structured result), and planners (propose action sequences under a goal). 14 OTP agents in total — Demand-Forecast, SKU-Health, Stockout-Risk, Trade-Promo, and 10 more — derived from the graph itself and honouring the tacit rules from Week 4. Engineers approve 11 the same week; 3 stay in review. Then the agent builder opens to designated business users. Anuj ships his first custom agent in 18 minutes, no engineer required.

depends on
Week 4's live IW — every agent inherits the full graph, derivations, and typed tacit rules. The action-type registry from Week 3 defines what agents are permitted to do.
produces
OTP layer + agent builder · 14 OTP agents generated (11 live, 3 in review), agent builder open to 9 designated business users, first custom agent (MonsoonSubstitutionAdvisor) shipped. ~200 nodes total.

The three agent kinds

  • Orchestrator — receives a task or query, decomposes into sub-tasks, routes results. Example: DemandForecastOrchestrator.
  • Traversal — walks a defined sub-graph and returns a structured result. No external action. Example: SkuHealthTraversal — walks SKU → COGS, margin, velocity, breakage.
  • Planner — given a goal and constraints, proposes an action sequence. Calls action types under the user's permissions, never silently. Example: TradePromoPlanner.
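The traversal kind is the easiest to sketch: a pure walk over a declared sub-graph, no external actions. Node names follow SkuHealthTraversal; the graph access pattern and the velocity figure are assumptions:

```python
# Sketch of a traversal agent: read a sub-graph, return a structured
# result, take no external action. Graph shape is illustrative.
def sku_health_traversal(graph, sku_id):
    sku = graph[sku_id]
    return {
        "sku": sku_id,
        "cogs": sku["cogs_per_sku"],
        "margin": sku["contribution_margin"],
        "velocity": sku["velocity"],
        "breakage": sku["breakage_rate"],
    }

graph = {
    "NCF·SNK·CRSP·150g": {
        "cogs_per_sku": 148.20,
        "contribution_margin": 49.30,   # hypothetical
        "velocity": 1.8,                # hypothetical
        "breakage_rate": 0.041,
    },
}
```

Orchestrators compose such traversals and hand their structured output to planners, which is why a custom agent that inherits a traversal never re-encodes the walk.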

The 14 OTP agents IW generated

  • DemandForecastOrchestrator (live) · routes(query) → traversal(SKU, DC, Season) → planner(buffer)
  • SkuHealthTraversal (live) · walks(SKU) → cogs · margin · velocity · breakage
  • StockoutRiskOrchestrator (live) · monitors(DOH × demand_vol) → flags 7d ahead
  • TradePromoPlanner (in review) · plans(promo) → uplift · cannibalization · ROI
  • DistributorAllocationPlanner (in review) · allocates(stock, distributors) → fairness · velocity
  • PriceElasticityOrchestrator (live) · measures(price × demand) → elasticity by SKU/zone
  • MonsoonRiskOrchestrator (live) · honors(tacit/418) → flags monsoon-vulnerable SKUs
  • SecondarySalesTraversal (live) · walks(distributor → retailer → consumer)
  • FestivalReadinessPlanner (live) · honors(tacit/422) → 8-week pre-build plan
  • ModernTradeNegotiator (in review) · supports(buyer meetings) → margin · listing · activation

4 more abbreviated · full registry at iw://northco/agents/

Then the agent builder opens

Designated business users — initially 9 across demand, supply, category, and trade marketing — get the agent builder. Custom agents declare a trigger, the OTP agents they inherit, the actions they may invoke, and the owner. Inherited OTP agents carry their tool contracts and tacit-rule honoring forward — the new agent does not re-encode any of that.

Anuj's first custom agent — 18 minutes

CustomAgent(
    name     = "MonsoonSubstitutionAdvisor",
    trigger  = "forecast_var > 0.12 AND season == 'monsoon'",
    inherits = [SkuHealthTraversal, StockoutRiskOrchestrator, TradePromoPlanner],
    honors   = ["tacit/418"],            // Sneha's monsoon buffer rule
    actions  = [SuggestSubstitute, NotifyDemandDesk],
    output   = (substitute_sku, qty, channel, eta),
    owner    = "demand-planning@northco",
)

Live to the demand desk by lunch. The monsoon buffer (tacit/418) is honoured automatically because StockoutRiskOrchestrator already honours it — no re-encoding.

NorthCo · Week 5 outcome
14 OTP agents generated from the graph. Engineer review across 8 hours: 11 approved live, 3 in review. +23 new nodes in the graph (14 OTP + 9 custom). First custom agent shipped in 18 minutes with no engineering involvement. As of report date, NorthCo has shipped 9 custom agents on the builder; mean build time 27 minutes.