Diwo
Free Guide · 18 Pages · 2026 Edition

Guide to Decision Intelligence in the Agentic Era.

Why dashboards stop short of decisions — and what an agent-first Decision Intelligence platform actually does. The closed loop, the Semantic Knowledge Graph, the Catalyst + Decide spectrum, and a worked customer example with real before/after accuracy numbers.

01 · Dashboard
The reader interprets the chart. The decision happens later, somewhere else.
02 · Chat answer
The agent answers. The user still has to decide what to do.
03 · Decision card
CONFIDENCE: HIGH · Approve · Modify
A ranked action, with confidence and evidence. The decision is the deliverable.

Dashboard → Inbound chat agent → Outbound decision card

01 · Introduction

The 30% wasn’t the real problem.

Business intelligence was supposed to democratize data. Give everyone dashboards. Let the business answer its own questions. “Self-service analytics.” Sound familiar?

After two decades and billions in tooling, the honest assessment is: it mostly didn’t work. Gartner has been saying for years that BI adoption rates plateau at around 30% of knowledge workers. MotherDuck’s recent guide to agentic BI hammers the same point. The other 70% never logged in.

But the 30% number is a red herring. The deeper problem isn’t adoption. It’s purpose.

Even when that 30% uses dashboards well, what comes out the other side isn’t a decision. It’s a screenshot pasted into a Slack message. A number cited in a meeting. A figure on a slide deck. Somewhere downstream, eventually, someone decides what to do.

Dashboards were never the job. They were a means to the job. The job has always been: make the next decision, ship it, measure it, iterate.

This is the gap legacy BI never closed — and it’s the gap that an agent-native Decision Intelligence platform exists to close.

The reporting trap

The classic BI vendor pitch is “single source of truth.” But truth, in business, is the input — not the output. A monthly revenue chart is not a decision. It’s a fact about the past. Whether to discount Brand A by 8% next week, hold a price, or shift promo spend to Brand B — that’s the decision. The chart helps. It doesn’t decide.

The lag from chart to action

Even when the chart is right, the lag is long. Analyst spots an anomaly. Forwards a screenshot to a category manager. Schedules a meeting. Meeting discusses three options. Decision gets made (or postponed). Decision gets implemented in a pricing system or a planogram or a media plan. Two weeks have passed.

Two weeks is a long time when the underlying conditions change weekly.

Stuck in spreadsheets

Once the dashboard delivers the chart, work moves to a spreadsheet. The category manager exports the data, joins it with what they know from last quarter’s tests, adds a few assumptions, plays out three scenarios. The work that actually drives the decision lives in a .xlsx file that nobody else can see.

The pattern is consistent: the analytics tool answers a question. The decision happens somewhere else. And the loop between the answer and the next answer is mediated by humans, meetings, and spreadsheets.

02 · What changed

Three things changed in the last 18 months.

The agentic shift in BI is real — but it has three components, not two. MotherDuck’s guide nails the first two. The third is where Decision Intelligence pulls ahead.

01 · LLMs learned to write accurate SQL.

A year ago, asking an LLM to query a real warehouse was a parlor trick: plausible-looking SQL with wrong joins and hallucinated columns. Today, with well-structured data and the right context, accuracy is 80–95% on hard analytical questions. MotherDuck’s DABstep benchmark put their best configuration at 93.2% on 460 payment-processing queries — ahead of submissions from NVIDIA, Google Cloud, and AntGroup. This isn’t a Diwo number, but it’s a public, replicable result that anyone building agentic analytics depends on.

02 · Open protocols connected AI to live data.

The Model Context Protocol (MCP), function calling, OpenAI’s API patterns — these gave AI agents a standard way to discover what data exists, describe schemas, and execute queries on demand. Before MCP, every AI-to-data integration was custom plumbing. Now it’s a one-line connection.

03 · Closed-loop agents became possible.

This is the one that changes everything for decision intelligence — and it’s almost absent from the “agentic BI” conversation.

The first two shifts get you to answer the question. You ask, the agent fetches, you read. That’s a chat interface over a warehouse. Better than a dashboard, but still inbound.

The third shift gets you to make the decision and execute it. The agent doesn’t wait to be asked. It monitors continuously. It detects what changed. It generates ranked recommendations with confidence levels. It pushes approved actions into pricing systems, replenishment systems, CRMs. It measures the outcome, and the next decision is informed by the last one.

That’s the loop. And the loop is what BI never had.

03 · The four jobs

The four jobs of a Decision Intelligence platform.

A DI platform — versus a BI tool, versus a chat-over-SQL agent, versus a data warehouse — does four things, in sequence, on a loop.

[Diagram: Decision Intelligence closed loop — four jobs arranged in a cycle (01 Monitor: continuous, trigger-based · 02 Explain: in business language · 03 Recommend: ranked, with evidence · 04 Act: into systems of record) with the Semantic Knowledge Graph at the core.]

Fig 1.1 — Every closed loop runs through the Semantic Knowledge Graph.

01 · Monitor — continuously, with triggers

Not “refresh nightly.” Watch for the conditions that indicate a decision needs to be made. Run trigger logic. Detect opportunities and risks the same way the org’s most experienced analyst would, if they had infinite time and zero meetings.
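Trigger logic of this kind can be sketched in a few lines. The rule names, metric keys, and thresholds below are hypothetical, and a real platform would evaluate triggers against the warehouse through the semantic layer rather than an in-memory dict — this only illustrates the "watch for the conditions" idea:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Trigger:
    """A monitoring rule: fires when a metric matches a condition."""
    name: str
    metric: str
    condition: Callable[[float], bool]

def evaluate_triggers(latest: dict[str, float], triggers: list[Trigger]) -> list[str]:
    """Return the names of the triggers whose condition matches the latest metrics."""
    return [t.name for t in triggers
            if t.metric in latest and t.condition(latest[t.metric])]

# Hypothetical rules, echoing the guide's Brand A example:
triggers = [
    Trigger("promo-intensity-drop", "promo_intensity_vs_ly", lambda v: v < -0.25),
    Trigger("revenue-anomaly", "revenue_wow_change", lambda v: abs(v) > 0.10),
]

latest = {"promo_intensity_vs_ly": -0.30, "revenue_wow_change": -0.12}
fired = evaluate_triggers(latest, triggers)
# fired -> ['promo-intensity-drop', 'revenue-anomaly']
```

Each fired trigger is what hands off to the next job: explain why the condition occurred.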

02 · Explain — in business language

When something changes, explain why in the language of the business. Not “revenue dropped 12%” but “revenue dropped 12% because Q4 promo intensity on Brand A is 30% lower than last year, and Brand A drives 22% of category volume.” That answer requires the platform to understand causal structure, not just correlations.

03 · Recommend — ranked, with evidence

Don’t return “here are 47 charts.” Return “given this drop, here are the three actions with the highest expected lift, ranked. Each has a confidence level, an expected impact range, and the evidence behind it.” The recommendation is the deliverable — the charts are supporting evidence.

04 · Act — push to systems, measure outcomes

Push the chosen action back into the systems where work happens: pricing engines, replenishment systems, CRMs, ERPs, Slack approvals, scheduling tools. Measure the outcome. Update the model that made the recommendation. Close the loop.
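The four jobs compose into a single pass of the loop. A minimal sketch, with each stage as a pluggable function — every stage implementation here is a stub for illustration, not a Diwo API:

```python
from typing import Any, Callable, Optional

def decision_loop(
    monitor: Callable[[], list[dict]],              # 01: detect conditions needing a decision
    explain: Callable[[dict], str],                 # 02: why it happened, in business language
    recommend: Callable[[dict], list[dict]],        # 03: ranked actions with confidence
    act: Callable[[dict], Any],                     # 04: push the approved action, return outcome
    approve: Callable[[list[dict]], Optional[dict]],  # human (or policy) picks an action
) -> list[Any]:
    """One pass of the Monitor -> Explain -> Recommend -> Act loop."""
    outcomes = []
    for event in monitor():
        event["explanation"] = explain(event)
        actions = recommend(event)
        chosen = approve(actions)
        if chosen is not None:
            # The outcome is recorded so the next cycle's model can learn from it.
            outcomes.append(act(chosen))
    return outcomes

# Minimal run with stub stages (all values invented):
outcomes = decision_loop(
    monitor=lambda: [{"signal": "Brand A revenue -12% WoW"}],
    explain=lambda e: "Promo intensity on Brand A is 30% below last year",
    recommend=lambda e: [{"action": "restore promo depth", "confidence": 0.82}],
    act=lambda a: {"executed": a["action"]},
    approve=lambda actions: actions[0] if actions else None,
)
# outcomes -> [{'executed': 'restore promo depth'}]
```

The point of the shape: `act` only ever sees an approved action, and its return value is what closes the loop.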

What “platform” actually means here

You can solve any one of those four jobs with a point tool. Tableau monitors (loosely). ChatGPT explains (sometimes). Spreadsheets recommend (badly). Workflow tools act (mechanically). None of them close the loop, because closing the loop requires all four jobs to share the same semantic model of the business: the same definitions of revenue, the same understanding of “Brand A,” the same threshold for “this is anomalous.”

That shared semantic model is the platform. The four jobs are interfaces on top of it.

| Job | BI dashboard | Chat-and-chart | DI platform |
| --- | --- | --- | --- |
| Monitor | Manual refresh | User-initiated only | Continuous, trigger-based |
| Explain | Reader interprets chart | Text + visual answer | Causal, in business language |
| Recommend | Out of scope | Implicit at best | Ranked, with evidence |
| Act | Out of scope | Out of scope | Pushed to systems, measured |
| Shared semantic model | None | Schema + comments | Semantic Knowledge Graph |
Table 1 — Closing the loop requires all four jobs to share a single semantic model.
04 · The data layer

The data layer is still the leverage point.

Here’s the part that matters most, and it’s less glamorous than the AI headlines suggest.

The single biggest determinant of whether agents — anyagents, inbound or outbound — work well with your data isn’t the model, the prompt, or the agent framework. It’s the data itself.

Catalyst showing the SQL query it ran to answer the user's question
Fig 1.2 — Catalyst returns the SQL it ran. Trust isn’t claimed — it’s shown.

This is universally true. Whether you’re feeding a dashboard, an inbound chat agent, or a closed-loop Decision Intelligence platform, the same investments matter:

01 · Compact, well-named schema.

If column names are clear, tables are well-organized, and joins are obvious, the agent (and the human) figures out the rest. fct_orders joined to dim_customers on customer_id is self-explanatory. Star schemas. Fact and dimension tables. A boring, well-modeled schema is worth more than any amount of metadata engineering.
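As a concrete (hypothetical) example of a boring, well-modeled schema, here is the fct_orders / dim_customers pattern in miniature, using SQLite for portability. The join is obvious enough that an agent — or a human — can write the rollup unaided:

```python
import sqlite3

# A small star schema: one fact table, one dimension, one obvious join key.
# Table and column names are illustrative.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dim_customers (
        customer_id   INTEGER PRIMARY KEY,
        customer_name TEXT NOT NULL,
        region        TEXT NOT NULL
    );
    CREATE TABLE fct_orders (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES dim_customers(customer_id),
        order_date  TEXT NOT NULL,   -- ISO-8601; one row per order (the grain)
        net_revenue REAL NOT NULL    -- net of returns, reporting currency
    );
""")
con.executemany("INSERT INTO dim_customers VALUES (?, ?, ?)",
                [(1, "Acme", "EMEA"), (2, "Globex", "AMER")])
con.executemany("INSERT INTO fct_orders VALUES (?, ?, ?, ?)",
                [(10, 1, "2026-01-05", 120.0), (11, 2, "2026-01-06", 80.0)])

# The self-explanatory join from the text, as an agent would write it:
rows = con.execute("""
    SELECT d.region, SUM(f.net_revenue) AS revenue
    FROM fct_orders f
    JOIN dim_customers d ON d.customer_id = f.customer_id
    GROUP BY d.region
    ORDER BY d.region
""").fetchall()
# rows -> [('AMER', 80.0), ('EMEA', 120.0)]
```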

02 · Comment the confusing stuff.

Not every column. Just the columns with implicit semantics (“NULL means matches all values”), business logic that isn’t apparent from the name, or non-obvious grain. SQL column comments are read by both humans and agents — they become the platform’s primary source of business context.

03 · Build views for complex business logic.

If analysts regularly need a multi-table join with business rules, build a view that encapsulates it. The agent reads the view through schema introspection and queries it directly. It never has to reconstruct the underlying logic, which is where most errors happen.
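A sketch of the same idea, again in SQLite: the view encapsulates the multi-table join and a made-up Tier-1 filter once, so the agent queries the view instead of reconstructing the logic each time. Table names and the business rule are illustrative:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE fct_sales (sku TEXT, store TEXT, week TEXT, units INTEGER, revenue REAL);
    CREATE TABLE dim_skus  (sku TEXT, brand TEXT, tier INTEGER);

    -- Define the join and the business rule once. The agent discovers
    -- v_brand_weekly via schema introspection and queries it directly.
    CREATE VIEW v_brand_weekly AS
    SELECT d.brand, s.week, SUM(s.units) AS units, SUM(s.revenue) AS revenue
    FROM fct_sales s
    JOIN dim_skus d ON d.sku = s.sku
    WHERE d.tier = 1                -- hypothetical rule: Tier-1 SKUs only
    GROUP BY d.brand, s.week;
""")
con.executemany("INSERT INTO dim_skus VALUES (?, ?, ?)",
                [("SKU-1", "Brand A", 1), ("SKU-2", "Brand A", 3)])
con.executemany("INSERT INTO fct_sales VALUES (?, ?, ?, ?, ?)",
                [("SKU-1", "S01", "2026-W01", 10, 100.0),
                 ("SKU-2", "S01", "2026-W01", 99, 999.0)])  # Tier-3: excluded by the view

rows = con.execute("SELECT brand, week, units, revenue FROM v_brand_weekly").fetchall()
# rows -> [('Brand A', '2026-W01', 10, 100.0)]
```

The Tier-3 row never leaks into the answer, because the filter lives in the view, not in each query the agent writes.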

04 · Document business rules and decision context.

This is where Decision Intelligence diverges from chat-and-chart.

For an inbound agent, documentation = “what does this column mean?” For a DI platform, documentation = that, plus:

  • KPI definitions — not just “revenue” but “revenue, net of returns and trade allowances, in reporting currency, for shipments invoiced by month-end”
  • Business rules — “when same-store sales decline week-over-week for 3+ consecutive weeks in a region, flag for review”
  • Decision context — “for a Category Manager, an out-of-stock event on a Tier-1 SKU is a P1; the same event on a Tier-3 SKU is a P3 unless margin > X”

This isn’t analyst documentation. It’s the input to the decision engine.
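A business rule like the same-store-sales example above is exactly the kind of thing that can be encoded as a small, testable function. The 3-week window mirrors the rule in the bullet; everything else is illustrative:

```python
def flag_for_review(weekly_sss: list[float], min_weeks: int = 3) -> bool:
    """Flag a region when same-store sales decline week-over-week
    for min_weeks consecutive weeks (the rule from the guide)."""
    streak = 0
    for prev, curr in zip(weekly_sss, weekly_sss[1:]):
        streak = streak + 1 if curr < prev else 0
        if streak >= min_weeks:
            return True
    return False

flag_for_review([102.0, 101.0, 99.5, 98.0])   # three consecutive declines -> True
flag_for_review([102.0, 101.0, 103.0, 98.0])  # streak broken by week 3 -> False
```

Once the rule is data (or code) in the semantic layer rather than tribal knowledge, both the inbound and the outbound agents can apply it identically.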

05 · Keep it simple.

Resist the urge to build elaborate retrieval systems, multi-agent architectures, or self-improvement loops before exhausting simpler approaches. MotherDuck’s controlled testing found that a single well-constructed prompt with views and macros outperformed a multi-agent system. Start with the simplest thing that could work. Add complexity only when you’ve proved the simple approach has hit its ceiling.

What the Semantic Knowledge Graph adds

For Diwo specifically, this data layer gets one more turn of the screw. KPI definitions, business rules, decision context, threshold logic — these aren’t stored alongside the data. They’re stored as a graph that overlays the data. The same SKG that answers a Catalyst chat question is the one that triggers a Decide opportunity. Defined once, used everywhere.

Catalyst (inbound chat) · Decide (outbound agent)
Semantic Knowledge Graph — KPIs · business rules · decision context · thresholds
Warehouse — Snowflake · BigQuery · Postgres · Databricks

Fig 1.3 — The SKG is the thickest layer. Both products read through it.

05 · Two ends of the spectrum

Catalyst + Decide: the two ends of the agent spectrum.

Diwo’s bet is that organizations need both ends of the agent spectrum, served by the same semantic layer.

Diwo Agentic Workflows UI showing inbound (Catalyst) and outbound (Decide) agents on the same semantic layer
Fig 1.4 — Inbound agents feed the SKG. Outbound agents act on it.
Catalyst · Inbound

“Let me check this number.” The ad-hoc questions, the deep dives, the exploratory analysis. It’s the BI replacement.

Decide · Outbound

“Tell me what to do about it.” The proactive recommendations, the closed-loop optimization, the should-we-act questions. It’s the BI replacement’s replacement — what comes after you’ve replaced your dashboards with a chat agent and realized you still aren’t making faster decisions.

Most organizations need both.

Catalyst: the inbound agent

A user types a question. Catalyst:

  1. Discovers the relevant tables in the Semantic Knowledge Graph
  2. Generates SQL that respects business rules and KPI definitions
  3. Executes against the live warehouse
  4. Picks the right visualization (table, bar, line, geographic, etc.)
  5. Returns the answer with the chart, suggested follow-ups, and the SQL that produced it

The shape of the answer matters as much as the answer itself. A revenue question gets a number plus context (vs last period, vs forecast). A trend question gets a chart with an annotated inflection point. A “why did X drop” question gets a causal breakdown, not just a “revenue went down” restatement.
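Step 4 of that pipeline — picking the right visualization — can be approximated with a toy heuristic. This is not Catalyst’s actual selection logic, just an illustration of mapping question intent and result shape to an answer format:

```python
def pick_visualization(question: str, result_shape: tuple[int, int]) -> str:
    """Toy chart-type picker: uses the question's wording and the
    result's (rows, cols) shape. Keywords and rules are invented."""
    q = question.lower()
    rows, cols = result_shape
    if rows == 1 and cols == 1:
        return "number + context"      # single KPI: show vs last period / forecast
    if any(w in q for w in ("trend", "over time", "weekly", "monthly")):
        return "line"
    if any(w in q for w in ("why", "driver", "cause")):
        return "causal breakdown"
    if rows <= 12:
        return "bar"
    return "table"

pick_visualization("How did revenue trend monthly in 2025?", (12, 2))  # -> 'line'
pick_visualization("Why did Brand A drop?", (5, 3))  # -> 'causal breakdown'
```

The design point survives even in the toy: a “why” question should never come back as a bare restatement of the number.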

Catalyst answering a business question with chart, evidence, and SQL
Fig 2.1 — Catalyst answers with evidence, not prose. SQL, chart, and follow-ups travel together.

What Catalyst does well: depth of questioning, fast exploration, putting the data team’s expertise into every employee’s hands.

What Catalyst doesn’t do: tell you what to do without being asked. That’s Decide’s job.

Decide: the outbound agent

Nobody asks Decide a question. Decide watches.

Continuously, against the live business. Trigger logic encoded in the Semantic Knowledge Graph fires when conditions match a known decision pattern. When a trigger fires, Decide:

  1. Frames the opportunity — what changed, why it matters, who owns it
  2. Generates ranked actions — typically 2–4 recommendations, each with expected lift, confidence, and the evidence behind it
  3. Surfaces it to the owner — in the right channel (Catalyst inbox, Slack, ERP, email)
  4. Captures the decision — approve, reject, modify, snooze
  5. Executes approved actions into the systems of record
  6. Measures the outcome and feeds it back into the SKG
Decide opportunity card showing a recommendation with confidence, evidence, and approve / modify buttons
Fig 2.2 — A Decide opportunity card. Title, owner, ranked actions, confidence, evidence — the deliverable is the decision.

The key feature is evidence. A recommendation without evidence is a guess. A recommendation with the underlying numbers, the causal logic, and the past outcomes of similar actions — that’s something a category manager can defend in a meeting.
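The shape of an opportunity card can be captured in plain data structures. The fields below mirror the card in Fig 2.2 — title, owner, ranked actions, confidence, evidence — while the specific actions and numbers are invented:

```python
from dataclasses import dataclass, field

@dataclass
class RankedAction:
    action: str
    expected_lift: str     # e.g. "+2-3% category revenue"
    confidence: float      # 0..1
    evidence: list[str]    # the numbers, causal logic, and past outcomes

@dataclass
class OpportunityCard:
    title: str
    owner: str
    what_changed: str
    actions: list[RankedAction] = field(default_factory=list)

    def top_action(self) -> RankedAction:
        """Highest-confidence recommendation — the headline of the card."""
        return max(self.actions, key=lambda a: a.confidence)

card = OpportunityCard(
    title="Brand A revenue down 2 weeks running",
    owner="Category Manager, Beverages",
    what_changed="Q4 promo intensity 30% below last year",
    actions=[
        RankedAction("Restore promo depth on top 5 SKUs", "+2-3% cat. revenue", 0.82,
                     ["Brand A drives 22% of category volume",
                      "similar action lifted revenue in a past period"]),
        RankedAction("Shift promo spend to Brand B", "+1-2% cat. revenue", 0.61,
                     ["Brand B showed higher promo responsiveness"]),
    ],
)
card.top_action().action  # -> 'Restore promo depth on top 5 SKUs'
```

Note that evidence is a first-class field, not an afterthought: strip it out and the card degrades into exactly the unsupported guess the text warns against.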

Why both, on the same SKG

The same semantic layer that powers a Catalyst chat (“why did Brand A drop?”) is what triggers a Decide opportunity (“Brand A is down for the second week — here’s what we usually do”). Defined once, consumed by both surfaces. Same KPI definition, same business rules, same decision context.

This isn’t a feature — it’s the architecture. Two products that share a semantic layer behave coherently. Two products that don’t, drift.

06 · The team shift

What this means for analytics teams.

The role-shift is real, and it’s bigger than the chat-and-chart vendors have admitted.

What disappears

Building one-off dashboards in response to stakeholder requests. The “can you pull this for me?” queue. The dashboard graveyard — 800 stale Tableau workbooks nobody opens. The half of an analyst’s week spent on production work that the agent can now handle.

What emerges

Decision design. What does a good “Brand A inventory replenishment” decision look like? What signals trigger it? What variables matter? What’s the threshold for “this needs a human’s eyes” vs “this can auto-execute”? What’s the expected lift, the confidence interval, the rollback condition?

This is the new craft. It’s surprisingly close to product design — defining the surface of an interaction, the inputs, the success criteria. And it scales the way dashboards never did: a well-designed decision flow runs ten thousand times, not once.

What stays

The data engineering fundamentals. Clean schemas. Documented business logic. Good naming. Pre-computed views for complex joins. The unglamorous work that makes everything else possible.

If anything, agentic analytics raises the value of data engineering. A poorly modeled schema gave you a slow dashboard. A poorly modeled schema gives an agent confidently wrong answers — which is worse.

The new team shape

Most analytics teams will have three roles, not five.

Data engineers

Schemas, models, pipelines, views. Same as before, more valued.

Decision designers

Write the Catalyst prompts that codify domain expertise, design the Decide flows that operationalize playbooks.

Analytics engineers

Bridge the two. Build the semantic models, document the KPIs, maintain the SKG as the business evolves.

Two roles fade: dashboard builders (subsumed by Catalyst), and report-pullers (subsumed by both).

This is consistent with MotherDuck’s research on the inbound side, where they found “data quality and documentation beat prompt engineering” by a wide margin. Diwo’s customer cases reinforce it on the outbound side: the most successful Decide rollouts aren’t the ones with the most sophisticated models, they’re the ones with the most carefully designed decision flows.

Stop building dashboards for every request. Build the decision layer that lets any tool — dashboard, agent, notebook, ERP plugin — serve the answer.
07 · Scaling agentic analytics

A note on scaling agentic analytics.

Agents change the load profile of your data warehouse.

In the dashboard world, queries are predictable: morning refreshes, scheduled reports, the occasional ad-hoc query. Maybe 50 concurrent queries at peak, sized for that.

In the agentic world, every user can ask anything at any time, and every question is novel SQL. If 500 people in your org adopt Catalyst, that’s potentially thousands of concurrent novel queries — none of them pre-cached, none of them pre-optimized.

Three things break in that transition:

01 · Cost

Sizing for peak load with a traditional shared warehouse means paying 24/7 for capacity you use 5% of the time.

02 · Isolation

One user’s runaway query slows down everyone else (the classic noisy-neighbor problem, but worse because agents amplify it).

03 · Predictability

Agent-generated queries are bursty, irregular, and occasionally expensive.

The MotherDuck guide makes the case for hypertenancy — per-user lightweight compute instances. We agree with the diagnosis. The architectural answer depends on your warehouse — Snowflake’s per-warehouse model, Databricks’ serverless SQL, BigQuery’s slot reservations, MotherDuck’s Ducklings, etc. all approach this differently.
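The per-user-lane idea can be illustrated with a toy admission gate: cap each user’s concurrent agent queries so one user’s burst can’t starve everyone else. Real isolation happens at the warehouse or compute layer; this sketch only shows the policy, and all names are invented:

```python
import threading
from collections import defaultdict

class PerUserQueryGate:
    """Toy noisy-neighbor guard: each user gets their own lane with a
    fixed number of concurrent query slots."""
    def __init__(self, per_user_limit: int = 2):
        self.limit = per_user_limit
        self._lock = threading.Lock()
        self._gates: dict[str, threading.Semaphore] = defaultdict(
            lambda: threading.Semaphore(self.limit))

    def try_acquire(self, user: str) -> bool:
        """Admit the query if the user has a free slot; never block others."""
        with self._lock:
            gate = self._gates[user]
        return gate.acquire(blocking=False)

    def release(self, user: str) -> None:
        """Return the slot when the user's query finishes."""
        self._gates[user].release()

gate = PerUserQueryGate(per_user_limit=2)
admitted = [gate.try_acquire("alice") for _ in range(3)]
# admitted -> [True, True, False]: alice's third concurrent query is refused,
# while bob's lane is untouched: gate.try_acquire("bob") -> True
```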

Diwo’s Catalyst and Decide both run as a layer above your warehouse, not a replacement for it. We don’t dictate your compute architecture — we work within it, and we keep the decision logic out of the hot path so the warehouse only handles what it has to. (Trigger evaluation, SKG lookups, and decision orchestration run on Diwo’s infrastructure; only the actual data queries hit your warehouse.)

Agents are a new class of workload on your warehouse. Plan for them — by isolating them, sizing them, and giving them their own lane.
08 · For data leaders

What this means for data leaders.

The shift to agent-first analytics isn’t a rip-and-replace. It’s a reallocation of effort. The data leaders who move now have a meaningful head start.

Invest in the semantic layer, not dashboard proliferation

The highest-ROI investment available today is preparing your data warehouse — and the semantic layer that overlays it — for agents.

  • Clean schemas
  • Documented KPI definitions
  • Decision context as first-class data (thresholds, owners, escalation paths)
  • Trigger logic for the decisions you make most often

This work pays dividends whether you adopt agent-driven analytics next month or in two years. A well-modeled semantic layer makes every existing dashboard better and makes every future agent better. Even if you never roll out Decide, the documentation you built for it makes your existing Tableau workbooks more correct.

Treat agents as first-class consumers

Your data governance, access controls, and compute architecture should account for agent workloads today, not someday. Agents need the same schema discovery, data access, and query execution paths as human analysts — just at higher concurrency and with less predictable patterns.

This means your warehouse needs to handle per-user isolation, not just per-team. Your access controls should work through standard protocols (MCP, OpenAPI), not just through BI tool integrations. Your compute architecture needs to spin up and down with the bursty, unpredictable patterns that agent workloads create.

Make the decision layer explicit

This is the Diwo-specific recommendation, and it’s the one most BI vendors won’t tell you to do.

Pick five decisions your business makes regularly. Not “answer questions” — decisions. Inventory replenishment for top SKUs. Price changes for promotional periods. Account assignment for new leads. Renewal risk flagging for top customers. Hiring approval for headcount requests.

For each one, write down: what triggers the decision, what variables matter, what the playbook is today, what a “good” outcome looks like, and what the threshold is for human review vs auto-execute.

That document — five decisions, fully specified — is the input to a Decision Intelligence platform. If you can’t write it, you don’t have an agentic analytics problem. You have a decision-design problem. Fix that first.
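What “fully specified” looks like in practice can be as simple as a structured record per decision. Every value below is illustrative; the point is that all five fields from the exercise are filled in before any platform gets involved:

```python
# One of the five decisions, written down as plain data (values invented).
replenishment_decision = {
    "name": "Inventory replenishment for top SKUs",
    "trigger": "on-hand units < 2 weeks of forecast demand for any Tier-1 SKU",
    "variables": ["on-hand units", "forecast demand", "lead time", "margin"],
    "current_playbook": "planner reviews a weekly report, raises POs manually",
    "good_outcome": "stock-out rate under target on Tier-1 SKUs "
                    "without raising holding cost",
    "review_threshold": "auto-execute small POs; human review above a set value",
}

# A complete spec answers all five questions from the exercise:
required = {"name", "trigger", "variables", "current_playbook",
            "good_outcome", "review_threshold"}
assert required <= replenishment_decision.keys()
```

Five of these records, honestly filled in, are worth more to a rollout than any amount of model tuning.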

The data team’s role evolves

It’s almost cliché at this point: data teams aren’t going away, but their value will shift.

Less time building and maintaining dashboards. More time curating data models, building views, writing context documentation, designing decision flows. The work that makes agents — both inbound and outbound — accurate and trustworthy.

This isn’t a threat to data team headcount. The opposite. Every well-designed decision flow scales the data team’s expertise across the entire organization. One analyst’s intuition about “when is a Brand A drop a P1 vs a P3” becomes a trigger that fires every time the conditions match. That’s leverage that didn’t exist in the dashboard era.

The data team becomes the team that makes the entire organization’s analytics self-service actually work — not by building every dashboard, but by building the foundation that agents rely on.
09 · The original promise

The original promise, finally within reach.

BI was supposed to give knowledge workers access to data. It mostly didn’t.

Self-service analytics was supposed to let everyone answer their own questions. It mostly didn’t.

The agentic era closes those gaps. Agents that query live data, generate custom visualizations, and answer follow-ups in natural language lower the barrier to “asking” to near zero. No training required. No dashboard to find. No viewer license to provision. Just ask.

But the deeper shift — the one that makes this moment different from every previous “BI revolution” — is the closed loop. The decision doesn’t end at the chart. It ends at the action. And the action’s outcome flows back to make the next decision better.

[Diagram: Decision Intelligence closed loop — four jobs arranged in a cycle (01 Monitor: continuous, trigger-based · 02 Explain: in business language · 03 Recommend: ranked, with evidence · 04 Act: into systems of record) with the Semantic Knowledge Graph at the core; each outcome feeds the model.]

Fig 3.1 — The loop closes. Outcomes inform the next decision.

That’s the original promise of data-driven decision-making. The technology wasn’t ready. Now it is.

The question for data leaders isn’t whether agents will transform analytics. It’s whether your organization is ready when they do.

The dashboard was a means to an end. The end was always a better decision.
Try it

Ready to see what an agent-first DI platform actually does?

Try Diwo Catalyst with your own data. Free 15-day trial. No credit card. No sales call required.

Or read more at /decide and /catalyst

Frequently asked

The questions readers ask.

What is Decision Intelligence?

Decision Intelligence (DI) is the practice of using AI, data, and behavioral science to turn information into decisions — not just insights. A BI tool tells you what happened; a DI platform tells you what to do next, projects the dollar impact, validates the strategy with AI, and pushes the approved action into your operational systems. The output of a DI platform is a recommended action with quantified impact, alternative strategies, and an execution pathway — not a chart.

How is a Decision Intelligence platform different from a BI tool?

A BI dashboard refreshes on a schedule and asks a reader to interpret a chart. A DI platform monitors continuously, explains changes in business language, recommends ranked actions with confidence and evidence, and pushes approved actions into systems of record (pricing engines, replenishment systems, CRMs). The four jobs — monitor, explain, recommend, act — all share the same semantic model, which is what closes the loop between insight and outcome. BI never had that loop.

What is the Semantic Knowledge Graph?

The Semantic Knowledge Graph (SKG) is the layer between an enterprise's data warehouse and its agents. It encodes KPI definitions, business rules, decision context, ownership, and threshold logic — defined once and consumed by every agent surface. In Diwo's architecture, the same SKG that answers a Catalyst chat question is the one that triggers a Decide opportunity. The SKG, not the LLM, is the leverage point: customers see a jump from ~62% to 94% answer accuracy after 6–8 weeks of encoding business context into it.

What's the difference between Catalyst (inbound) and Decide (outbound) agents?

Catalyst is the inbound agent: a user asks a question and Catalyst discovers the right tables, generates SQL, executes it, picks a visualization, and answers with chart + follow-up suggestions. Decide is the outbound agent: nobody asks it anything — it monitors continuously, fires when conditions match a known decision pattern, generates ranked recommendations with confidence and evidence, and pushes approved actions into systems of record. Most enterprises need both, served by the same Semantic Knowledge Graph so they behave coherently.

Why does the data layer matter more than the model?

MotherDuck's DABstep benchmark — 460 hard payment-processing queries — showed raw schema scoring 29.8% accuracy and a simple prompt with well-named views and macros scoring 93.2%. Same model. 64 percentage points of difference. The same investments — compact schema, comments on the confusing stuff, views for complex joins, documented KPI definitions and business rules — pay off whether you're feeding a dashboard, an inbound chat agent, or a closed-loop DI platform. The agent framework is downstream of the data layer.

What does a closed-loop agent actually mean?

A closed-loop agent doesn't stop at answering a question. It (1) monitors continuously, (2) explains what changed in business language, (3) recommends ranked actions with confidence and evidence, (4) pushes approved actions into systems of record, and (5) measures the outcome — which feeds back into the model that made the recommendation. The loop is what BI never had: every action's outcome informs the next decision. Chat-and-chart agents only solve steps 1 and 2.

How do agentic workloads change my warehouse architecture?

Agents change the load profile. Where BI generated predictable, mostly cached queries, agents generate novel SQL on demand from every user. Three things break in that transition: cost (sizing for peak with a traditional shared warehouse means paying 24/7), isolation (one runaway agent query slows everyone else down), and predictability (agent queries are bursty and irregular). Plan for hypertenancy or per-user lightweight compute — Snowflake per-warehouse, Databricks serverless SQL, BigQuery slot reservations, MotherDuck Ducklings — and keep decision orchestration off the hot path so the warehouse only handles data queries.

Does Decision Intelligence replace Business Intelligence?

No. DI sits on top of BI. Most enterprise DI deployments use the same warehouse, the same certified metrics, and the same semantic layer as the existing BI stack. BI continues to serve descriptive analytics and ad-hoc exploration. DI adds a new top layer: a ranked queue of opportunities, AI briefings, what-if simulation, AI-validated alternatives, and outbound execution. BI becomes the data plane; DI becomes the decision plane.