People as the Unofficial Memory of the Data Landscape

Over time, these people become the unofficial memory of the data landscape.

They know which source Finance trusts while Sales keeps using another one anyway. They remember the definition that changed last year but still appears in old reports. They know when a number is technically correct and still useless for the decision on the table. They know which dashboard everyone opens first, even though everyone also knows it needs interpretation.

And often enough, they are the only person in the room who can say with confidence:

“We can answer that, but not from this dataset.”

Everyone who has worked around BI, ERP, CRM or legacy reporting knows a version of this.

The IT gatekeeper for System X leaves. The colleague who understood the old mapping logic is gone. The person who knew why the “temporary” workaround became permanent moved to another department three years ago.

Suddenly, something that looked like a system becomes a memory problem. The data is still there. The reports are still there. The tables are still there. But the meaning has walked out of the building.

Why Governance Does Not End After the Clean-Up

Something similar happens after many governance initiatives.

For a while, everything looks cleaner. Definitions are aligned. Ownership is assigned. Reports are consolidated. The catalogue is updated. Old logic is documented. A few people protect the structure with real attention.

Then one or two key owners leave, change roles, or simply stop having enough time to defend the system. And slowly the old growth comes back.

A local report appears because the central one takes too long. A new field is added outside the agreed model because someone needs an answer quickly. A temporary workaround becomes part of the process. A metric gets adjusted in one department and not in another. The catalogue still exists, but it is no longer where reality is maintained.

Nobody breaks the system in one dramatic move. It just starts growing around the rules again.

That is why governance rarely ends with one clean-up project. The clean-up matters. Of course it does. But the harder part is protecting the meaning afterwards.

Data landscapes do not stay clean by themselves.

When Analytics Becomes Conversational, It Needs More Than Answers

This matters once people start talking about conversational or agentic analytics.

The hard work is rarely the question someone types into the interface. The hard work is the judgement that happens before a serious answer can be given.

In classical BI, a lot of that judgement stayed with people. A dashboard narrowed the question before the user arrived. A report already contained decisions about sources, filters and definitions. An analyst added the caveat in the meeting. A project team knew which figure was official, which one was closer to operational reality, and which one needed a phone call before anyone should build a decision on it.

Messy, but familiar.

Now more of that middle layer is expected to move into systems. A model can generate an answer. That part has become less impressive. The harder question is whether the answer has any right to sound confident.

The Real Question Is Not Whether the System Can Answer

If someone asks about margin, the system needs more than a column that looks relevant. It needs to know which margin logic the organisation actually uses in that context, whether certain cost elements are included, which revenue logic applies, which source is trusted, who owns the data product, and whether the person asking is allowed to see the underlying detail.

If the definition changed last year or has known exceptions, that has to be available too. These are old BI questions. They become harder to ignore when the interface becomes easier to use.

This is where catalogues, semantic layers, lineage, ownership, permissions and documentation become more than governance vocabulary. In a conversational or agentic analytics setup, they become part of the system’s working context.
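
To make that less abstract, here is a rough sketch of what such working context could look like if it were written down. Everything in it is invented for the example: the field names, the table finance.core.margin_monthly and the owner group are illustrations, not any product's schema.

```python
from dataclasses import dataclass, field

# Illustrative only: the context a system would need to resolve
# before answering a question about "margin" with any confidence.
@dataclass
class MetricContext:
    name: str                       # the business name of the metric
    definition: str                 # the wording the organisation agreed on
    trusted_source: str             # the dataset Finance actually signs off
    owner: str                      # who decides when definitions collide
    included_costs: list[str] = field(default_factory=list)
    known_exceptions: list[str] = field(default_factory=list)
    changed_recently: bool = False  # definition changes must stay visible

margin = MetricContext(
    name="contribution_margin",
    definition="Net revenue minus variable costs, as agreed in March",
    trusted_source="finance.core.margin_monthly",  # hypothetical table
    owner="finance-data-products",                 # hypothetical group
    included_costs=["material", "freight"],
    known_exceptions=["Intercompany revenue excluded since FY23"],
    changed_recently=True,
)
```

None of this is exotic. It is the old BI homework, written in a form a system can actually consume.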

The Catalogue Becomes the System’s Working Memory

Take Unity Catalog, or any serious data catalogue layer.

In a normal data setup, it helps organise access, lineage, metadata, ownership and permissions. Depending on the organisation, it may be well maintained, half maintained, or treated like something governance asked for and nobody really uses.

In an agentic setup, the consequences become bigger.

The system needs something reliable to lean on when a business question is ambiguous, badly documented or politically loaded. It needs to understand which source is trusted for a certain kind of question, which definition is current, which assumptions were documented, where lineage matters and where permission logic should stop the answer before it becomes misleading.

A weak catalogue is already annoying in classical BI. In agentic analytics, it becomes weak memory.
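
As a minimal sketch of what leaning on the catalogue can mean in practice: before the system treats a table as context, it can check whether the catalogue entry is complete enough to trust. The dict below is a stand-in for what a catalogue like Unity Catalog would expose through its APIs or information schema; the field names are illustrative, not the real response shape.

```python
# Required before a table is allowed to serve as answering context.
# Which fields count as "required" is an organisational decision.
REQUIRED = ("owner", "comment", "certified", "definition_version")

def catalog_gaps(metadata: dict) -> list[str]:
    """Return the catalogue fields that are missing or empty."""
    return [key for key in REQUIRED if not metadata.get(key)]

table = {
    "full_name": "finance.core.margin_monthly",  # hypothetical table
    "owner": "finance-data-products",
    "comment": "",            # nobody wrote down what this table means
    "certified": True,
    "definition_version": None,
}

gaps = catalog_gaps(table)
if gaps:
    # Weak catalogue entry, weak memory: the system should know this
    # before it phrases a confident answer on top of the table.
    print(f"Low-confidence context for {table['full_name']}: missing {gaps}")
```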

Brain, Memory and Skills Need to Become Operational

That is also why words like brain, memory and skills need to become concrete very quickly.

In enterprise analytics, the “brain” is rarely just the model. It is the model together with semantic context, source priorities, permissions, business rules, retrieval logic and the surrounding architecture. Without that, the system may sound fluent while still being wrong in business terms.
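
A deliberately naive sketch of that composition, with every function stubbed. This is not a reference architecture; it only shows the ordering, with the model call coming last, after definitions, source priorities and permissions have had their say.

```python
# All functions below are toy stand-ins for real components.

def resolve_definition(question: str) -> str | None:
    # In a real system: a semantic layer lookup.
    if "margin" in question.lower():
        return "net_revenue - variable_costs"
    return None

def trusted_source(definition: str) -> str:
    # In a real system: source priorities from the catalogue.
    return "finance.core.margin_monthly"  # hypothetical source

def permitted(role: str, source: str) -> bool:
    # In a real system: the permission model, checked before the model runs.
    return role in {"controller", "finance-analyst"}

def call_model(question: str, definition: str, source: str) -> str:
    # Only now does the language model get involved.
    return f"Answer to '{question}' using {definition} from {source}"

def answer(question: str, role: str) -> str:
    definition = resolve_definition(question)
    if definition is None:
        return "No agreed definition found. Escalate to the metric owner."
    source = trusted_source(definition)
    if not permitted(role, source):
        return "Not allowed to see the underlying detail."
    return call_model(question, definition, source)

print(answer("What is our margin this quarter?", role="controller"))
```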

Memory should not be a random chat history. Useful memory means controlled organisational context: approved definitions, corrected assumptions, project decisions, known exceptions, user feedback and the things the organisation already learned the hard way.
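
Sketched as a structure, the difference between memory and a chat log is that every entry is typed, dated and attributable to someone who approved it. The entries below are invented examples.

```python
# Curated organisational memory: not a transcript, but approved facts.
memory = [
    {"kind": "approved_definition", "approved_by": "finance-data-products",
     "date": "2024-02-01", "text": "Margin excludes freight since FY24"},
    {"kind": "corrected_assumption", "approved_by": "sales-ops",
     "date": "2024-09-15", "text": "Region DACH includes CH since Q3"},
    {"kind": "known_exception", "approved_by": "data-platform",
     "date": "2023-11-20", "text": "ERP rows before 2019 lack cost splits"},
]

def recall(kind: str) -> list[str]:
    """Only approved, attributable entries are eligible as context."""
    return [m["text"] for m in memory if m["kind"] == kind and m["approved_by"]]

print(recall("known_exception"))
```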

Skills also need limits. A skill might query a data product, explain a metric, check lineage, suggest a mapping, generate a test, create a ticket, flag a data quality issue or draft a follow-up analysis. All of that can be useful. It also means someone has to decide where those skills stop.
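
One way to make that decision explicit is an allow-list with a hard stop, sketched below with invented skill names. Where the line between read-only, review-required and forbidden actually sits is the organisational decision, not the code.

```python
READ_ONLY = {"query_data_product", "explain_metric", "check_lineage"}
NEEDS_REVIEW = {"create_ticket", "flag_quality_issue"}   # side effects
FORBIDDEN = {"write_to_source_system", "change_definition"}

def run_skill(name: str) -> str:
    if name in FORBIDDEN:
        raise PermissionError(f"Skill '{name}' is out of scope by design.")
    if name in NEEDS_REVIEW:
        return f"'{name}' queued for human review."
    if name in READ_ONLY:
        return f"'{name}' executed."
    raise ValueError(f"Unknown skill '{name}': not on the allow-list.")

print(run_skill("explain_metric"))   # runs
print(run_skill("create_ticket"))    # waits for a person
```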

Sometimes, Not Answering Is the Better Answer

That is where the topic becomes operational.

Sometimes the system should answer. Sometimes it should show that the definition is unclear. Sometimes it should point out that two sources disagree. Sometimes it should ask for an ownership decision. Sometimes it should refuse to produce a number that looks precise but is not decision-grade.

That refusal behaviour is not a weakness. It is part of making the system useful.
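
As a sketch, refusal is just another code path, checked before any number is produced. The conditions are illustrative; the point is that "no answer" is a designed outcome, not an error.

```python
def guard(definition_current: bool, sources_agree: bool, owner_known: bool) -> str:
    # Each check maps to one of the situations above.
    if not definition_current:
        return "Refuse: the definition is unclear. Confirm with the owner first."
    if not sources_agree:
        return "Refuse: two trusted sources disagree. Showing both instead."
    if not owner_known:
        return "Refuse: no ownership decision exists for this metric."
    return "Answer."

print(guard(definition_current=True, sources_agree=False, owner_known=True))
```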

What Clean Demos Often Leave Out

A lot of clean demos skip this part. They show the question and the answer. They rarely show the source-system compromise behind the metric, the report everyone uses but nobody fully trusts, the definition that changed in a steering committee and never made it into the semantic model, or the permission issue that only appears when someone asks the question from the wrong role.

Those details decide whether the answer is usable. The old BI work comes back with more pressure behind it.

A weak semantic layer turns into fragile answers. Unclear ownership leaves the system guessing. Messy permission rules become easier to trip through natural language. An outdated catalogue gives the system less context than everyone assumed during the demo. Political definitions do not become neutral because a model phrases the answer nicely.

The disagreement may simply surface faster.

That can be useful. It can also be uncomfortable, because it exposes the gap between what the organisation thinks is explicit and what is actually only socially understood.

Agentic Analytics Also Needs Maintenance

And then there is the maintenance problem.

If an agentic system learns from corrections, feedback and approved definitions, someone has to protect that learning. Otherwise the same old pattern repeats itself. The system starts clean, people work around it, exceptions accumulate, ownership gets blurry, and after a while nobody is quite sure whether the answer reflects the agreed logic or just the latest workaround.

That is the part nobody sees in the demo, but it decides whether the system is still useful six months later.

Context needs maintenance. Memory needs ownership. Skills need limits. And the system needs a way to say: “I do not know this well enough to answer responsibly.”

The Old BI Weakness Becomes More Visible

Good BI teams already know this.

Meaning has to be protected. Definitions decay when nobody owns them. Governance does not fail only because the framework was wrong. It often fails because the few people protecting it stop having the time, authority or mandate to keep doing so.

Agentic analytics puts that weakness under more pressure.

The organisation already has analytics agents today. They are the people who know the caveats, the history, the exceptions and the unofficial rules around the numbers.

The next step is deciding which parts of that knowledge can safely move into systems, which parts need better documentation, which parts belong in catalogues and semantic layers, and which parts still need human judgement.

The Question Box Is Only the Visible Part

A question box is the visible part.

The difficult work starts earlier: in the definitions, the catalogue, the ownership, the permissions, the memory of previous decisions and the limits around what the system is allowed to do.

Everyone who has seen a data landscape decay after a governance project knows why this matters. Old ambiguity does not disappear because the interface gets better. It just gets distributed faster.
