Do we even know which AI and analytics use cases we operate, who is responsible for them, and which obligations follow?

When this steering mechanism is missing, two things typically happen: use cases grow unnoticed into sensitive processes, and compliance becomes a case-by-case project. Neither scales.

Note: This is not legal advice, but a technical, practice-oriented perspective.

Inventory First

AI use cases are rarely a clean product catalog.

They appear as dashboards with embedded logic, notebook prototypes, assistants, scoring models inside workflows, or automation steps that are only later recognized as "AI".
Without an inventory, consistent prioritization and repeatable decision-making become difficult.

From everyday inics practice:
We frequently see many small applications that no one counts as a "system" - until one of them moves into a sensitive process. Then the catch-up phase begins. Owners, purpose, data flows, versions, and risk logic must be reconstructed retrospectively.

What the EU AI Act Makes Operationally Relevant

As soon as a use case is likely to qualify as High-Risk, governance becomes operational. Classification depends on the specific use case and its context — not on the mere existence of a data platform. In such cases, lifecycle risk management and clearly defined responsibilities along the value chain are required.

For this to work in practice, roles must be clear. Under the EU AI Act:

- Provider: develops an AI system (or has one developed) and places it on the market or puts it into service under its own name or trademark.
- Deployer: uses an AI system under its authority in a professional context.

Also important to note: under certain conditions, a Deployer can effectively assume Provider obligations (Art. 25), for example in cases of re-branding or substantial modification.

For many organizations, this is less a documentation issue than a steering issue:

Who classifies, who operates, and who provides the evidence?
And above all: Who acts as Provider, who as Deployer, and which obligations follow from that?

The technical implementation of evidence (lineage, versioning, logging) was described in our Blog Part 2: Traceability by Design. Here, the focus is on which use cases require evidence and who is responsible for it.

Target State: An AI and Analytics Portfolio That Supports Decisions

The goal is not a perfect register. The goal is a working portfolio that enables three things:


1. Triage
Is this an AI system at all, and if so, in which risk category are we operating?
As pragmatic guidance, the Commission's guidelines on the definition of an "AI system" are helpful; a minimal sketch of how the triage outcome can be recorded follows this list.

2. Ownership
 Who is responsible for what along the value chain, and where could roles shift toward Provider?

3. Path
Which minimum requirements apply to the specific use case, particularly if it is High-Risk, and how are they maintained in operation?
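
To make these three questions operational in a portfolio script, the triage outcome can be recorded in a consistent shape. A minimal Python sketch; the category names are ours, and the actual classification remains a judgment made against the Act and the Commission's guidelines, not something code can decide:

```python
from enum import Enum

class RiskCategory(Enum):
    NOT_AI = "not an AI system"
    PROHIBITED = "prohibited practice"
    HIGH_RISK = "high-risk"
    OTHER = "minimal or limited risk"

def record_triage(is_ai_system: bool, prohibited_practice: bool,
                  annex_iii_context: bool) -> RiskCategory:
    """Capture a triage decision that humans have already made.

    The inputs are answers to the triage questions above,
    not something this function can determine on its own.
    """
    if not is_ai_system:
        return RiskCategory.NOT_AI
    if prohibited_practice:
        return RiskCategory.PROHIBITED
    if annex_iii_context:
        return RiskCategory.HIGH_RISK
    return RiskCategory.OTHER
```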

The Core Artifact: Use Case Card Instead of a Monster Register

As an inics pattern, we recommend not building a large "governance database," but creating a Use Case Card for each relevant application. It should be concise, but complete enough to support decisions. And it can easily be attached to existing data or product catalogs.

A minimal structure that has proven effective (a machine-readable sketch follows the list):

- Purpose and process context
 Where does the result take effect? Which decisions are influenced?

- Owner and roles
 Who is responsible for business logic, operations, and changes - including escalation paths?

- Classification
 AI system yes/no, risk category, and a short justification, including reclassification triggers such as purpose changes

- Interfaces and dependencies
 Which data and systems are connected - at a level that enables reviews

- Operational requirement reference
 Which evidence is required and where it is stored (e.g., Evidence Pack from Part 2)
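
As a sketch of how such a card can be kept machine-readable next to an existing catalog entry (field names below are illustrative, not prescribed by the Act):

```python
from dataclasses import dataclass, field

@dataclass
class UseCaseCard:
    """Minimal Use Case Card, mirroring the five sections above."""
    name: str
    purpose: str                   # where the result takes effect, decisions influenced
    business_owner: str            # responsible for business logic
    operator: str                  # responsible for operations and changes
    escalation_path: str
    is_ai_system: bool
    risk_category: str             # e.g. "high-risk"
    classification_rationale: str  # short justification
    reclassification_triggers: list[str] = field(default_factory=list)  # e.g. purpose change
    interfaces: list[str] = field(default_factory=list)  # connected data and systems
    evidence_location: str = ""    # e.g. link to the Evidence Pack from Part 2
```

Serialized as YAML or JSON, such a card attaches cleanly to a data or product catalog entry instead of living in a separate governance database.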

Tip from our AI specialist Thomas
Attaching cards instead of building a parallel governance universe reduces friction and increases adoption.

When It Becomes High-Risk: Three Points Teams Often Underestimate

1. FRIA as a Practical Lever

For certain Deployers and specific High-Risk deployments, a Fundamental Rights Impact Assessment (FRIA) must be conducted before first use and updated when changes occur.
In practice, this provides a structured framework to clarify purpose, affected groups, human oversight, and mitigation measures.
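
One way to keep a FRIA from going stale is to version it next to the Use Case Card and make the update triggers explicit. A minimal sketch, with illustrative field names:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class FriaRecord:
    """Illustrative FRIA record; structure follows the practical framing above."""
    use_case: str
    purpose: str
    affected_groups: list[str]
    oversight_measures: list[str]
    mitigations: list[str]
    assessed_on: date

def fria_needs_update(record: FriaRecord, current_purpose: str) -> bool:
    # A change of purpose is a typical trigger for revisiting the FRIA.
    return record.purpose != current_purpose
```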


2. Deployer Obligations Are Operational, Not Paperwork

Deployers have operational obligations. These include:

- Use according to instructions
- Human oversight
- Monitoring
- Handling of input data
- Retention of automatically generated logs under their control

This is the point where platform and data teams are almost always involved in reality - even if they are not the business owner.
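
Several of these obligations can be wired into operations instead of being handled as paperwork. As one example, log retention becomes a checkable property. A sketch under the assumption that the Deployer controls the log store and that the Act's six-month floor applies (Art. 26(6); confirm the period against applicable law and contracts):

```python
from datetime import datetime, timedelta, timezone

RETENTION_FLOOR = timedelta(days=183)  # roughly six months; adjust to your legal assessment

def logs_cover_retention_window(log_timestamps: list[datetime]) -> bool:
    """Check that retained logs reach back at least the retention floor.

    `log_timestamps` are timezone-aware timestamps read from whatever
    log store the Deployer controls; this verifies coverage, not content.
    """
    if not log_timestamps:
        return False
    oldest = min(log_timestamps)
    return datetime.now(timezone.utc) - oldest >= RETENTION_FLOOR
```

A system that has been live for less than the floor will fail this check even if nothing was deleted; treat that as a prompt to verify deletion policies rather than as an incident.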


3. Registration and Incident Pathways Require Clear Responsibility

Certain High-Risk systems must be registered in the EU database. Incident handling pathways must also be defined. Deployers must address risks and incidents and inform Providers or authorities. Providers must report serious incidents to market surveillance authorities.
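
The routing itself is simple once responsibility is assigned; what breaks in practice is that no one owns it. A sketch that encodes the split described above (illustrative, not a complete reading of the reporting rules):

```python
def incident_recipients(role: str, serious: bool) -> list[str]:
    """Who informs whom, per the responsibility split above."""
    if role == "deployer":
        # Deployers address risks and inform the Provider; serious cases
        # may also have to reach the authorities.
        return ["provider", "market surveillance authority"] if serious else ["provider"]
    if role == "provider":
        # Providers report serious incidents to market surveillance authorities.
        return ["market surveillance authority"] if serious else []
    raise ValueError(f"unknown role: {role!r}")
```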

A Pragmatic Start Without a Governance Mega-Project

This is how we often start in projects - without launching a new large-scale program:

- Establish the portfolio:
 Collect top use cases, cluster them, and clarify scope - including hidden AI in dashboards and workflows

- Assign owners:
 One responsible person per use case plus a clear escalation path

- Define class-based paths:
 For each risk category, define a minimal requirement set and reference how evidence is provided (e.g., via the Evidence Pack from Part 2) - a rough sketch follows below
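
A "class-based path" can literally be a small mapping that every Use Case Card references. The entries below are a deliberately rough illustration, not a complete obligations list:

```python
MINIMUM_REQUIREMENTS = {
    "high-risk": [
        "FRIA before first use (where applicable)",
        "human oversight defined",
        "log retention under Deployer control",
        "Evidence Pack maintained (see Part 2)",
        "EU database registration checked",
    ],
    "limited-risk": ["transparency notices in place"],
    "minimal-risk": ["Use Case Card kept current"],
}

def path_for(risk_category: str) -> list[str]:
    # Fail loudly on unknown categories instead of silently assigning no requirements.
    return MINIMUM_REQUIREMENTS[risk_category]
```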

Typical inics scenario: An HR dashboard evolves into ranking logic and later into candidate pre-selection. At that point, it must be clearly justified how the use case is classified, which oversight applies, and who assumes which obligations. Annex III serves as orientation for potential High-Risk areas.


Conclusion

Pressure rarely arises directly from the legal text. It arises when a use case moves into a sensitive process and questions about responsibility, classification, and evidence suddenly surface. With a portfolio, Use Case Cards, and clearly defined roles, this becomes manageable. Governance then remains lightweight and sustainable in daily operations.
