
Inessa Gerber on the Data Layer Behind Autonomous Finance

In this conversation, we spoke with Inessa Gerber, Director of Product Management at Denodo, about the data layer behind autonomous finance, why semantic abstraction matters more than raw infrastructure in enterprise AI, and what must change before agentic systems can reliably operate on live financial data.

You’ve spent much of your career working at the point where enterprise data becomes usable. What first made this layer of the stack worth building your career around?

The transition from raw data to usable business context is where technology becomes operationally meaningful. Raw data is like a raw natural resource: valuable, but not directly usable. Turning raw data into usable business-oriented data products without changing the underlying data itself is what makes this layer so interesting. This is where the business merges with technology. 


Your Kinfos talk centers on the data and memory infrastructure behind autonomous AI in financial services. What felt missing in most of the current conversation that made you frame it that way?

When it comes to autonomous AI, there seems to be a misconception that data and infrastructure are the only key requirements. The heart of a successful AI initiative lies in consumable data. Without business context, autonomous agents cannot understand the data, which is why we see so many AI initiatives fail beyond the pilot stage. Things work at limited scope but don't scale. You have to start with a solid data foundation before you embark on the AI journey.

A lot of financial firms are experimenting with agents on top of fragmented systems. In practice, what tends to break first when those agents move beyond demos and start touching real workflows?

The first break occurs in identifying the right data. Every financial firm manages thousands of ETL pipelines, creating copies upon copies of data sets, and making those datasets accessible to AI without centralized semantics is extremely difficult. AI agents are designed to carry out specific tasks, not to go hunting for the right data set. The second breaking point is merging multiple datasets while maintaining performance at scale. Agents are not designed for this task either, and without an underlying logical data fabric, things will not scale beyond your pilot. The third critical breakpoint is governance at scale. Unless you have a unified governance layer, you can't simply delegate governance and security to the underlying data sources or push them onto the AI agents. It doesn't scale.

Denodo has been building a logical data layer for years. In an agentic AI setting, what does that approach make possible that still gets much harder when teams rely on moving or copying data first?

Copying data for AI use cases results in stale data. AI agents need to make decisions on real-time data, especially for fraud detection, approval processes, and customer interactions. A logical data layer makes it possible to create AI-consumable data products across distributed enterprise systems without constantly moving or duplicating data. Data products need to carry business context, semantics, and governance alongside the underlying data, so AI agents can consume them reliably at scale. This lets AI teams focus on building reliable agents rather than managing data delivery and governance complexity.
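
As a rough illustration of what such an AI-consumable data product might carry, here is a minimal Python sketch. It is a hypothetical model, not Denodo's actual API: the DataProduct class, the customer_risk_profile product, and the source schemas are all invented stand-ins.

```python
from dataclasses import dataclass, field

@dataclass
class DataProduct:
    """A virtual data product: a governed, business-named view over
    live sources. Only the definition lives here; no rows are copied."""
    name: str                    # business-facing name
    description: str             # plain-language semantics for agents and humans
    source_query: str            # federated query pushed down to live systems
    owner: str                   # accountable business owner
    allowed_roles: list[str] = field(default_factory=list)  # governance policy

# Hypothetical product that joins two live systems at query time.
customer_risk = DataProduct(
    name="customer_risk_profile",
    description="Current risk score and exposure per customer, in real time.",
    source_query="""
        SELECT c.customer_id, c.segment, r.risk_score, r.exposure
        FROM crm.customers c JOIN riskdb.scores r USING (customer_id)
    """,
    owner="risk-analytics",
    allowed_roles=["fraud_agent", "credit_analyst"],
)
```

The point of the sketch is that only the definition is stored; the federated query runs against the live sources at request time, so nothing goes stale.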


People often use the term enterprise memory very loosely. In product terms, what does a memory layer actually need to do before it becomes useful for a bank, insurer, or regulator?

For enterprise memory, being contextual and situation-aware is critical. AI mimics human behavior, so its memory needs to do the same: it is all about the context of the interaction and the data products at play. It must go beyond conversational history, especially in highly regulated environments such as financial institutions, and it must be able to determine proper use based on who is requesting a memory and why. In regulated financial environments, auditability is non-negotiable.
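
To make that requirement concrete, here is a minimal sketch, assuming a simple in-memory store, of a memory record that is both context-scoped and auditable. The MemoryRecord shape and the recall rule are illustrative assumptions, not a description of any shipping product.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class MemoryRecord:
    """One unit of enterprise memory: what was remembered, by whom,
    in what context, and when, so every recall can be audited."""
    content: str            # the remembered fact or interaction summary
    data_products: tuple    # which governed data products were in play
    requester: str          # who the memory belongs to
    purpose: str            # business context, e.g. "fraud_review"
    recorded_at: datetime   # immutable timestamp for the audit trail

def recall(store: list, requester: str, purpose: str) -> list:
    """Return only memories whose context matches who is asking and why."""
    return [m for m in store
            if m.requester == requester and m.purpose == purpose]

store = [MemoryRecord("Flagged txn 4417 as anomalous", ("txn_stream",),
                      "fraud_agent_7", "fraud_review",
                      datetime.now(timezone.utc))]
print(recall(store, "fraud_agent_7", "fraud_review"))  # context-scoped recall
```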

DeepQuery suggests a shift from retrieval toward something closer to analytical reasoning on live data. What had to change in the product to support that move in a way that still feels trustworthy to enterprise users?

When we look at BI dashboards and reports, we intuitively trust them because they have been built and curated by others in our business. When DeepQuery generates a report that resembles analyst work, you can't just blindly trust it. DeepQuery builds the report on top of multiple tool calls against different data sets, and that distributed, ad-hoc consolidation can be incomplete or inaccurate without proper semantics. A DeepQuery-generated report should only be trusted if it provides full traceability into how the data was accessed, how it was used, how metrics were calculated, and what methodology was involved. You need to be able to trace the thought process and the data sets behind the report.
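
One way to picture that traceability requirement is a report that carries its own lineage. The structures below are hypothetical and do not reflect DeepQuery's actual output format.

```python
from dataclasses import dataclass

@dataclass
class ToolCall:
    """One step in the report's derivation: which source was queried and how."""
    source: str
    query: str
    rows_returned: int

@dataclass
class TraceableReport:
    """A generated report plus the full lineage needed to trust it."""
    answer: str
    metric_definitions: dict      # how each metric was calculated
    lineage: list                 # every data access behind the answer

report = TraceableReport(
    answer="Q3 exposure up 4% quarter over quarter.",
    metric_definitions={"exposure": "SUM(open_positions.notional)"},
    lineage=[ToolCall("riskdb", "SELECT SUM(notional) FROM open_positions", 1)],
)
for call in report.lineage:       # audit: replay the report's data accesses
    print(call.source, "->", call.query)
```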

Financial institutions care less about a clever answer than a reliable one. What has to be true at the data layer before an agent can work against live financial data without creating unnecessary risk?

The data layer must provide real-time data access with unified business definitions. Data products need clear business definitions before they become consumable by AI agents; if you expose technical data sets directly, you risk inconsistencies and hallucinations. The only reliable answer is a transparent and auditable answer. To build that transparency and improve accuracy, you must put a business semantic layer in place before the data is exposed to AI consumers.
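
Here is a minimal sketch of what a semantic-layer entry might look like, assuming a simple dictionary-based registry; the metric name, expression, and source are invented for illustration. The idea is that agents resolve business terms into governed queries rather than guessing at raw technical columns.

```python
# Hypothetical semantic-layer registry: one unified business definition
# bound to the technical fields that implement it, so every agent
# computes the metric the same way.
SEMANTIC_LAYER = {
    "net_exposure": {
        "definition": "Open long notional minus open short notional",
        "expression": "SUM(pos.long_notional) - SUM(pos.short_notional)",
        "source": "riskdb.positions AS pos",
        "unit": "USD",
    },
}

def resolve(metric: str) -> str:
    """Translate a business term into the governed query an agent may run."""
    entry = SEMANTIC_LAYER[metric]
    return f"SELECT {entry['expression']} FROM {entry['source']}"

print(resolve("net_exposure"))
```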

Denodo’s newer MCP support brings your platform closer to how agents connect to enterprise systems. What makes that meaningful in practice for customers rather than just another protocol to support?

MCP matters because it gives agents a standardized way to access governed enterprise data across different systems without rebuilding integration logic for each deployment. It also makes multi-agent systems easier to coordinate against a shared semantic and governance layer. 
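
For readers who want to see the shape of this, here is a minimal MCP server sketch using the open-source MCP Python SDK (`pip install mcp`). The query_data_product tool and its governance check are hypothetical stand-ins, not Denodo's actual MCP surface.

```python
# Minimal MCP server sketch built on the open-source MCP Python SDK.
# The tool below stands in for a governed query endpoint.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("governed-data-layer")

# Hypothetical governance policy: which roles may read which products.
ALLOWED = {"customer_risk_profile": {"fraud_agent", "credit_analyst"}}

@mcp.tool()
def query_data_product(product: str, role: str) -> str:
    """Run a governed query against a named data product. Any agent that
    speaks MCP can call this without bespoke integration logic."""
    if role not in ALLOWED.get(product, set()):
        return f"DENIED: role '{role}' may not read '{product}'"
    # A real deployment would federate the query to live sources here.
    return f"rows for '{product}' (live, governed)"

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio to any MCP-capable agent
```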

Some of the most valuable financial context still sits across policies, documents, operational systems, and other data that was never designed for AI. What have you learned about making that context usable without flattening it into something less reliable?

The approach needs to be tailored to the specific data sets and business requirements. For document-based unstructured data, vectorization and semantic search are often the most practical approaches, and you can now build a logical layer across both structured and unstructured datasets. In a single data view, you can filter structured data while also running similarity search across unstructured sources, without losing the underlying business context. Operational datasets require a similar semantic layer that adds business context without compromising operational integrity.
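
A toy sketch of that single-view idea follows, assuming pre-vectorized documents tagged with structured keys. The data, the two-dimensional "embeddings", and the hybrid_query function are all invented for illustration; a real system would use a proper embedding model and vector index.

```python
import math

# Unstructured docs, pre-vectorized, tagged with structured keys.
policies = [
    {"region": "EU", "text": "Claims over 10k EUR need dual approval", "vec": (0.9, 0.1)},
    {"region": "US", "text": "Wire fraud escalation procedure",        "vec": (0.2, 0.8)},
]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def hybrid_query(region: str, query_vec: tuple) -> list:
    """Structured filter first, then similarity ranking: one view,
    with the business context (region) preserved alongside the text match."""
    candidates = [d for d in policies if d["region"] == region]
    return [d["text"] for d in
            sorted(candidates, key=lambda d: cosine(d["vec"], query_vec),
                   reverse=True)]

print(hybrid_query("EU", (0.85, 0.15)))
```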

With Denodo 9.4, performance became a bigger part of the story through the Lakehouse Accelerator and Velox. What kinds of agent or analyst workflows start to feel realistic when the execution layer gets materially faster?

Faster execution layers make it practical to query large-scale distributed datasets in real time rather than pre-aggregating them into separate systems. That changes what agents and analysts can realistically do with historical and operational enterprise data.

You’ve worked close to both customer problems and product direction for a long time. When you decide where to place a bet, what tells you a problem is structural enough to build for, not just part of the current AI noise cycle?

When the business drives the AI initiative and there are clear objectives and measures of success, it becomes apparent where the real problems lie. The AI noise usually originates in chasing the coolest tech and trends. Organizations need to take a step back, evaluate their business objectives, and start with the business initiative that will drive the AI project.


As financial institutions move from assistants toward more autonomous systems, what part of that shift feels most important to get right while the rules of the category are still being written?

Financial institutions must get the data foundation right. Without a solid data foundation, you can't succeed with AI. You need to abstract the complexity of distributed enterprise data silos, build a logical data layer with enriched semantics, and make these data products available to your AI agents in a secure and governed environment. Autonomous systems specifically need access to real-time data, so you can't continue the paradigm of copying data into yet another consolidated silo.
