Mo Alami on Building AI-Native Financial Infrastructure

In this conversation, we spoke with Mo Alami, Head of Product at OFZA, about AI-native financial infrastructure, how programmable markets reshape trust and regulation, and why accountability must remain human even as systems become increasingly autonomous.

After nearly three decades in sovereign development finance, what core assumption about financial stability did you realize no longer holds once systems become programmable and AI-mediated?

Traditional financial stability assumes latency: time to react, time to intervene, time to regulate. It is often static, slow, and expensive. Programmable and AI-mediated systems compress that latency dramatically. Capital can move and react at machine speed. The assumption that policy, oversight, and even liquidity buffers can adjust at human speed no longer holds. Stability must be engineered into the architecture itself, through real-time constraints, automated risk controls, and enforceable governance, rather than layered on top after the fact.

Was there a moment when you recognized that crypto markets were not just volatile financial instruments, but early prototypes of AI-native financial infrastructure?

Yes. The simple moment was realizing I wasn’t trading with people; I was trading with systems. In crypto, it becomes obvious quickly that a lot of liquidity and price alignment comes from automation: market making systems quoting continuously, arbitrage bots aligning prices, and in DeFi, smart contracts executing trades based on code. That’s when “AI-native infrastructure” clicked: these markets are already API-first and machine-readable. An AI agent doesn’t need a human UI; it needs a state it can read, rules it can simulate, and an interface it can call.

You’ve worked inside heavily governed institutions and now inside programmable markets. Where do you believe AI should sit in financial systems: decision layer, compliance layer, or capital allocation layer?

AI can and should sit across all three layers and beyond, but with graduated authority. In the compliance layer it should be deeply embedded (monitoring, anomaly detection, surveillance). In the decision layer it should augment, not replace, human judgment. And in the capital allocation layer it can optimize and model, but accountability must remain clearly human and institutionally governed, especially in regulated markets.

As regulatory frameworks like VARA formalize digital asset markets, do you see AI as a tool to enforce compliance, or as a force that will eventually reshape how regulation itself is written?

Initially, AI will serve compliance enforcement: faster onboarding controls, better transaction monitoring, and accelerated AML/CTF workflows. Over time, as financial systems become more programmable, regulation will likely become more structured and machine-operationalizable, moving from static text into policy logic, measurable controls, and continuous compliance expectations.

If autonomous AI agents begin allocating capital independently, what changes first: exchange architecture, liquidity design, or risk governance?

Risk governance changes first. Autonomous allocation increases systemic velocity. Before you redesign liquidity or UI, you need permissions, constraints, and security for what automated systems are allowed to do. For sensitive actions like movement of funds, that means controls such as OTPs, multi-factor authentication, biometric verification, device approvals, and policy-based limits. After that, exchanges need real-time circuit breakers, adaptive margin frameworks, and continuous stress testing to prevent cascading feedback loops between autonomous agents.
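The policy-based limits described above can be made concrete with a minimal sketch. The action names, thresholds, and authentication factors below are hypothetical illustrations, not OFZA's actual controls; the point is that sensitive actions (like fund movement) demand step-up factors that routine trading does not.

```python
from dataclasses import dataclass

# Hypothetical policy sketch: gate an automated agent's action against
# per-action limits and step-up authentication requirements.
@dataclass
class ActionPolicy:
    max_amount: float     # per-transaction limit for this action type
    required_factors: set # e.g. {"otp", "device_approval"} for fund movement

POLICIES = {
    "place_order":    ActionPolicy(max_amount=50_000, required_factors=set()),
    "withdraw_funds": ActionPolicy(max_amount=10_000,
                                   required_factors={"otp", "device_approval"}),
}

def authorize(action: str, amount: float, presented_factors: set) -> bool:
    """Allow the action only if it is known, within its limit,
    and every required authentication factor was presented."""
    policy = POLICIES.get(action)
    if policy is None:
        return False  # unknown actions are denied by default
    if amount > policy.max_amount:
        return False
    return policy.required_factors <= presented_factors

# An agent can trade within limits without step-up auth...
assert authorize("place_order", 1_000, set())
# ...but moving funds requires the extra factors.
assert not authorize("withdraw_funds", 5_000, {"otp"})
assert authorize("withdraw_funds", 5_000, {"otp", "device_approval"})
```

Deny-by-default for unknown actions mirrors the point in the answer: constraints on what automated systems may do come before any redesign of liquidity or market structure.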

Most exchanges are still human-interface-first systems. Are you preparing for a future where APIs are consumed primarily by AI systems rather than human traders?

Yes. Human interfaces will remain important, but machine-to-machine interaction will dominate over time. OFZA offers trading APIs today; they are deterministic, resilient, low-latency, and structured for automated consumption. We also see this as a product opportunity: enabling AI trading bots via APIs and building a marketplace where users can access trading bots in a structured, governed way, keeping risk limits, monitoring, and accountability explicit while still letting users trade freely, in the ways they want.

When designing rule-based crypto indices, are you effectively creating machine-readable investment logic that future AI agents can execute without interpretation?

In many ways, yes. Rule-based indices encode methodology in a transparent way that is inherently machine-readable and repeatable. That’s valuable because it increases predictability and auditability. The key is ensuring robustness under extreme market conditions and maintaining strong governance over methodology changes and edge cases.
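To illustrate what "machine-readable investment logic" means here, consider a minimal sketch of a rule-based index: eligibility, weighting, and a per-asset cap are plain code, so any agent can reproduce the weights deterministically. The thresholds and cap are hypothetical examples, not any real index methodology.

```python
# Hypothetical rule-based index sketch: the methodology (eligibility,
# weighting, per-asset cap) is code, so it executes without interpretation.
# Assumes max_weight * number_of_eligible_assets >= 1, else capping is infeasible.
def index_weights(market_caps: dict, min_cap: float = 1e9,
                  max_weight: float = 0.4) -> dict:
    # Rule 1: eligibility screen by market capitalization.
    eligible = {a: c for a, c in market_caps.items() if c >= min_cap}
    # Rule 2: cap-weighted baseline.
    total = sum(eligible.values())
    weights = {a: c / total for a, c in eligible.items()}
    # Rule 3: cap any single asset and redistribute the excess pro-rata.
    while any(w > max_weight + 1e-12 for w in weights.values()):
        capped = {a for a, w in weights.items() if w >= max_weight}
        excess = sum(weights[a] - max_weight for a in capped)
        for a in capped:
            weights[a] = max_weight
        rest_total = sum(w for a, w in weights.items() if a not in capped)
        for a in weights:
            if a not in capped:
                weights[a] += excess * weights[a] / rest_total
    return weights
```

Because the same inputs always yield the same weights, the methodology is auditable end to end; governance then concentrates on how and when the rules themselves may change.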

In highly volatile markets, correlations break down under stress. Could AI-driven adaptive index governance outperform static rule-based systems, or does that introduce unacceptable opacity?

Adaptive systems can outperform static rules by reacting to regime changes in volatility and liquidity. However, opacity becomes a governance and regulatory concern. In regulated environments, explainability matters. The right balance is bounded adaptivity: adapt within published guardrails, with auditability and clear governance, rather than a black-box allocation process. 
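"Bounded adaptivity" can be sketched in a few lines: an adaptive model may tilt a published base weight, but the guardrails always win. The band width and function names below are hypothetical illustrations of the idea, not a production allocation rule.

```python
# Hypothetical "bounded adaptivity" sketch: an adaptive signal may tilt
# a weight, but only within a published, auditable band around the base.
def bounded_adjust(base_weight: float, signal: float,
                   band: float = 0.05) -> float:
    """Tilt base_weight by an adaptive signal in [-1, 1], but never
    move more than `band` away from the published base weight."""
    signal = max(-1.0, min(1.0, signal))    # clamp the model's output
    proposed = base_weight + signal * band  # adaptive tilt
    lower, upper = base_weight - band, base_weight + band
    return max(lower, min(upper, proposed)) # guardrails always win

# Even an extreme model output cannot escape the band.
assert abs(bounded_adjust(0.30, signal=7.5) - 0.35) < 1e-9
assert abs(bounded_adjust(0.30, signal=-9.0) - 0.25) < 1e-9
```

The model can be as sophisticated or as opaque as you like internally; what regulators and users see is that the realized weight can never leave the published band, which keeps the overall process explainable.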

“Enterprise-grade security” in an AI-accelerated threat environment is no longer static defense. How does machine intelligence change the definition of trust in digital financial infrastructure?

Trust shifts from perimeter defense to continuous verification. AI-driven threats evolve in real time; therefore defense must be behavioral, predictive, and adaptive. Trust becomes less about institutional reputation and more about demonstrable resilience: strong authentication, continuous monitoring, anomaly detection, rapid response, and clear audit trails.

Do you believe the long-term value of regulated exchanges lies in matching buyers and sellers, or in becoming structured data environments that AI systems can reason over?

I don’t think it’s either/or. Matching and price discovery remain core, but they’re increasingly engineered around automated liquidity and market-making systems. A regulated exchange creates the rules, incentives, and integrity layer that attracts reliable liquidity and supports fair price formation. At the same time, as markets become more machine-driven, regulated exchanges increasingly become structured, compliant data environments, because automation and AI depend on clean data, clear rules, and auditability.

Looking back, was there a strategic decision at OFZA where you underestimated the impact AI would have on compliance, liquidity, or product design?

Yes. Early on, we primarily viewed AI as a compliance accelerator: onboarding, transaction monitoring, and surveillance. In retrospect, we underestimated how much AI is a general-purpose “mind” that can improve almost any workflow: support operations, product design, engineering velocity, risk investigations, and even how you model and monitor liquidity. The lesson is that AI isn’t only a safeguard; it is a multiplier across the company, provided governance and accountability are designed in from day one.

Ten years from now, if sovereign digital reserves and AI-driven capital allocation systems converge, what role should regulated exchanges play in an AI-native financial order?

Regulated exchanges should operate as trusted execution and validation layers in an increasingly autonomous financial system. If sovereign-grade digital money and AI-driven allocation converge, exchanges must provide verified liquidity, standardized infrastructure, enforceable governance, and systemic stability. They should also play a role in education and literacy, helping users and institutions understand the limitations, risks, and accountability model of AI-driven finance, while enforcing secure controls and clear limits on sensitive automated actions.
