DeFi adoption is not blocked by lack of education, but by structural execution failures.
Pheasant founder Tomo Tagami explains why fixing the execution layer, not teaching users more, is the real prerequisite for cross-chain DeFi at scale.
1. You started your career in crypto through education and community building, not infrastructure. At what point did you conclude that education alone could not fix DeFi adoption, and that the real failure was structural execution rather than user understanding?
Tomo Tagami: I began my career in crypto through education and community. What I learned there was twofold. When people truly understand something, they are more willing to act. But understanding alone does not make it possible to act. My first startup, centered on education and community building, benefited from strong market timing and I was able to sell the company in roughly four years. After that experience, the limitation became very clear to me. Even if individual users study harder and become more sophisticated, the DeFi experience does not automatically improve. The real barriers are structural. Bridging and swapping, gas, different assumptions across chains, recovery when something fails, and constant headlines about security incidents all create friction that exists regardless of user knowledge. At some point, I stopped thinking of adoption as a user education problem and started seeing it as an execution problem. When I asked myself what to build next, I concluded that if we want DeFi to become a real part of everyday economic activity, we need to fix the execution layer first. That conviction is what led me to start Pheasant as an infrastructure company. We are still far from mass adoption, and the urgent work is not to expand education, but to build an execution layer where users do not get lost and where failure becomes the exception rather than the default.
2. Your background includes working closely with Japan’s regulatory and institutional ecosystem. Did that environment shape your belief that cross-chain infrastructure must eventually serve institutions and AI agents, not just retail users? How does that influence Pheasant’s long-term design choices?
Tomo Tagami: My experience working closely with Japan’s regulatory and institutional ecosystem has absolutely shaped how I think. Japan is strict across many industries. Products are expected to meet high standards of accountability and safety by default. If you grow used to that bar, it becomes easier to satisfy regulatory expectations globally. More importantly, the trajectory toward institutional participation in crypto was already visible to me several years ago. That implies different requirements. It is not only about convenience for retail users, but also about auditability, incident resilience, and the ability to contain damage in worst-case scenarios. At Pheasant, security and resilience are core design priorities. We focus on making the system structurally hard to attack, for example by segmenting execution paths and limiting blast radius if an attack ever happens. As AI agents become more common, the target of attacks may shift from humans to agents, and the amount of capital controlled by automated systems can grow quickly. Preparing for that world means building cross-chain execution that is secure by construction, not secure only in good times.
3. Many teams today treat zero-knowledge bridges as the inevitable endgame. You’ve consistently defended optimistic verification on the basis of cost and speed. Do you see optimistic bridges as a temporary compromise, or as a model that can dominate retail and agent-driven flows for the next several years?
Tomo Tagami: ZK is often discussed with a sense of inevitability, sometimes abstracted away from current cost and latency constraints. I respect the direction, but most ZK systems are not yet at a point where they can be deployed into consumer-grade cross-chain execution without meaningful tradeoffs. In the networks we integrate, ZK-based approaches can still be expensive and slower than what users expect when they are trying to move value and act quickly. That is why we have consistently defended optimistic verification. Rather than optimizing for a theoretical endgame, our focus has been on shipping execution models that work in practice today, and that can scale to agent-driven flows over the next several years. At the same time, the role of ZK is evolving. We are actively exploring where ZK can be applied surgically, for example in verification components rather than as an all-or-nothing architecture. My expectation is that the pragmatic standard will be a hybrid. Optimistic models will remain dominant for many retail and agent flows because they are cost efficient and fast, while ZK will be used where its guarantees provide the most value.
4. Optimistic systems rely on a critical assumption: at least one honest watcher must always be online. Critics argue that watcher incentives decay when fraud is rare. What is the weakest point in this assumption today, and how are you designing Pheasant so the system remains secure even when nothing goes wrong for long periods?
Tomo Tagami: The weak part of optimistic systems is still the same. They rely on the assumption that at least one honest watcher is always online. Critics are right that watcher incentives can decay when fraud is rare, because the work can feel like insurance you hope you never need. Our approach is to treat this as a long-term systems design problem, not a temporary inconvenience. We plan to create stronger and more persistent incentives by rewarding watchers in PNT, so that participation is economically rational even during long calm periods. At the same time, decentralization should not mean the last line of defense disappears. Today, our core development team is designed to operate as a constant final backstop watcher. In addition, we architect the system to be modular and segmented so that if something goes wrong, it does not automatically cascade through the entire system. Optimistic designs are most dangerous when nothing goes wrong for a long time, because complacency grows. We design for that reality. Security must hold in quiet periods and remain resilient in stressed periods.
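To make the honest-watcher assumption concrete, here is a minimal sketch of the loop such a watcher runs, assuming a hypothetical bridge interface; the names (`getPendingClaims`, `verifyAgainstSource`, `challenge`) are illustrative, not Pheasant’s actual API.

```typescript
// Hypothetical shapes, for illustration only; not Pheasant's actual interfaces.
interface PendingClaim {
  id: string;
  sourceChain: string;
  destChain: string;
  claimedAmount: bigint;
  disputeDeadline: number; // unix seconds; after this the claim finalizes
}

interface BridgeView {
  getPendingClaims(): Promise<PendingClaim[]>;
  verifyAgainstSource(claim: PendingClaim): Promise<boolean>; // re-derive the claim from source-chain data
  challenge(claimId: string): Promise<void>;                  // post a fraud challenge within the window
}

// The optimistic security model reduces to this loop running somewhere, forever,
// even through long periods in which it never finds anything to challenge.
async function watch(bridge: BridgeView, pollMs = 30_000): Promise<void> {
  for (;;) {
    const now = Math.floor(Date.now() / 1000);
    const claims = await bridge.getPendingClaims();
    for (const claim of claims) {
      if (claim.disputeDeadline <= now) continue; // window already closed
      const honest = await bridge.verifyAgainstSource(claim);
      if (!honest) {
        // In a PNT-rewarded scheme, this is where a watcher would earn its incentive.
        await bridge.challenge(claim.id);
      }
    }
    await new Promise((resolve) => setTimeout(resolve, pollMs));
  }
}
```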
5. You’ve reduced dispute windows for certain L2-to-L2 transfers to roughly an hour. From a capital-efficiency perspective, that still exposes relayers to inventory risk. How do you price that risk, and does your system dynamically adjust fees or routing decisions based on volatility and liquidity conditions?
Tomo Tagami: It is true that dispute windows can be shortened in some L2-to-L2 contexts, but it is also true that many approaches achieve that by sacrificing aspects of decentralization. We view capital efficiency and security as a real tradeoff, not a slogan. A period of locking is painful from an inventory and relayer perspective, but it is also part of the cost of maintaining robust security properties. The practical question is how to manage that risk responsibly. We do that through continuous optimization based on large-scale historical execution data. We monitor gas costs and adjust fees and minimum transaction thresholds accordingly. We also rebalance routing as network conditions change, for example when congestion spikes on specific chains. There is still plenty of room to improve, but having processed more than 200 million dollars of volume gives us something important. We can improve based on observation and measured behavior, not just intuition. Over time, this makes pricing and routing more adaptive and more aligned with real liquidity and volatility conditions.
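As a rough illustration of how observation-driven pricing can work, the sketch below derives a fee and a minimum transfer size from gas conditions, lock time, volatility, and inventory utilization. The formula and the constants are placeholder assumptions, not Pheasant’s actual pricing model.

```typescript
// Illustrative inputs a relayer-side pricer might observe; all values are placeholders.
interface RouteConditions {
  destGasPriceGwei: number;     // current gas price on the destination chain
  inventoryLockHours: number;   // how long relayer capital sits locked (dispute window)
  annualVolatility: number;     // e.g. 0.6 for an asset with 60% annualized volatility
  utilization: number;          // 0..1, share of route inventory currently in flight
}

interface Quote {
  feeBps: number;          // fee charged to the user, in basis points
  minTransferUsd: number;  // below this, fixed gas costs dominate and the route is refused
}

function quoteRoute(c: RouteConditions, baseFeeBps = 5): Quote {
  // Inventory risk: capital locked for the dispute window is exposed to price moves.
  const lockYears = c.inventoryLockHours / (24 * 365);
  const volRiskBps = c.annualVolatility * Math.sqrt(lockYears) * 10_000 * 0.5;

  // Scarcity: as route inventory fills up, price the remaining capacity higher.
  const utilizationBps = c.utilization > 0.8 ? (c.utilization - 0.8) * 100 : 0;

  // Fixed-cost floor: destination gas sets a minimum economical transfer size.
  const estFillGasUsd = c.destGasPriceGwei * 0.002; // crude placeholder conversion
  const minTransferUsd = Math.max(10, estFillGasUsd * 50);

  return {
    feeBps: Math.round(baseFeeBps + volRiskBps + utilizationBps),
    minTransferUsd: Math.round(minTransferUsd),
  };
}
```

The point of the sketch is only the structure of the decision: historical execution data feeds each term, so fees and minimum thresholds move with gas, volatility, and liquidity rather than being set once and forgotten.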
6. With the launch of AIntent, Pheasant has repositioned itself from a bridge to what you describe as a DeFAI logistics layer. Strip away the narrative for a moment. What exactly in your system is “AI” today, and where does algorithmic routing end and autonomous decision-making begin?
Tomo Tagami: We aggregate large amounts of execution data to improve fee setting, routing decisions, and transaction sizing. Right now, the system is still more about algorithmic optimization than fully autonomous decision making. But that boundary will shift as AI agents become active DeFi users. In that world, it is not enough to provide a UI for humans. We need interfaces designed for agents to discover, evaluate, and execute intents programmatically. Once that foundation exists, cross-chain movement can become only one step inside a larger automated workflow. For example, a user could express an intent like deploying USDC held on Base into a yield strategy on Arbitrum above a certain threshold, and the execution layer would complete the full sequence, not just the bridge. As intents become more complex, the number of possible execution paths grows exponentially. It is not realistic to handcraft every combination for every user and every chain. That is exactly where AI becomes necessary in DeFi, not as a marketing label, but as a way to manage complexity responsibly.
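The USDC-on-Base example above can be made concrete with a hypothetical intent shape: the user constrains the outcome, and the execution layer chooses the steps. The types and field names below are illustrative assumptions, not an actual Pheasant or ERC-7683 schema.

```typescript
// Hypothetical intent shape; not an actual Pheasant or ERC-7683 schema.
interface YieldIntent {
  kind: "deploy-into-yield";
  owner: string;                 // user or agent address
  sourceChain: "base";
  sourceAsset: "USDC";
  amount: bigint;                // e.g. 5_000_000_000n for 5,000 USDC (6 decimals)
  destChain: "arbitrum";
  constraints: {
    minApyBps: number;           // only execute if the strategy yields above this threshold
    maxSlippageBps: number;
    deadline: number;            // unix seconds; abandon the intent after this
  };
}

// The execution layer, not the user, composes the concrete path: which bridge,
// which swap venue, which protocol. The intent only constrains the outcome.
type ExecutionStep =
  | { action: "bridge"; from: string; to: string; asset: string; amount: bigint }
  | { action: "swap"; chain: string; sell: string; buy: string; amount: bigint }
  | { action: "deposit"; chain: string; protocol: string; asset: string; amount: bigint };

function exampleIntent(): YieldIntent {
  return {
    kind: "deploy-into-yield",
    owner: "0x0000000000000000000000000000000000000001",
    sourceChain: "base",
    sourceAsset: "USDC",
    amount: 5_000_000_000n,
    destChain: "arbitrum",
    constraints: { minApyBps: 500, maxSlippageBps: 30, deadline: 1_767_225_600 },
  };
}
```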
7. A growing view in crypto is that AI agents, not humans, will soon be the primary users of DeFi. Are you already designing Pheasant for machine-to-machine interaction rather than human UX? What interfaces or abstractions are you building so agents can natively discover, evaluate, and execute cross-chain intents?
Tomo Tagami: Based on current trajectories, the shift toward AI agents as primary DeFi users is likely to accelerate within the next few years. With that assumption, we design Pheasant not only for human UX, but also for machine-to-machine interaction. We are already expanding support across multiple intent interfaces, including ERC-7683, and we believe agent-native abstractions are likely to emerge as an important standard layer. If they do, supporting them becomes a top priority. In some cases, we may also build parts of that abstraction ourselves, especially where execution safety is concerned. The major challenge is key management and authorization. Letting an agent directly hold a private key introduces serious risk. Agents can sign transactions the human did not intend, or use permissions in unexpected ways. That means agent interfaces must include explicit guardrails. We need clear permission models, constrained execution policies, and verification friendly workflows so that agent behavior remains bounded and auditable. The goal is not to remove humans from control. The goal is to let machines execute safely within rules humans can understand and enforce.
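One way to picture the guardrails Tagami describes is a declarative policy checked before any agent-initiated transaction is signed. The shapes below are a hedged sketch under assumed names, not Pheasant’s actual permission model.

```typescript
// Illustrative guardrail policy for an agent's delegated authority; names are hypothetical.
interface AgentPolicy {
  allowedChains: string[];          // where the agent may act at all
  allowedAssets: string[];          // which tokens it may move
  maxPerTxUsd: number;              // hard cap per transaction
  maxDailyUsd: number;              // rolling cap per 24 hours
  allowedIntentKinds: string[];     // e.g. ["bridge", "deploy-into-yield"]
  expiresAt: number;                // unix seconds; the permission itself is temporary
}

interface ProposedAction {
  chain: string;
  asset: string;
  usdValue: number;
  intentKind: string;
  timestamp: number;                // unix seconds
}

// A pure check the execution layer can run (and log for audit) before any key is touched.
function isAllowed(policy: AgentPolicy, action: ProposedAction, spentTodayUsd: number): boolean {
  if (action.timestamp > policy.expiresAt) return false;
  if (!policy.allowedChains.includes(action.chain)) return false;
  if (!policy.allowedAssets.includes(action.asset)) return false;
  if (!policy.allowedIntentKinds.includes(action.intentKind)) return false;
  if (action.usdValue > policy.maxPerTxUsd) return false;
  if (spentTodayUsd + action.usdValue > policy.maxDailyUsd) return false;
  return true;
}
```

Because the check is deterministic and the policy is explicit, every agent action remains bounded by rules a human set and can be audited after the fact.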
8. AI-driven routing requires data: transaction history, liquidity behavior, congestion, and preferences. How do you reconcile intelligent decision-making with decentralization and privacy? Where does inference happen, and how do you avoid creating a new centralized control point?
Tomo Tagami: Data is essential for AI-driven routing, but crypto is built around values that can seem at odds with typical data-hungry AI systems, such as privacy, encryption, and pseudonymity. I do not see this as an excuse to avoid data. I see it as a design constraint that should shape how data is used. It is possible to work with behavior and market data without tying it to personally identifiable information. It is also increasingly feasible to use privacy-preserving techniques, including ZK approaches, to enable aggregation and inference while maintaining anonymity. At Pheasant, we have processed over 200 million dollars in volume and more than 100,000 transactions. This data is not linked to personal identity, but it does capture on-chain facts such as networks, tokens, gas conditions, and congestion dynamics. A key point is that much of what we rely on is already on-chain and publicly observable. Our differentiation is in how we structure and aggregate it to make execution decisions more reliable. As inference becomes more sophisticated, we believe transparency and decentralization become more important, not less. We focus on standardized interfaces and verifiable execution so that intelligence does not become a new centralized control point.
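As one possible illustration, execution data of this kind can be aggregated per route rather than per user, so routing decisions improve without touching identity. The record and statistics shapes below are hypothetical.

```typescript
// Hypothetical record of one completed transfer; there is no user identity here,
// only on-chain facts that are already publicly observable.
interface ExecutionRecord {
  sourceChain: string;
  destChain: string;
  token: string;
  gasPaidGwei: number;
  fillTimeSec: number;
  succeeded: boolean;
}

interface RouteStats {
  count: number;
  successRate: number;
  medianFillTimeSec: number;
  avgGasGwei: number;
}

// Group purely by route and token, then summarize; the output feeds routing decisions.
function aggregateByRoute(records: ExecutionRecord[]): Map<string, RouteStats> {
  const grouped = new Map<string, ExecutionRecord[]>();
  for (const r of records) {
    const key = `${r.sourceChain}->${r.destChain}:${r.token}`;
    const bucket = grouped.get(key) ?? [];
    bucket.push(r);
    grouped.set(key, bucket);
  }
  const stats = new Map<string, RouteStats>();
  for (const [key, rs] of grouped) {
    const times = rs.map((r) => r.fillTimeSec).sort((a, b) => a - b);
    stats.set(key, {
      count: rs.length,
      successRate: rs.filter((r) => r.succeeded).length / rs.length,
      medianFillTimeSec: times[Math.floor(times.length / 2)],
      avgGasGwei: rs.reduce((sum, r) => sum + r.gasPaidGwei, 0) / rs.length,
    });
  }
  return stats;
}
```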
9. ERC-7683 pushes cross-chain intents toward standardization, which could commoditize liquidity and execution. In that world, where does Pheasant’s durable advantage live: execution speed, routing intelligence, distribution, or something else?
Tomo Tagami: Intent standards will keep multiplying, and it is difficult for anyone to confidently say which will become the durable global standard. That creates two requirements. Do not misread where standardization is heading, and stay flexible enough to integrate quickly when direction becomes clearer. With ERC-7683 in particular, many of the projects publicly supporting it are networks. They are not necessarily the execution layer that coordinates across chains. As standards increase, integration cost becomes real. Each new interface can demand meaningful engineering effort. Pheasant was designed from the beginning to support multiple standards and to switch context as the ecosystem evolves. That flexibility, combined with real-world execution data, is where I believe durable advantage lives. In a world where interfaces become standardized, the differentiator shifts to how well you optimize execution and how safely you complete it under changing conditions.
10. Pheasant supports more than 30 networks, including many long-tail L2s. Is this a deliberate strategy to win early ecosystems, or a structural bet on fragmentation? At what point does supporting more chains become a liability rather than an advantage?
Tomo Tagami: Supporting more than 30 networks is both a deliberate strategy and a structural bet. We launched Pheasant when L2 ecosystems were just beginning to emerge. Integrating new L2s quickly helped us participate in early ecosystem growth, and that was an important part of our traction. At the same time, too many integrations can fragment liquidity, degrade UX, and increase operational burden. We address this with two parallel strategies. One is our own protocol path, where we focus on the networks we believe are truly important, and we are willing to remove support when a network declines over time. The other is to be an aggregator that integrates external DEXs and bridges, which diversifies long-term risk. Aggregation reduces the need to permanently maintain our own liquidity everywhere and can lower per-chain maintenance cost. More chains become a liability when the number itself becomes the goal and execution quality or security starts to suffer. For us, coverage is only valuable if it improves execution outcomes.
11. Your roadmap positions the PNT token as a core component of the DeFAI economy. Beyond fees and staking, do you envision AI agents directly holding, spending, or even participating in governance via PNT?
Tomo Tagami: I believe a future where AI agents directly hold PNT, spend it, and participate in governance is very likely. Governance in particular is becoming too information-dense for humans alone to keep up with. Proposal evaluation and on-chain decision making will increasingly rely on automated analysis and assistance. Our roadmap includes the possibility of AI-driven governance, which means building interfaces where agents can hold PNT and participate in decision making. But again, the core requirement is safety. Agent participation needs permission design, accountability, and verifiability. Agents should operate within clearly defined and verifiable permission boundaries, and within constrained models where humans can validate outcomes and where governance remains resilient to manipulation.
12. Looking ahead to 2027, if Pheasant succeeds completely, what disappears first: manual bridging, chain awareness, or even the Pheasant brand itself? In your ideal end state, what should users and agents no longer need to think about?
Tomo Tagami: If Pheasant succeeds completely by 2027, the first thing that should disappear is the fragmented manual workflow. The next thing that should fade is chain awareness. Users should not need to pick chains. They should only need to express objectives. Today, if someone holds USDC on Base and wants to capture an attractive ETH denominated yield opportunity on Arbitrum, they are forced through many steps. Bridge, swap, search for protocols, connect wallets, and deposit. In the ideal future, a user expresses an intent like using their assets within a defined risk range to target five to ten percent annual yield, and the execution layer completes the process end to end. In that world, Pheasant’s value goes beyond bridging. It becomes an interface for safely completing objectives, from intent to execution, across fragmented infrastructure. In the end state, both users and agents should no longer need to think about routes, steps, or chain-specific differences. They should focus on outcomes, while the system absorbs complexity and executes safely in the background.
Editor’s Note
This interview argues that DeFi will not reach mass adoption until execution complexity is absorbed by infrastructure rather than pushed onto users.

