
Ayako Kawano on Where Climate Systems Break

In this conversation, we spoke with SoranoAI founder Ayako Kawano about the gap between weather forecasts and real-world decision-making.

1. For decades, weather technology has focused on improving forecasts, yet economic losses from extreme weather continue to rise. From your perspective, where exactly does the system break down between prediction and real-world action?

Ayako Kawano: Translating weather forecasts into preparedness, policy planning, and public warnings requires meteorological expertise. Countries and companies that can invest in that expertise tend to respond better; those with limited resources are constrained. That gap in expertise is where the system most often fails to prevent and mitigate economic losses from extreme weather.

2. Many weather platforms still rely on dense dashboards and layered visualizations, especially for enterprise users. In high-stress moments, do dashboards themselves become a structural bottleneck in climate decision-making?

Ayako Kawano: Weather dashboards are often optimized for monitoring, not decision-making. They still require humans to interpret uncertainty and translate visuals into actions, and in high-stress moments people may not have the time or capacity to do that reliably. So yes, in my opinion, dashboards can become a bottleneck in such an environment.

3. Despite zettabytes of new satellite and sensor data, organizations still struggle to act in time. Does this suggest that sensing is no longer the constraint, and that reasoning speed has become the dominant failure point?

Ayako Kawano: Data doesn’t tell a story on its own. Organizations still need technical and domain expertise to translate data into context, impact, and concrete actions. In urgent, time-constrained environments, reasoning speed matters a great deal, because the bottleneck shifts from sensing to producing timely, decision-ready interpretations.

4. SoranoAI defines Kumo as an autonomous AI meteorologist agent rather than a decision-support tool. What was the moment you realized that assistive AI was insufficient, and that agency had to move from humans to machines?

Ayako Kawano: One of SoranoAI’s visions is to democratize weather intelligence for everyone, including countries and organizations that don’t have the resources to employ meteorological expertise to translate forecasts into action. This vision stems from my PhD research at Stanford, where I used machine learning and satellite data to estimate air pollution concentrations so that countries with limited monitoring infrastructure could still inform policy and protect people.

5. Kumo is built as a multi-agent system rather than a single large model. In chaotic environments like weather, what specifically breaks when decision logic is centralized instead of distributed across specialized agents?

Ayako Kawano: In chaotic environments, centralized decision logic becomes a bottleneck and a single point of failure. Weather workflows involve many distinct steps, such as data quality checks, selecting and weighting models, translating weather into local impacts, and generating actions or messages for different stakeholders. When that’s all fused into one brain, it’s harder to adapt and harder to verify. Distributed, specialized agents let you modularize those steps, run them in parallel, and make the system more resilient. 
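
As a rough illustration of that modularity, here is a minimal Python sketch of a pipeline of specialized agents; the agent names, interfaces, and thresholds are hypothetical assumptions, not Kumo’s actual design.

```python
# A minimal sketch of a modular, multi-agent weather workflow.
# Agent names and interfaces are hypothetical, not Kumo's actual design.
from dataclasses import dataclass, field

@dataclass
class Context:
    """Shared state handed from one specialized agent to the next."""
    raw_data: dict
    checked_data: dict | None = None
    forecast: dict | None = None
    impacts: list = field(default_factory=list)
    messages: dict = field(default_factory=dict)

class QualityAgent:
    def run(self, ctx: Context) -> Context:
        # Data quality step: drop missing readings (a real check would
        # also enforce physical bounds and cross-validate sensors).
        ctx.checked_data = {k: v for k, v in ctx.raw_data.items() if v is not None}
        return ctx

class ForecastAgent:
    def run(self, ctx: Context) -> Context:
        # Model selection and weighting would happen here; a fixed
        # stub stands in for that logic.
        ctx.forecast = {"rain_mm": 120, "wind_kph": 85}
        return ctx

class ImpactAgent:
    def run(self, ctx: Context) -> Context:
        # Translate raw forecast values into local impacts.
        if ctx.forecast["rain_mm"] > 100:
            ctx.impacts.append("flash-flood risk in low-lying districts")
        return ctx

class MessagingAgent:
    def run(self, ctx: Context) -> Context:
        # Generate stakeholder-specific guidance from the impact list.
        for audience in ("public", "logistics"):
            ctx.messages[audience] = f"[{audience}] " + "; ".join(ctx.impacts)
        return ctx

def run_pipeline(raw_data: dict) -> Context:
    ctx = Context(raw_data=raw_data)
    # Each step is a separate, swappable agent; no single "brain"
    # becomes a point of failure.
    for agent in (QualityAgent(), ForecastAgent(), ImpactAgent(), MessagingAgent()):
        ctx = agent.run(ctx)
    return ctx

result = run_pipeline({"station_1": 118.0, "station_2": None})
print(result.messages["public"])
```

Because each agent owns exactly one step, any step can be replaced, parallelized, or verified in isolation, which is the resilience point made above.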

6. Within Kumo, do agents ever disagree on outcomes or recommended actions? If so, how does the system resolve those conflicts, and why is internal agent negotiation preferable to exposing uncertainty directly to the user?

Ayako Kawano: In our current architecture, each step has one specialized agent, so we don’t have internal agent disagreement in the debate sense. 

7. Many climate AI systems perform well in data-rich environments but fail elsewhere. Was serving agriculture in India primarily a moral choice, a commercial strategy, or a technical stress test for agentic systems?

Ayako Kawano: We’re still early and haven’t officially launched an agriculture product yet, but we’re piloting with a partner agricultural company in India. It’s mission-aligned because the biggest gaps in weather intelligence are often in resource-constrained settings. But it’s also very pragmatic: it’s a real-world use case with clear outcomes, which makes it a strong test bed for building reliable agentic workflows.

8. You spent years inside UN institutions designed to manage global crises, where response speed is often constrained by governance. Was there a specific experience that convinced you climate response required algorithmic execution rather than policy coordination?

Ayako Kawano: When I worked on a disaster-resilience project with local communities in Djibouti, the hardest part wasn’t getting data; it was translating risk into locally usable guidance that fit the culture and the way decisions are actually made. AI helps because it can rapidly tailor the same risk signal into clear, culturally aligned guidance for different audiences, at scale and with consistent quality.

9. You’ve said that when you feel constrained, you change your environment. Kumo appears to apply this philosophy at scale by reshaping users’ information environments. Was this an explicit product thesis from the beginning?

Ayako Kawano: Honestly, it wasn’t an explicit thesis from day one. We started by trying to make weather intelligence easier to access and use. But as we built and watched how people actually work with weather data, it became clear that the real leverage is in changing the environment around the decision. Kumo reflects my philosophy as a founder: reshaping the information environment so people can move faster, with more clarity, when it matters.

10. As Kumo moves closer to autonomous action, liability becomes unavoidable. If an agent recommends a costly preventive action and the event does not occur, who should bear that cost, and how do you design trust around that risk?

Ayako Kawano: Liability shouldn’t sit on an AI agent alone. When Kumo recommends an action, we make it auditable by letting users see how it generated and ran the analysis code, what data it used, and the assumptions behind the result. 
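
As a rough illustration of that kind of auditability, the sketch below stores a recommendation together with the code, data, and assumptions behind it; the field names and sample values are hypothetical, not SoranoAI’s actual schema.

```python
# Hypothetical audit record for an agent recommendation; the field
# names and sample values are illustrative, not SoranoAI's schema.
from dataclasses import dataclass, asdict
import json

@dataclass
class AuditRecord:
    recommendation: str   # the action the agent proposed
    analysis_code: str    # the code the agent generated and ran
    data_sources: list    # which datasets fed the analysis
    assumptions: list     # stated assumptions behind the result

record = AuditRecord(
    recommendation="Pre-position pumps ahead of forecast heavy rain",
    analysis_code="total = precip.sum(axis=0)  # agent-generated",
    data_sources=["ensemble forecast", "local gauge network"],
    assumptions=["ensemble mean is representative", "gauges are calibrated"],
)

# Persisting the full record lets a human review exactly why an action
# was recommended, even if the forecast event never materializes.
print(json.dumps(asdict(record), indent=2))
```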

11. Looking ahead, where should autonomy stop? In a world of increasing climate volatility, what decisions should never be fully delegated to agents, even if humans are slower?

Ayako Kawano: Especially when ethical boundaries are involved, humans should remain responsible for decisions even if agents can act faster. For example, in a wildfire with limited evacuation transport, an AI can optimize routes and timing, but people must decide the priorities: whether vehicles go first to residents without cars, medically vulnerable people, or communities closest to the fire line. Humans must set the definition of fairness because those are value choices, not technical ones.
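
One way to picture that division of labor: in the sketch below the ranking mechanics are automated, but the priority weights encoding fairness are an input only humans supply. The function, tags, and weights are hypothetical.

```python
# Hypothetical sketch: the agent automates the ranking, but the weights
# that encode "who goes first" are value choices supplied by humans.
HUMAN_SET_PRIORITIES = {
    "no_vehicle": 3.0,            # residents without cars
    "medically_vulnerable": 2.5,  # medically vulnerable people
    "near_fire_line": 2.0,        # communities closest to the fire line
}

def dispatch_order(households: list) -> list:
    """Rank households for limited evacuation transport.

    The scoring mechanics are automated; the weights are not.
    """
    def score(h: dict) -> float:
        return sum(w for tag, w in HUMAN_SET_PRIORITIES.items() if h.get(tag))
    return sorted(households, key=score, reverse=True)

# Under these human-set weights, a car-less, medically vulnerable
# household outranks one that is only close to the fire line.
print(dispatch_order([
    {"id": "A", "near_fire_line": True},
    {"id": "B", "no_vehicle": True, "medically_vulnerable": True},
]))
```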
