Siavash Ghorbani on Coordination as the Bottleneck

In this interview, Stilla founder Siavash Ghorbani argues that as AI accelerates individual output, coordination and shared context have become the true limiting factors inside technical teams.

1. At Shopify, you helped reduce global transaction friction through systems like Shop Pay. At Stilla, you are tackling a different problem: internal alignment friction inside technical teams. From an engineering perspective, do you see team decision-making as a distributed consensus problem, similar in nature to large-scale payment reliability?

Siavash Ghorbani: In many ways, yes. Both require coordinating independent actors (services or humans) toward a consistent state despite latency, partial information, and failure modes.

The difference is that distributed systems have clear protocols, such as two-phase commit, eventual consistency, and CRDTs. Teams mostly don’t. They coordinate through chats, meetings, or serendipitous hallway conversations. When AI makes individuals 10-100x faster, this informal consensus model breaks down. You can’t sync through weekly standups when work velocity is that high.

That’s why we built Stilla as a streaming context layer. Just as distributed databases need transaction logs to stay consistent, teams need a living record of decisions, work, and reasoning that flows continuously across tools. The bottleneck shifted from capability to coordination. The only fix is automating alignment itself, at the pace of AI.
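
To make the transaction-log analogy concrete, here is a minimal sketch of an append-only context log that consumers replay to catch up. The names and shapes are hypothetical, for illustration only, not Stilla’s actual API.

```typescript
// Hypothetical sketch of the transaction-log analogy: teams, like
// distributed databases, stay consistent by replaying an ordered,
// append-only log of events rather than syncing ad hoc.

type ContextEvent = {
  seq: number;                          // monotonic position in the log
  kind: "decision" | "task" | "note";
  actor: string;                        // who produced the event
  body: string;                         // the decision, work item, or reasoning
  at: Date;
};

class ContextLog {
  private events: ContextEvent[] = [];
  private nextSeq = 0;

  append(kind: ContextEvent["kind"], actor: string, body: string): ContextEvent {
    const event = { seq: this.nextSeq++, kind, actor, body, at: new Date() };
    this.events.push(event);
    return event;
  }

  // Consumers (people or their agents) replay everything after the last
  // sequence number they saw, so nobody depends on a weekly standup to sync.
  replaySince(lastSeen: number): ContextEvent[] {
    return this.events.filter((e) => e.seq > lastSeen);
  }
}
```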

2. Tictail democratized access to commerce infrastructure for independent brands. Stilla appears to democratize organizational memory and execution for smaller teams. In an AI-native era, do tools like Stilla compress the advantage of large organizations, or do they ultimately widen the gap between high-discipline teams and everyone else?

Siavash Ghorbani: At Tictail, we democratized commerce infrastructure. The constraint was technical: small brands couldn’t build their own payment processing. Remove that constraint, and they competed on execution, taste, and community. Infrastructure became a leveler.

Stilla does the same thing for organizational discipline. The constraint isn’t that teams don’t want to be well-organized—it’s that maintaining discipline is expensive. Someone has to manage the task status in Linear. Someone has to update the spec in Notion. Someone has to chase follow-ups, update statuses, and keep documentation current after each meeting or completed task.

World-class teams needed systems and discipline. They wrote everything down. They kept tasks pristine. They documented decisions. They followed through. That overhead was the price of excellence, and most teams couldn’t afford to pay it.

Stilla automates that overhead. You don’t need someone to maintain your Linear. Stilla keeps it current based on the actual state of your progress. You don’t need process cops chasing action items. Stilla turns decisions into executable work immediately. You don’t need documentation sheriffs in Notion or Google Drive. Stilla helps you do that and is itself the living record.

Any team can now perform like world-class teams because the discipline layer is automated. And on top of that, AI accelerates the actual work—writing code, drafting docs, creating designs. You get both: the organizational excellence you couldn’t maintain manually, plus 10x velocity on execution.

That’s the compression. Small teams without dedicated program managers, without rigid processes, without someone babysitting the tools—they can now operate with the same coherence and follow-through as companies 10x or 100x their size. The coordination costs of scale evaporate when that work is free.

3. Stilla deliberately avoids meeting bots and instead captures context directly on the user’s device. This no-bot approach improves trust and privacy, but dramatically increases technical complexity. What were the hardest architectural trade-offs you had to solve to make this model reliable at scale?

Siavash Ghorbani: I personally dislike when robots join a meeting. It shifts the entire mood. Everyone becomes aware they’re being recorded by a third party, even if they already knew it intellectually. It changes how people talk.

Instead, I assume everyone has their own setup to augment their meeting experience—whether that’s transcription tools or tools like Stilla that also let you gather information during the meeting to understand subjects, research context, or execute tasks in real time. We think this model is the best option: each participant controls their own augmentation, and the intelligence layer works for them individually while building shared organizational memory.

The trade-off is that we miss out on some API capabilities of the meeting platform. But the upside is worth it: Stilla works with every meeting app (Zoom, Meet, Teams, FaceTime and others) and even in-person meetings. Universal compatibility beats platform-specific integration.

The technical challenges are substantial, though. Natively capturing audio, doing so performantly and within the appropriate permission models of the OS, transcribing the audio streams, identifying speakers—all of this requires low-level code in C++ and Objective-C. Echo cancellation alone is extremely finicky: if your microphone picks up system audio playing through speakers, you get feedback loops and garbled transcription. We had to build a custom solution that works in real time across different audio devices, OS quirks, and varying latencies.
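
To illustrate why this class of problem is finicky, here is a toy normalized-LMS adaptive filter in TypeScript, the textbook idea behind acoustic echo cancellation. This is a sketch for intuition only, not Stilla’s C++/Objective-C implementation, which also has to handle device latency, clock drift, and double-talk detection.

```typescript
// Toy normalized-LMS echo canceller: estimates how the speaker (reference)
// signal leaks into the microphone and subtracts that estimate, leaving
// the near-end speech.

class NlmsEchoCanceller {
  private weights: Float64Array;   // adaptive estimate of the echo path
  private history: Float64Array;   // recent reference (speaker) samples

  constructor(private taps = 128, private mu = 0.5, private eps = 1e-6) {
    this.weights = new Float64Array(taps);
    this.history = new Float64Array(taps);
  }

  // Process one sample pair: what we played (reference) and what the mic heard.
  process(reference: number, mic: number): number {
    // Shift the newest reference sample into the history buffer.
    this.history.copyWithin(1, 0, this.taps - 1);
    this.history[0] = reference;

    // Predict the echo as a weighted sum of recent reference samples.
    let echoEstimate = 0;
    let power = this.eps;
    for (let i = 0; i < this.taps; i++) {
      echoEstimate += this.weights[i] * this.history[i];
      power += this.history[i] * this.history[i];
    }

    // The error is the mic signal with the estimated echo removed.
    const error = mic - echoEstimate;

    // NLMS update: step size normalized by the reference signal power.
    const step = (this.mu * error) / power;
    for (let i = 0; i < this.taps; i++) {
      this.weights[i] += step * this.history[i];
    }
    return error; // cleaned sample
  }
}
```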

Ultimately the challenge wasn’t just building it. It was making it super reliable across all kinds of situations and setups. Bluetooth headphones, weird audio setups, different OS versions, varying microphone sensitivity—every edge case breaks differently. We’ve made it work really well now, but getting to that reliability wasn’t trivial.

4. Many teams already record meetings, summarize conversations, and store decisions. Your argument is that this still fails because context decays across tools and time. At what point did you realize that meeting summaries alone cannot function as a system of record for real decision-making?

Siavash Ghorbani: At face value, most meeting-notes tools seemed good. We tried all of them ourselves; ideally we would have relied on them instead of building our own solution. Most provided a decent high-level summary, but once you start to actually rely on them as a system of record, where the details of decisions matter, you quickly realize they lack depth. Even when the notes read like a well-written document, you can tell the tool doesn’t understand what it’s documenting. It doesn’t understand the connection between the topic and all the other information within the organization that’s relevant to the subject.

Stilla has far more depth in its understanding because it connects the meeting to everything else happening in the organization. It doesn’t just transcribe what was said; it grounds the conversation in that organizational knowledge. This grounded understanding allows Stilla to identify the actual decisions and next steps, not just surface-level talking points.

Once Stilla understands the decisions, it can ensure all your other systems stay up to date. A decision in a meeting becomes a Linear ticket, which triggers a GitHub PR, which gets announced in Slack, which updates the Notion roadmap. This creates a stream of context as things happen throughout the company—information flows continuously across tools, keeping everyone aligned on the current state.

This is fundamentally different from static summaries in a dynamic system. Real work is constantly moving. And because Stilla keeps all your systems synchronized, context doesn’t decay—it streams forward as the organization moves.
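
As a rough sketch of that chain in TypeScript, with hypothetical client interfaces standing in for the real Linear, GitHub, Slack, and Notion integrations (this is not Stilla’s actual API):

```typescript
// Hypothetical sketch of fanning a captured decision out to downstream
// tools, with each artifact linked back to the ones created before it.

interface Decision {
  title: string;
  reasoning: string; // why the decision was made, kept with the work
  owner: string;
}

interface ToolClient {
  name: string;
  // Turns the decision into the tool's native artifact (ticket, PR,
  // announcement, roadmap update) and returns a link to it.
  apply(decision: Decision, upstreamLinks: string[]): Promise<string>;
}

async function propagateDecision(
  decision: Decision,
  tools: ToolClient[],
): Promise<string[]> {
  const links: string[] = [];
  for (const tool of tools) {
    // Each new artifact receives the links created so far, so the chain
    // (meeting -> ticket -> PR -> announcement) stays connected.
    links.push(await tool.apply(decision, [...links]));
  }
  return links;
}
```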

5. You have described Stilla as a shared intelligence layer rather than an assistant for individuals. In practice, what breaks first when AI is deployed at the team level instead of the individual level, and why do single-user AI tools consistently fail in multi-stakeholder workflows?

Siavash Ghorbani: Context fragmentation and permission breakdown.

Single-user AI is easy: you work in your own context, with your own permissions. You ask Cursor to write code, and it uses your GitHub credentials. But the work you’re doing probably depends on decisions happening in real time across other places—meetings, Slack threads, Linear updates, GitHub discussions. That information is super valuable to what you’re building, but it’s not instantly available to your AI.

Some teams have tried variants of creating shared repos and using Claude Code to share context across the team. This works for a tiny team of 3-4 people who all trust each other completely. But the second you scale that even a tiny bit, the whole permissioning structure breaks down. Who should have access to the HR discussions? Payroll conversations? Customer pricing negotiations? You can’t just dump everything into a shared context that everyone’s AI can read.

Stilla solves this with granular controls while inheriting your permissions in the target systems. You have access to Linear, so Stilla can read your Linear tickets. You have access to GitHub, so Stilla can read your repos. But Stilla also knows the boundaries: your private chat is yours, shared canvases are visible to the team, and HR content stays restricted to people who should see it.
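
A minimal sketch of that boundary model, with hypothetical types rather than Stilla’s implementation: each context item keeps the access scope of its source system, and retrieval only returns what the requesting user could already see there.

```typescript
// Hypothetical sketch of permission-inherited retrieval: context items
// carry the access scope of the system they came from, and queries only
// return what the requesting user could already see in the source tool.

type Scope =
  | { kind: "private"; owner: string }     // e.g. your own chat
  | { kind: "team"; team: string }         // e.g. a shared canvas
  | { kind: "group"; members: string[] };  // e.g. HR-restricted content

interface ContextItem {
  source: "linear" | "github" | "slack" | "meeting";
  scope: Scope;
  content: string;
}

interface User {
  id: string;
  teams: string[];
}

function canRead(user: User, scope: Scope): boolean {
  switch (scope.kind) {
    case "private": return scope.owner === user.id;
    case "team":    return user.teams.includes(scope.team);
    case "group":   return scope.members.includes(user.id);
  }
}

// Only context the user is allowed to see ever reaches their AI.
function visibleContext(user: User, items: ContextItem[]): ContextItem[] {
  return items.filter((item) => canRead(user, item.scope));
}
```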

This allows Stilla to provide context in real time to other team members or their agents so they can act on the latest information. If I’m in a meeting deciding to change our pricing model, and you’re writing code for the pricing page, Stilla can surface that decision to you immediately—even though you’re not in the meeting. Your AI can adjust the work based on current context, not stale assumptions from last week.

This is how we improve our ability to align with each other. And if we do that well, we can truly run with our AI agents at 10x or 100x velocity while still remaining aligned. The alternative is everyone moving fast in their own bubble, building the wrong things really efficiently.

6. Stilla integrates deeply with Linear, GitHub, and Slack, turning conversations into tasks, pull requests, and execution steps. Do you see these tools remaining systems of record, or do you expect them to evolve into execution surfaces powered by an upstream intelligence layer like Stilla?

Siavash Ghorbani: They’ll become execution surfaces.

No company works exclusively in a single ecosystem. You don’t do all your work in Slack. You don’t do all your work in Notion. You don’t do all your work in Linear or GitHub. Real work happens across all of these tools, and often many more, simultaneously.

This fragmentation is precisely why you need an intelligence layer that works across everything. Linear, GitHub, Slack, Notion—they’re all amazing at what they do. Task tracking, code hosting, communication, documentation. But they’re fundamentally input systems: you type in a box, hit submit. They store state, but they don’t synthesize it across boundaries.

The intelligence layer moves upstream. Stilla becomes the place where context lives, decisions get captured, and work gets orchestrated across all your tools. Then it pushes execution steps downstream: create this Linear ticket, open this GitHub PR, send this Slack message, update this Notion doc.

We’re already seeing this shift. Our deepest integrations—Linear, GitHub, Slack, Notion—are moving from “places we pull data from” to “places we push actions to.” The system of record is the collaborative context layer that understands your entire stack. The execution surface is wherever the action needs to land.

7. As AI-generated code, tasks, and documentation increase exponentially, many CTOs worry about data entropy rather than data scarcity. How does Stilla prevent its memory layer from becoming another noisy archive instead of a high-signal decision engine?

Siavash Ghorbani: Traceability.

The core problem with “AI memory” is that most systems treat context as append-only. Every conversation, document, and summary gets dumped into a vector database. Retrieval becomes a search problem: how do I find signal in exponentially growing noise?

Stilla enables traceability instead. You can trace the lineage of work backwards: the code came from a PR, which came from an agent instruction, which started from a meeting where a decision was made. Every output has a clear origin.

This traceability is surfaced in the interfaces you already use. When you’re looking at a Linear ticket in Linear, you can see it was created from a Stilla canvas and click through. In GitHub, the PR description links back to the Linear ticket. In Slack, the announcement references the PR. The entire chain is connected across tools.

But more importantly, Stilla follows this trace to understand context. When you ask “Why did we build this feature?”, Stilla doesn’t just search for keywords—it traces the PR back to the ticket, the ticket to the canvas, the canvas to the meeting transcript where the decision was made. It reconstructs the full context by following the lineage of work.
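
In code terms, the lineage is just a back-pointer per artifact that you can walk. A hypothetical sketch, not Stilla’s schema, with made-up ids for illustration:

```typescript
// Hypothetical sketch of lineage tracing: every artifact records its
// origin, so "why does this exist?" is answered by walking the chain
// backwards rather than searching for keywords.

interface Artifact {
  id: string;
  kind: "meeting" | "canvas" | "ticket" | "pr" | "announcement";
  summary: string;
  origin?: string; // id of the artifact this one was created from
}

function traceLineage(id: string, byId: Map<string, Artifact>): Artifact[] {
  const chain: Artifact[] = [];
  for (
    let cur = byId.get(id);
    cur;
    cur = cur.origin ? byId.get(cur.origin) : undefined
  ) {
    chain.push(cur);
  }
  // e.g. traceLineage("pr-42", artifacts) might return the chain
  // [pr-42, ticket-17, canvas-3, meeting-jun-4] (ids are illustrative).
  return chain;
}
```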

This prevents entropy in two ways:

  1. Context preservation: The reasoning behind decisions doesn’t get lost—it’s linked directly to the work
  2. Accountability: You always know where something came from and why it exists

When AI generates exponentially more artifacts, traceability becomes the difference between a useful memory layer and a junk drawer. If you can’t trace how a piece of work came to exist, you can’t trust it, maintain it, or decide whether to keep it.

This also makes Stilla self-organizing. You don’t need someone to manually tag, categorize, and maintain your knowledge base. The connections are created automatically as work flows from decisions to execution. The graph structure emerges from actual work, not from someone’s attempt to impose order after the fact.

8. Stilla supports Model Context Protocol (MCP) to allow agents to access tools and repositories safely. From your experience, where does MCP fall short when dealing with long-lived, politically constrained team decisions, and what additional context do human organizations generate that protocols still struggle to encode?

Siavash Ghorbani: MCP is great for capability discovery (“what can I do with this system?”). It falls short on political context (“should I do this?”).

MCP is perfect for technical execution, but human organizations generate implicit metadata that protocols struggle to surface:

  • Trust relationships (“ask Kaj before touching that codebase”)
  • Temporal states (“we’re revisiting this decision next week”)
  • Emotional context (“the team is burnt out on this topic”)
  • Power dynamics (“run this by Siavash before announcing”)

MCP gives us the plumbing. But team AI needs a decision history (and perhaps a social graph of sorts) to navigate organizational reality.
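
A hypothetical sketch of what that missing layer might look like: a policy check consulted before an agent executes a capability the protocol says is available. None of these types are part of MCP; they are made up to illustrate the idea.

```typescript
// Hypothetical sketch of the layer MCP doesn't cover: before executing a
// capability that is technically available, consult organizational
// context to answer "should I do this?".

interface OrgContext {
  trustedReviewers: Map<string, string>;  // resource -> person to ask first
  decisionsUnderReview: Set<string>;      // topics being revisited soon
  sensitiveTopics: Set<string>;           // e.g. the team is burnt out on it
}

type Verdict =
  | { allowed: true }
  | { allowed: false; reason: string };

function shouldExecute(capability: string, topic: string, ctx: OrgContext): Verdict {
  const reviewer = ctx.trustedReviewers.get(capability);
  if (reviewer) {
    return { allowed: false, reason: `ask ${reviewer} before touching this` };
  }
  if (ctx.decisionsUnderReview.has(topic)) {
    return { allowed: false, reason: "decision is being revisited; hold off" };
  }
  if (ctx.sensitiveTopics.has(topic)) {
    return { allowed: false, reason: "sensitive topic; route through a human" };
  }
  return { allowed: true };
}
```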

Long term, I expect this meta-understanding to be embedded in the organizational context, and agents to become incredibly capable of navigating it.

9. You emphasize strict data ownership and avoid training on customer data, which is critical for companies like Ramp or Spotify. As models increasingly rely on fine-tuning to improve understanding, does this stance impose a ceiling on intelligence, or do you believe retrieval-driven context can outperform personalization through training?

Siavash Ghorbani: We’re not inherently against fine-tuned models. If large customers like Ramp or Spotify come to us and want to explore it, we’d partner with them to do so—but that would apply only within rather narrow aspects of where we use AI models.

There are situations where fine-tuning makes sense. But the general models are incredibly capable. Our bet has been, and continues to be, that general models will beat fine-tuned solutions within three months anyway. The pace of foundation-model improvement is faster than the pace at which fine-tuned models can stay relevant, at least in our space.

10. Today, many teams operate a fragmented stack: Otter for notes, Slack for discussion, Linear for tasks, GitHub for code. Convincing organizations to centralize their collaborative memory is a high-trust decision. What objections do CTOs raise most often, and which ones turned out to be structurally correct?

Siavash Ghorbani: One of the first decisions we made with Stilla was to always be backwards compatible with how the organization works. If someone doesn’t want to use Stilla, that’s fine. They’ll still see up-to-date tasks in Linear, well-written PRs in GitHub, organized Slack threads. At the same time, those who do choose to use it will simply move much faster—all in the same org.

That means the decision is not binary. It’s not about throwing everything else out. It’s about enabling the capability. And from what we see, once someone starts using Stilla, they’re incredibly likely to continue doing so.

We go very far with this perspective, even with key Stilla capabilities. For instance, if someone wants to use a different note taker, that’s fine. We treat it as just another tool in the suite an organization uses, and we focus on bringing that context into Stilla rather than trying to prevent you from using a tool you really like or that works really well for you.

11. As AI systems move from execution to orchestration, the role of senior engineers shifts toward defining context and reviewing system behavior. Do you worry that over-automation risks eroding engineering intuition, and how should leaders use tools like Stilla to develop judgment rather than dependency?

Siavash Ghorbani: Short term I don’t worry about eroding intuition. I worry about teams losing the habit of building it.

Automation is like GPS: it gets you to the destination, but you never learn the city. The risk isn’t that senior engineers forget how to code—it’s that junior engineers never develop judgment about where to put their attention.

The failure mode isn’t “AI makes engineers dumb.” It’s “teams forget that judgment is a muscle, and muscles atrophy without use.” Right now that muscle is still quite valuable when you write software.

With that said, I don’t know how much longer that necessarily remains true. If I were to guess, by the end of this year the dynamics will have shifted significantly.

12. Looking ahead to 2030, do you believe teams will still rely on meetings and status updates at all, or will most coordination happen implicitly through shared context layers? In that future, what remains uniquely human about founding and leading a company?

Siavash Ghorbani: The pace is so fast right now that I think it’s hard to even reason about where we will be by the end of the year. With such huge error bands, reasoning about 2030 is a shaky exercise.

I would confidently assume, these systems will become incredibly capable, and they’ll operate autonomously. They’ll collaborate with each other, and quickly adapt to the input of humans. What form that human input has remains to be seen. Perhaps we’ll just go about our days and Interacting with each other may remain central. Humans spending time with and caring about other humans is perhaps the most human thing there is. Agents may simply observe those interactions, infer intent, and execute, infer what needs to happen, and execute on that.

The teams that win this transition will be the ones whose agents can stay aligned with each other while operating at machine speed: observing human collaboration, understanding intent, and coordinating execution across systems. The shared context layer won’t just be nice to have. It’ll be the substrate that makes autonomous team coordination possible at all.

Ultimately, the judgment about what’s worth doing won’t be a technical optimization problem. It’ll be about understanding what the world needs, wants, and cares about. Some of that will be directed by humans, some by the agents themselves.

Editor’s Note

This conversation reflects a broader shift from tools for individual productivity toward systems that encode decisions, context, and execution at the team level.
