Sebastian Völkl on Why Engineering Breaks at Requirements | Interview

Requirements were meant to translate human intent into executable systems, but in modern engineering they have become a primary failure point. In this interview, Dalus founder Sebastian Völkl explains why document-driven requirements collapse under complexity and why executable, AI-native system models are becoming unavoidable.

1. Before Dalus, you worked on brain–computer interfaces, translating noisy biological signals into precise machine actions. When you look at modern systems engineering today, do you see requirements as a similar problem of human intent failing to become machine-executable reality? How did that background shape how you think about engineering tools?

Sebastian Völkl: Yes. Requirements are intent expressed in a lossy medium, just like neural signals were intent expressed through messy biology. In BCI work you’re looking at something real and meaningful that’s buried under noise, drift, and context. The job is to turn that into reliable machine action with guardrails. 

Systems engineering has the same shape. Requirements are usually the best available representation of what people want, but they’re written in prose and documents that aren’t built for execution. When you try to turn that into architecture, interfaces, and verification, the same failure modes show up: ambiguity, miscalibration, unspoken assumptions, and brittleness when things change. 

That background made me opinionated about tools. The core job isn’t documenting reality, it’s making intent operational. A good tool behaves more like a signal-processing pipeline than a filing cabinet. It makes semantics explicit, supports continuous validation, and catches drift early. If the tool can’t preserve intent as the system evolves, it’s just a nicer way to lose information.


2. Many founders in systems engineering come from decades inside aerospace or defense incumbents. You entered as an outsider focused on information flow. In your view, is today’s engineering crisis driven more by systems becoming inherently more complex, or by the fact that our cognitive tools for managing complexity have barely evolved?

Sebastian Völkl: Systems are absolutely getting more complex, but the bigger problem is that our cognitive tooling hasn’t kept up. More software, more connectivity, more cross-domain coupling, more safety and regulatory constraints, more stakeholders, faster timelines. That’s real. But what’s striking is how much of the workflow still revolves around PDFs, slides, and spreadsheets. Those artifacts are easy to circulate, but they’re hard to compute on. We’re managing graph-shaped reality with linear documents.

So if I have to pick the main driver, it’s the mismatch between the complexity of modern systems and the way we represent system knowledge. Complexity is inevitable. Losing information to our tooling is optional.


3. When large engineering projects fail, what usually breaks first: documentation, communication between teams, or trust in the system itself? Why have document-driven workflows survived for so long despite repeated failures?

Sebastian Völkl: Trust breaks first. Documentation and communication degrade early, but the first fatal moment is when engineers stop believing the “official” representation is authoritative. Once people think the model or spec is outdated, they route around it. They keep private notes, shadow spreadsheets, and side-channel decisions. From there, multiple inconsistent truths emerge and integration turns into negotiation instead of engineering.

Document-driven workflows survive because they feel legible and controllable. They also map well to how responsibility gets distributed in big organizations. Delivering a document is defensible even if the system fails later. And historically, the alternatives were painful. Traditional MBSE often required specialists, heavy overhead, and still didn’t close the loop into verification, so teams stuck with what they knew even when it failed.


4. You’ve spoken about how much elite engineering time is wasted maintaining spreadsheets and static artifacts. From a talent perspective, do you see legacy tools as a hidden bottleneck on innovation by consuming cognitive bandwidth rather than enabling judgment?

Sebastian Völkl: Yes. Legacy tools aren’t just inefficient, they’re a cognitive tax, and it’s one of the biggest hidden bottlenecks on innovation. The real tragedy isn’t only time wasted, it’s who wastes it. Your most experienced engineers end up acting like librarians. They reconcile versions, maintain spreadsheets, and update artifacts to satisfy process. That’s time and attention that should be going into synthesis and judgment.

It also changes behavior. When keeping artifacts current is expensive, people postpone updates. Then the artifacts become less trusted. Then everyone routes around them. And eventually the real system knowledge lives in individuals’ heads and private files. That’s when organizations get fragile.


5. Dalus is built natively on SysML v2. For readers unfamiliar with the standard, why is the shift from diagram-centric models to structured, textual system definitions so important? Do you see SysML v2 as the first moment where system architectures become truly AI-readable?

Sebastian Völkl: Diagram-first modeling is great for communication, but it’s ambiguous by default. Two people can look at the same diagram and silently assume different constraints, different interface semantics, different behavior. Textual, structured definitions force explicitness. They’re diffable, versionable, testable, and closer to the way software actually evolves.

For AI, that shift is foundational. If the system knowledge is structured and semantically meaningful, the AI can query it, reason over it, transform it, and validate it. If it’s mostly pictures and prose, AI is forced into interpretation, and interpretation is where safety and correctness degrade. SysML v2 is one of the first broadly accepted standards that actually wants to be computed on. It treats the model as the underlying definition, not the picture of it. That’s the prerequisite for system architectures to become truly machine-readable.
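To make "wants to be computed on" concrete, here is a minimal sketch, in plain Python rather than SysML v2's own textual notation, of what structured, typed definitions enable; every name here is invented for illustration, not drawn from Dalus or any real tool. Once parts, ports, and connections are data instead of pictures, consistency checking becomes a query:

```python
from dataclasses import dataclass, field

# Illustrative stand-ins for structured model elements. This is plain
# Python, not SysML v2 syntax, and not any real tool's API.

@dataclass(frozen=True)
class Port:
    name: str
    signal_type: str  # e.g. "28V_DC", "CAN"

@dataclass
class Part:
    name: str
    ports: list = field(default_factory=list)

@dataclass(frozen=True)
class Connection:
    source: tuple  # (part_name, port_name)
    target: tuple

def check_connections(parts: dict, conns: list) -> list:
    """Flag connections whose endpoint signal types disagree.

    Because the model is structured data, this is a check the tooling
    runs automatically, not a human squinting at a diagram."""
    issues = []
    for c in conns:
        src = next(p for p in parts[c.source[0]].ports if p.name == c.source[1])
        tgt = next(p for p in parts[c.target[0]].ports if p.name == c.target[1])
        if src.signal_type != tgt.signal_type:
            issues.append(
                f"{c.source} -> {c.target}: {src.signal_type} != {tgt.signal_type}"
            )
    return issues

parts = {
    "battery": Part("battery", [Port("out", "28V_DC")]),
    "flight_computer": Part("flight_computer", [Port("pwr_in", "12V_DC")]),
}
conns = [Connection(("battery", "out"), ("flight_computer", "pwr_in"))]

for issue in check_connections(parts, conns):
    print("INCONSISTENT:", issue)  # caught mechanically, at edit time
```

The same structure is what makes definitions diffable and versionable: a changed signal type shows up as a one-line diff, not a redrawn diagram.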


6. Legacy MBSE platforms are deeply tied to SysML v1 and decades of technical debt. Do you believe the industry is entering a forced tooling reset similar to what software experienced with cloud-native development? What happens to organizations that delay this transition?

Sebastian Völkl: I do think we’re entering a tooling reset, not because it’s trendy, but because the old stack can’t meet the new constraints. Software went through this with cloud-native and CI/CD because the workflow needed to match how software evolves. Systems engineering is hitting the same wall. Programs want faster iteration, continuous verification, and distributed collaboration, but the toolchain still assumes slow, document-based handoffs.

Organizations that delay the transition pay compounding costs. They accumulate model debt the way software accumulates technical debt. They struggle to hire and retain people who expect modern workflows. They end up doing compliance theater: lots of artifacts, not enough truth. And they lose schedule in integration because they can't validate continuously. Delaying feels safe in the short term, but once competitors have tighter feedback loops, “conservative” becomes “slow,” and slow becomes existential.


7. Dalus positions its AI as system-aware, capable of generating architecture and detecting inconsistencies. When an engineer asks the AI to modify a system, how do you ensure it respects physical laws and safety constraints rather than producing plausible but invalid results?

Sebastian Völkl: You don’t “trust the AI.” You constrain it. If you let AI produce plausible output and then treat that as engineering truth, you’ll get plausible nonsense. The right model is that AI proposes changes inside a constrained system, and the system enforces correctness.

That starts with typed, structured models so the AI isn’t editing freeform prose. Then you need constraint checking so interface contracts, requirements constraints, and domain rules can be evaluated automatically. You need simulation and verification hooks so changes are run through analysis instead of being accepted because they look reasonable. You keep edits scoped so engineers ask for bounded modifications with clear objectives. And you keep human sign-off with traceability, especially in safety-critical contexts. The goal is to make invalid but plausible changes hard to merge into the authoritative model without being caught.
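As a rough illustration of that proposal-and-gate loop, here is a minimal Python sketch with invented names and a toy mass-budget rule standing in for real interface contracts and domain checks; it is not Dalus's implementation. The AI is treated as an untrusted generator of scoped changes, and deterministic checks decide whether a change is even eligible for human review:

```python
from typing import Callable

# Hypothetical shapes for illustration; none of this is Dalus's
# actual implementation or API.
Model = dict
Change = dict
Check = Callable[[dict], list]  # returns violation messages

def gate(model: dict, change: dict, checks: list) -> tuple:
    """AI proposes; the system enforces.

    The scoped edit is applied to a candidate copy, and every
    registered check (interface contracts, requirement constraints,
    domain rules) runs before a human ever sees the change."""
    candidate = {**model, **change}
    violations = [msg for check in checks for msg in check(candidate)]
    return (not violations, violations)

# Example domain rule: total mass must stay within the requirement.
def mass_budget(m: dict) -> list:
    total = sum(m["component_masses_kg"].values())
    limit = m["mass_limit_kg"]
    return [f"mass {total} kg exceeds limit {limit} kg"] if total > limit else []

model = {"mass_limit_kg": 100.0, "component_masses_kg": {"bus": 60.0}}
proposal = {"component_masses_kg": {"bus": 60.0, "payload": 55.0}}  # AI-suggested

ok, violations = gate(model, proposal, [mass_budget])
print("eligible for human review" if ok else f"rejected automatically: {violations}")
```

The design point is that the plausible-looking proposal never touches the authoritative model: it is evaluated on a candidate copy and rejected mechanically before any reviewer spends attention on it.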


8. By directly coupling system models with Python or MATLAB analysis, Dalus closes the loop between requirements and verification. In practice, how does this change how engineers make decisions day to day? What breaks when validation is no longer delayed or manual?

Sebastian Völkl: Closing the loop changes the daily cadence from staged gatekeeping to continuous decision-making. When the model is directly coupled to analysis in Python or MATLAB, validation stops being something you do at the end and becomes something that happens as you design. Engineers iterate in smaller steps because feedback is cheap. Requirements stop being static promises and start being live constraints. Trade studies become reproducible rather than slide-based folklore. Teams argue less about whose spreadsheet is right and more about what the model implies.
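A toy example of what "requirements as live constraints" can look like in plain Python; the link-budget analysis is deliberately simplified to free-space loss only, and every name is invented rather than drawn from Dalus. The point is the shape of the loop: the requirement is executable, so it re-evaluates the moment the design changes:

```python
import math

# Illustrative only: a requirement expressed as an executable check
# over an analysis function, instead of a sentence in a document.

def link_margin_db(d: dict) -> float:
    """Toy RF link-budget analysis (free-space path loss only)."""
    fspl = (20 * math.log10(d["range_km"] * 1e3)
            + 20 * math.log10(d["freq_hz"]) - 147.55)
    received = d["tx_power_dbm"] + d["tx_gain_db"] + d["rx_gain_db"] - fspl
    return received - d["rx_sensitivity_dbm"]

# The requirement is a live constraint, re-evaluated on every edit.
REQUIREMENTS = [("link margin >= 3 dB", lambda d: link_margin_db(d) >= 3.0)]

def on_design_change(design: dict) -> None:
    for name, holds in REQUIREMENTS:
        print(("PASS" if holds(design) else "FAIL") + ":", name)

design = {"range_km": 600, "freq_hz": 2.2e9, "tx_power_dbm": 33,
          "tx_gain_db": 6, "rx_gain_db": 35, "rx_sensitivity_dbm": -85}
on_design_change(design)    # feedback arrives as you design...
design["range_km"] = 1200   # ...so this edit fails immediately,
on_design_change(design)    # not months later at integration
```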

What breaks when validation isn’t delayed or manual is a lot of comforting illusion. You can’t hide behind “we’ll validate later.” In the short term that feels uncomfortable because inconsistencies surface early. In the long term it’s exactly what you want, because it forces reality checks when changes are still cheap.


9. Traditional MBSE tools often create a small group of specialists who control the model. Dalus pushes toward real-time collaborative modeling. How do you balance broad participation with the rigor required in safety-critical systems?

Sebastian Völkl: Collaboration and rigor aren’t opposites, but you need governance designed for participation. Traditional MBSE often centralizes control because the tooling is fragile and the model is hard to maintain. You get a small group that “owns” the model while everyone else submits requests. That creates a kind of rigor, but it also creates bottlenecks and loss of context.

Real-time collaboration can work in safety-critical environments if you treat the model like code. Permissions and roles matter. Not everyone can modify everything. Review workflows and baselines matter. Promotion to an authoritative baseline requires checks and approvals. Automated validation gates matter so breaking changes are caught immediately. Traceability matters so you always know who changed what, why, and what evidence supports it. Participation should be broad, but authoritative change should be earned through validation.
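One way to sketch "treat the model like code" governance, again with hypothetical roles and functions rather than any real platform's API: anyone can author a proposal, but promotion to the authoritative baseline requires passing validation, an authorized approver, and a traceable record:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Governance sketch with invented names; not any real platform's API.

@dataclass
class Proposal:
    author: str
    description: str
    passed_validation: bool  # set by automated gates, never by hand

@dataclass
class Baseline:
    version: int = 0
    audit_log: list = field(default_factory=list)

APPROVERS = {"chief_engineer", "lead_systems"}  # roles that may promote

def promote(baseline: Baseline, proposal: Proposal, approver_role: str) -> bool:
    """Broad participation, gated authority.

    Anyone may author a proposal, but promotion to the authoritative
    baseline requires passing validation AND an authorized approver,
    and every decision is recorded for traceability."""
    if approver_role not in APPROVERS or not proposal.passed_validation:
        return False
    baseline.version += 1
    baseline.audit_log.append({
        "version": baseline.version,
        "author": proposal.author,
        "approved_by": approver_role,
        "what": proposal.description,
        "when": datetime.now(timezone.utc).isoformat(),
    })
    return True

b = Baseline()
p = Proposal("junior_engineer", "widen radiator thermal margin", True)
print(promote(b, p, "chief_engineer"))  # True: validated and authorized
print(promote(b, p, "intern"))          # False: broad input, gated authority
```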


10. Your customers operate in highly conservative environments like aerospace, defense, and energy. What is the hardest barrier to overcome when asking a chief engineer to trust an AI-assisted, cloud-native platform with their core system knowledge?

Sebastian Völkl: The hardest barrier is risk perception tied to accountability, not technology. Chief engineers don’t fear tools, they fear unbounded uncertainty. In aerospace, defense, and energy, a mistake isn’t just a bug. It’s an investigation, a grounding, a hearing, and sometimes lives.

So the real objection isn’t “AI is scary.” It’s “If we adopt this, can I still defend the engineering truth?” Cloud and AI raise questions about data control, auditability, reproducibility, baselines, exportability, and compliance. Winning trust means answering those questions with mechanisms, not marketing: security posture, audit logs, deterministic pipelines where possible, validation gates, and workflows that preserve accountability. The pitch isn’t “trust the AI.” It’s “trust the process that constrains the AI.”


11. Given your background in recruiting and team building, do you believe AI-native systems engineering tools will mainly accelerate existing experts, or will they lower the barrier for less-specialized engineers to perform high-level systems work?

Sebastian Völkl: Near term, these tools primarily accelerate experts. The biggest leverage comes from people who know what to ask, what to distrust, and how to interpret results. AI-native systems engineering removes busywork and surfaces inconsistencies faster, which turns experienced engineers into force multipliers.

Long term, it also expands the pool of people who can participate meaningfully. If the system model is legible, queryable, and executable, onboarding gets faster and interdisciplinary collaboration gets safer. It doesn’t magically turn juniors into chief engineers, but it does make system knowledge less tribal, which lowers the barrier for less-specialized engineers to contribute inside clear constraints.


12. Looking ten years ahead, if Dalus succeeds at its deepest ambition, how does the role of the human engineer change? Are we moving from writing requirements toward defining intent and constraints, with AI acting as a true co-designer?

Sebastian Völkl: If Dalus succeeds at the deepest ambition, the human engineer moves up the abstraction stack. Less time authoring and maintaining artifacts. More time shaping intent, constraints, and trade-offs. Humans focus on framing the problem, defining what must be true, and making judgment calls under uncertainty. The system, with AI assistance, handles representation, consistency maintenance, impact analysis, and much of the generative work: scaffolding architecture, suggesting interfaces, surfacing conflicts, and proposing verification paths.

Yes, I think we move from “writing requirements” toward defining intent and constraints, with AI acting as a co-designer. But only if the substrate is right. The future I believe in isn’t AI replacing engineers. It’s AI making the system model a living, executable object and then helping humans explore design space safely. The human stays accountable for truth and risk. The machine makes that accountability scalable.

Editor’s Note

This interview surfaces a structural failure at the requirements layer, where human intent consistently fails to survive system complexity.
