
Alexander Le Maitre on Attritable Systems and Agentic Warfare

In this interview, Alexander Le Maitre, Founder of Seeing Systems, discusses the shift from high-cost, platform-centric warfare toward attritable, modular systems optimized for mission reliability and autonomy.

Modern warfare is increasingly shifting from exquisite, high-cost platforms toward attritable systems optimized for precision at scale. From your background in EOD hardware, where failure is not an option, how did you reconcile extreme reliability with the idea of building systems that are intentionally disposable?

My background, even going back to non-military projects, ingrained a deep respect for reliability because lives were directly dependent on that hardware. That discipline remains, no matter what we make – because at the end of the day we’re building platforms that will be handed to our troops who will be asked to perform in the most intense of environments.

The shift isn’t away from reliability; it’s an adjustment in what we’re optimizing for. Previously, I would optimize for durability and repeat survivability. In attritable platforms, we optimize for mission reliability within tight cost constraints and for production scalability.

As a company, that means we’re disciplined about defining mission life, allocating margin where it matters, and resisting the temptation to overbuild for scenarios that don’t drive mission success. Reliability must still be a top priority but now it’s achieved through scalable, cost-effective execution rather than platform preservation.

Where a single unit might be extremely reliable, many disposable units can, taken in aggregate, be just as reliable at completing the mission.

Electronic warfare has turned GPS denial and communications disruption into baseline conditions rather than edge cases. In that environment, what does agentic autonomy enable that traditional automation does not? How does your system continue to function tactically when external signals degrade or disappear entirely?

Everyone’s interpretation of autonomy is slightly different, but this is how I see it: traditional automation is, for the most part, instruction-oriented. It executes predefined routes and responses. 

Our system is goal-oriented rather than instruction-oriented. Instead of following fixed waypoints, it operates against mission intent such as maintaining ISR coverage, shadowing a target, or looking for anything interesting. The key is that it can dynamically re-plan in real time. If jamming increases, a route becomes obstructed, or a target maneuvers unexpectedly, the system adapts its geometry and behavior to preserve the objective. If it’s totally jammed, it continues by itself, but it will try to regain connection to the operator.

Most importantly, it is designed to operate on degraded and uncertain inputs. Rather than requiring clean signals, it continuously updates confidence levels and adjusts behavior accordingly. I should mention that this is all true for flight control – any instruction that would require an offensive action still requires a human-in-the-loop.
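To make the idea of operating on degraded inputs concrete, here is a minimal sketch of confidence-based behavior selection. It is purely illustrative – the class names, thresholds, and behaviors are hypothetical, not Seeing Systems’ actual flight-control logic – but it shows the principle of continuously mapping an uncertain link estimate to behavior rather than assuming a clean connected/disconnected state.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Behavior(Enum):
    FOLLOW_OPERATOR = auto()   # link healthy: accept live tasking
    CONTINUE_MISSION = auto()  # link degraded: pursue last known intent
    REGAIN_LINK = auto()       # link lost: keep the mission, seek reconnection

@dataclass
class LinkState:
    confidence: float  # 0.0 (fully jammed) .. 1.0 (clean signal)

def select_behavior(link: LinkState) -> Behavior:
    """Map a continuously updated link-confidence estimate to a behavior,
    instead of treating connectivity as a binary condition."""
    if link.confidence >= 0.7:
        return Behavior.FOLLOW_OPERATOR
    if link.confidence >= 0.2:
        return Behavior.CONTINUE_MISSION
    return Behavior.REGAIN_LINK
```

The key design choice is that every confidence level maps to *some* defined behavior, so jamming degrades the system gracefully instead of halting it.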

Seeing Systems often frames itself as a response to platform obsolescence rather than as a drone manufacturer. Can you walk through how your hardware modularity is designed at the interface level? What were the hardest constraints in making sensors, compute, communications, and payloads genuinely swappable under combat conditions?

Yes, we built the platform from the ground up to address rapid obsolescence, so modularity is at the core of our systems – and we built them in collaboration with operators in the UK armed forces.

At the hardware level, sensors, compute, communications, and payloads connect through standardized interfaces. Power is regulated and abstracted so new modules don’t require redesigning the distribution system, and data runs through defined interface layers so autonomy and flight control are decoupled from specific hardware drivers.

That means that when technology advances, whether in sensing, radios, or onboard compute, we can upgrade subsystems without replacing the entire platform.
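The decoupling described above resembles programming against an interface rather than a driver. The sketch below is a hypothetical illustration – the module names are invented, and real hardware abstraction layers are far richer – but it shows how autonomy code that depends only on a defined interface lets subsystems be swapped without touching the rest of the platform.

```python
from abc import ABC, abstractmethod

class SensorModule(ABC):
    """A defined interface layer: autonomy code depends on this
    contract, not on any specific vendor driver."""

    @abstractmethod
    def read_frame(self) -> bytes: ...

class LegacyEOCamera(SensorModule):
    def read_frame(self) -> bytes:
        return b"eo-frame"

class UpgradedIRCamera(SensorModule):
    # A newer module drops in without changes to autonomy or flight control.
    def read_frame(self) -> bytes:
        return b"ir-frame"

def autonomy_step(sensor: SensorModule) -> int:
    # Autonomy sees only the interface, so modules are genuinely swappable.
    return len(sensor.read_frame())
```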

The hardest constraints were coupling integration and survivability. Modularity increases complexity at the software level, so we had to enforce strict interface discipline to ensure different module combinations work reliably. On the hardware side, making components genuinely swappable while remaining rugged, sealed, and reliable under combat conditions was non-trivial. 

The result is a system designed to evolve over time rather than become obsolete in a single procurement cycle. I’d like to see our systems still serving as an effective solution five or more years from now – something I don’t foresee the current iteration of combat FPV drones delivering.

Your modular architecture appears to decouple expensive compute and sensing from airframes that are expected to be lost. Is it fair to describe this as applying a microservices mindset to physical warfare? How does that change how militaries should think about loss, replacement, and resilience?

It’s a fair analogy. Traditionally, military platforms bundled sensing, compute, payload, and more into a single expensive asset. When that platform is lost, you lose everything.

Our architecture deliberately decouples high-value components from airframes that are expected to be attritable. The airframe becomes a commodity layer, while mission-critical capability is modular and selectively deployed.

That changes how militaries think about loss. Instead of treating every platform as a capital asset that must be preserved, they can tailor cost to mission intent. For high-risk, one-way missions, you field minimal, low-cost configurations. For recoverable ISR or precision missions, you attach higher-end modules.

Resilience shifts towards absorbing loss intelligently. Replacement becomes a matter of swapping modules or airframes rather than rebuilding a fully integrated system.

In that sense, it does apply a microservices mindset: capability is distributed, composable, and not tightly bound to a single platform lifecycle. Orders can also be tailored to procurement needs by adjusting their composition, and we can provide continually up-to-date “upgrade packages” at a significantly lower cost than complete system replacement.

We can even offer this as a subscription, so the end-user doesn’t need to do the heavy lifting of keeping their systems constantly updated.

Traditional drone operations still suffer from the single pilot bottleneck. In your system, how does the operator’s role change when commanding multiple autonomous units? At what point does command itself become a software abstraction rather than a human task?

One operator actively flying one system is cognitively taxing, and it creates a scaling bottleneck – I like to call it the one-pilot problem. For as long as one pilot can only control one drone, the only way to scale is to throw more people at the problem.

In our architecture, the operator shifts from direct nuts-and-bolts control to mission command. Instead of piloting, they assign intent through a drag-and-drop mission interface or an agentic, human-language layer on their ground control station. Commands are things such as “patrol this area”, “follow this target”, “strike this asset”, “see if there are any people near the radio mast”. The system handles navigation, deconfliction, formation logic, and execution autonomously after the human command is given.

When commanding multiple units, control becomes supervisory rather than manual. The operator intervenes only when a decision point requires human judgment. 

For example, current FPV pilots need to control the throttle to control altitude. For most of a mission that is a dangerous waste of human attention, and it can be trivially automated.

At scale, command does indeed become a software abstraction. The autonomy layer would manage the swarm coordination, role assignment, and dynamic task distribution whether that’s multi-role swarms or mothership-style deployment. The human defines objectives and makes the important decisions across a swarm; the system handles execution.

That solves our one-pilot problem and becomes an almost endlessly scalable solution.

You’ve worked closely with frontline units, including Royal Marine Commandos, and iterated based on live deployments. Can you share a concrete example where real-world feedback fundamentally altered your system design, not just refined an existing feature?

One of the biggest lessons from working closely with frontline units was how quickly complexity becomes failure under stress. If something can go wrong, it will! An obvious insight, but the things people get wrong aren’t always obvious. One small example: our users often attached replacement propellers the wrong way round, and time was wasted while they debugged the drone. In response, we designed an attachment mechanism that makes it impossible to attach a propeller the wrong way. We want to make invalid states unrepresentable – a lesson borrowed from software engineering.
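“Make invalid states unrepresentable” is a well-known principle from typed functional programming, and the keyed propeller mount is its mechanical analogue. The sketch below is a hypothetical software illustration (not Seeing Systems’ code): the only way to construct a mounted rotor is through a check that rejects a mismatched propeller, so the “wrong way round” state never exists downstream. In a statically typed language the mismatch could even be ruled out at compile time.

```python
from dataclasses import dataclass
from enum import Enum

class Handedness(Enum):
    CW = "clockwise"
    CCW = "counter-clockwise"

@dataclass(frozen=True)
class Propeller:
    handedness: Handedness

@dataclass(frozen=True)
class MountedRotor:
    handedness: Handedness

@dataclass(frozen=True)
class MotorMount:
    handedness: Handedness

    def attach(self, prop: Propeller) -> MountedRotor:
        # The sole constructor path for a mounted rotor: a mismatched
        # propeller can never yield a valid MountedRotor.
        if prop.handedness is not self.handedness:
            raise ValueError("propeller does not fit this mount")
        return MountedRotor(self.handedness)
```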

That fundamentally shifted our design philosophy toward usability and reduction. Instead of asking what features we could add, we started asking what we could remove. We stripped workflows down to the lightest possible interaction set required to achieve robust mission success.

With the Commandos, the most valuable insights didn’t just come from formal feedback – they came from watching how the operators adapted the system in real time, the questions they asked, and where friction emerged. That directly influenced how we now simplify mission configuration, and it strengthened autonomy so the system absorbs complexity rather than pushing it onto the operator.

In Ukraine, we’re seeing similar lessons around rapidly changing EW conditions. That reinforced the importance of modularity: being able to swap communication modules depending on the front-line environment has allowed platforms to remain relevant rather than become obsolete. Systems that might otherwise have been sidelined can be reconfigured and redeployed.

The bottom line is design for reality, not ideal conditions.

Seeing Systems is sometimes compared to Anduril, occasionally jokingly as better banter, worse weather. Beyond culture, where do you deliberately diverge from large defense technology incumbents in how you design, deploy, and evolve systems under real combat timelines?

We’re intentionally lean. That means the same people designing the system are often the ones deploying it, integrating it, and receiving feedback directly from operators. There’s no long translation chain between engineering and the field.

Large incumbents tend to design around multi-year acquisition cycles. We design around real combat timelines. That changes priorities: speed of iteration, modular upgrades, and rapid reconfiguration matter more than locking in a platform for a decade.

We also diverge in how tightly we integrate. Rather than delivering a closed system, we work deeply with units to configure capability to specific operational needs – whether that’s EW adaptation, autonomy tuning, or mission-specific modular setups. We use the “forward deployed engineer” model and try to maximise how “forward” our engineers are, so we can have the tightest possible feedback cycles.

In short, we optimize for responsiveness and evolution under pressure, not just scale and program longevity.

Lifecycle cost reduction is a core claim of your platform. In practice, does that efficiency come more from durability, from modular upgrades that extend platform life, or from changing procurement assumptions altogether? How receptive have defense buyers been to this economic logic?

I’d say the biggest one comes from the modularity. At a procurement level, the largest savings come from keeping systems operational and relevant for longer, which modularity enables. Rather than forcing customers into full platform replacement cycles every few years, modular architectures allow incremental upgrades – whether that’s new sensors, software, or communications packages. This extends platform life significantly and protects the original capital investment while ensuring capability remains current.

In operational environments, durability plays an equally important role. Our systems are designed to withstand repeated use in harsh training and field conditions without frequent downtime. That resilience directly reduces maintenance cycles, spare part consumption, and the logistical burden typically associated with fragile or over-engineered platforms. Repeatability in both manufacturing and servicing further lowers lifecycle costs, as standardized components and streamlined maintenance processes reduce time and expense across the fleet.

Finally, we challenge traditional procurement assumptions by deliberately avoiding unnecessary complexity. Rather than developing bloated platforms packed with “nice-to-have” features that inflate R&D costs and long-term sustainment expenses, we build precisely what has been requested – mission-focused, modular, and scalable systems. This disciplined approach reduces upfront development costs and prevents downstream servicing and upgrade complications.

Defense buyers have responded positively to this economic logic. Many militaries are still early in their adoption of drone ecosystems and are actively reassessing how they procure and sustain these capabilities. Our approach – prioritizing longevity, modularity, and lean design – has been well received and often praised for offering a more sustainable and strategically aligned way of thinking about drone systems procurement.

Your co-founder Matthew’s background at Jane Street brings a culture of extreme precision and low-latency systems. How did that mindset shape your approach to agentic autonomy, particularly in environments defined by partial information and time pressure?

There are a surprisingly high number of similarities between the two worlds. Jane Street rightfully cares a lot about correctness in their systems, which is challenging when the real world is messy and full of uncertainty. That’s something fundamental that we share.

Agentic autonomy is a way to make it possible for the humans to focus on the things that really need their attention, and increases the amount that can be done by one individual. That’s another thing that we share: Jane Street, like many other companies, is working hard to leverage the most cutting-edge technology in order to focus human attention where it has the most impact. 

Ultimately it’s all about efficiency. Efficient use of capital, efficient use of human attention, efficient use of time. Trading firms think about many of the same things as us – their goal is making profitable trades while making markets more liquid, and ours is empowering those who need defending. 

Autonomous strike and swarm-capable systems raise unavoidable ethical and governance questions. How do you define the boundary between machine autonomy and human authority in your platform? Which decisions are structurally non-delegable, regardless of operational urgency?

Absolutely – it’s a really important question, and we’ve done a lot of thinking about it. The way I see it, autonomous systems, built well, have the capacity to be safer than their conventional counterparts.

Autonomy in our platform governs navigation, coordination, deconfliction, target tracking, and swarm behavior. It reduces the technical burden and allows operators to command intent rather than manually pilot systems.

However, any action such as target confirmation and strike authorization is non-delegable. The system cannot independently initiate lethal effects. Human authorization is structurally required, regardless of operational urgency.

So just to be clear here – the autonomy layer can recommend or present options, but it cannot execute kinetic actions without explicit human confirmation.
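The structural (rather than procedural) nature of that requirement can be illustrated in code. This is a hypothetical sketch, not the platform’s actual authorization layer: the execution path takes a human-issued authorization object as a mandatory parameter, so there is simply no call signature through which autonomy alone can trigger a kinetic action.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StrikeProposal:
    # The autonomy layer may construct and present proposals...
    target_id: str

@dataclass(frozen=True)
class HumanAuthorization:
    # ...but only an operator-facing layer issues authorizations.
    operator_id: str
    target_id: str

def execute_strike(proposal: StrikeProposal,
                   auth: HumanAuthorization) -> str:
    """Execution structurally requires a human authorization object;
    there is no variant of this function that accepts a proposal alone."""
    if auth.target_id != proposal.target_id:
        raise PermissionError("authorization does not match proposal")
    return f"engaging {proposal.target_id}"
```

The point of the design is that the constraint lives in the interface itself, not in a runtime policy check that urgency might tempt someone to bypass.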

We believe autonomy should compress technical complexity, not moral responsibility. The machine handles execution mechanics. Authority remains human.

The scary part is what happens when your opponent removes their human in the loop. That could give them an edge, leaving you with a difficult choice: either you remove your own human in the loop, or you accept being slower to act than your adversary. This is not a problem we’re facing today, and we don’t have the answers, but my hope is that the world instead moves towards a “drone-vs-drone” kind of warfare, where the human cost is greatly reduced.

Defense procurement has historically favored long cycles and fixed requirements, while modern conflict demands rapid iteration. As a YC-backed company operating inside this tension, how do you balance venture-scale speed with the trust and accountability required for national security systems?

Speed and national security accountability don’t have to be in conflict, but balancing them does require careful planning and discipline.

We move quickly in iteration cycles, but we separate iteration from validation. Rapid prototyping happens early; fielding happens only after rigorous testing and operator validation. We do that quickly too, but never rushed.

A key part of balancing that tension has been building trusted relationships with frontline units. That gives us direct feedback loops while maintaining transparency and accountability in how systems evolve. And, by going straight to the front-line, we can skip a lot of the red tape, and bring tools to those who need them faster.

Looking ahead five years, do you see Seeing Systems evolving into something closer to a defense operating system rather than a collection of vehicles? Beyond aerial drones, how broadly do you believe agentic, modular autonomy can extend across other domains?

Over the next five years, we absolutely see ourselves evolving beyond a collection of vehicles and toward something closer to an autonomous backbone for modern operations. We’re particularly excited about returning to Search and Rescue – where we first focused our efforts before YC – and expanding into the maritime domain. 

Having grown up on a tiny island, around boats and on the sea, maritime operations are second nature to us. In the medium term, we’re specializing in maritime USV integration and maritime drone operations, building systems designed specifically for coastal environments, offshore monitoring, and vessel-based deployment. From there, expanding into lifeboat services, search-and-rescue coordination, and coastguard assistance feels like a natural progression, bringing modular autonomy to more missions where response time and resilience matter.

Longer term, it’s easy to envision us becoming increasingly software-led. Agentic, modular autonomy is not limited to defense or even to drones – it has broad, cross-sector applications. The same autonomous backbone that coordinates aerial and maritime systems can extend into public safety, critical infrastructure, environmental monitoring, and beyond.

In many ways, the vehicles become endpoints in a much larger, intelligent operating system. As autonomy matures, we see our role as enabling that scalable, modular layer – one that can integrate across domains and industries. If autonomous systems are present in a sector, we intend to be close behind, delivering the integration, orchestration, and operational logic that makes them maximally effective.

Editor’s Note

This interview reflects a broader transition in defense strategy from preserving platforms to designing resilient, composable systems that absorb loss and operate under degraded conditions.
