In this interview, Marco Prata, Co-Founder of Seed Robotics, explains why durability, not precision, remains the main bottleneck in dexterous robotic hands, and why tactile sensing and serviceable hardware matter more than incremental mechanical refinement for real-world deployment.
When building a dexterous robotic hand, what trade-off tends to matter most in real-world deployment: precision, durability, or controllability?
Durability.
In a research environment it is somewhat acceptable that repairs will occasionally be needed if, for example, an actuator fails. In real-world applications that is no longer the case. Users expect a high degree of reliability, similar to what they expect from everyday appliances like a dishwasher.
However, a robotic hand has dozens of moving parts and operates in unstructured environments, which makes achieving that level of reliability much more challenging.
Here humans still have the advantage. When one of our “actuators” breaks down, it usually fixes itself after some time.
Precision is closely intertwined with controllability, and software can improve both over time through remote updates and better models. Durability, on the other hand, depends mostly on hardware and must be designed in from the start — it cannot simply be updated later.
What design decision in your tendon-driven architecture has had the greatest long-term impact on product reliability?
One of the main advantages of tendon-based designs is that they allow the actuators to be moved to locations with more space; in our case, the forearm.
This enables us to use larger actuators that are easier for end users to replace. We consider ease of maintenance itself to be an important contributor to reliability.
Another advantage is that tendon-driven architectures allow us to protect the actuators from external impacts. For example, the fingers in our hands can sustain heavy side impacts, bend out of the way, and snap back into position without damaging the actuators.
The downside of tendons is that they can wear out. This is mostly caused by friction. Any unnecessary friction leads to wear, reduced efficiency, and eventually failure.
For this reason, we worked to almost completely eliminate tendon friction points.
These measures have resulted in a level of reliability that many customers report exceeds their expectations for a tendon-driven design.
The second failure mode is flexion fatigue in the tendons themselves, but this usually appears only after hundreds of thousands of cycles, which we consider acceptable for our target market. And if tendons do eventually fail, they can be replaced by the user.
How do you determine when a hardware platform is mature enough to support repeatable AI research rather than experimental prototyping?
It mostly comes down to operational cycles and repeatability.
AI research, especially reinforcement learning, requires extremely large numbers of interaction cycles, often several hundred thousand or more. If a platform cannot sustain that number of cycles without failure, experiments become difficult to reproduce.
For a platform to be useful for AI research, it must be able to run for long periods with consistent performance and minimal maintenance.
In manipulation tasks involving fragile or irregular objects, where does tactile sensing make the clearest difference?
A lot of robotic manipulators focus on precision and repeatability. While those metrics have value, they are also relatively easy to optimize to look good in marketing materials.
Humans are successful at grasping not because of extreme positional precision, but because of rich tactile feedback. We do not pre-plan closing our fingers to a specific angle with decimal-point accuracy. Instead, we close them until we feel an appropriate tactile response from the object.
We also continuously adjust our grasp strategy based on the feedback we receive, for example if the object is heavier, more slippery, or shaped differently than expected.
Tactile sensing allows a robotic hand to adapt to irregular shapes and apply just enough force when handling fragile objects. It enables detection of slip, uneven contact, and subtle pressure changes.
This adaptive capability is where tactile sensing makes the biggest difference.
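The closing strategy described above can be sketched as a simple feedback loop. This is a hypothetical illustration, not Seed Robotics' control code: the `read_force` callback, step size, and threshold are all assumptions. The fingers close in small increments until the measured contact force crosses a threshold, rather than moving to a pre-planned angle.

```python
# Hypothetical sketch of a tactile-feedback grasp loop: close the finger
# in small steps and stop when contact force reaches a threshold.

def close_until_contact(read_force, max_angle_deg=90.0,
                        step_deg=1.0, force_threshold=0.5):
    """Return the finger angle at which the contact force first reached
    the threshold, or max_angle_deg if no sufficient contact occurred."""
    angle = 0.0
    while angle < max_angle_deg:
        angle += step_deg
        if read_force(angle) >= force_threshold:
            return angle
    return max_angle_deg

# Toy sensor model: no contact until 30 degrees, then force rises linearly.
def fake_force(angle):
    return max(0.0, (angle - 30.0) * 0.1)

grip_angle = close_until_contact(fake_force)  # stops at 35.0 degrees
```

The same loop structure extends naturally to slip handling: if the force later drops unexpectedly, the controller resumes closing until contact is re-established.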
Why was it important for you to expose raw tactile data instead of only higher-level force interpretations?
Many of our clients process tactile data using neural networks.
Any preprocessing or interpretation we apply risks removing information that could be useful for downstream models. Because of that, we try to keep the data as close to the sensor output as possible.
The only processing we perform is temperature compensation and calibration, so that sensors behave consistently between different units.
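A minimal sketch of that processing chain, under assumed parameters (the linear drift model, coefficient values, and function name are illustrative, not the actual pipeline): a temperature term is removed first, then a per-sensor offset and gain map raw counts to consistent units.

```python
# Hypothetical sketch of temperature compensation followed by per-unit
# calibration; the signal is otherwise left as close to raw as possible.

def compensate_and_calibrate(raw, temp_c, offset, gain,
                             temp_coeff=0.02, ref_temp_c=25.0):
    """Remove an assumed linear temperature drift, then apply a
    per-sensor offset and gain so units behave consistently."""
    drift_free = raw - temp_coeff * (temp_c - ref_temp_c)
    return (drift_free - offset) * gain

reading = compensate_and_calibrate(100.0, temp_c=35.0,
                                   offset=10.0, gain=0.5)  # 44.9
```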
As sensing expands from fingertip contact to distributed coverage, what system limitation becomes most visible?
Bandwidth.
Machine learning models often benefit from high-frequency tactile data, but maintaining high frame rates becomes harder as the number of sensors grows.
Data throughput, processing pipelines, and communication interfaces all turn into bottlenecks as sensor coverage increases.
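A back-of-the-envelope calculation makes the scaling concrete. The sensor counts, taxel counts, and rates below are illustrative assumptions, not Seed Robotics specifications:

```python
# Rough tactile bandwidth estimate: data rate scales with the product of
# sensor count, taxels per sensor, bytes per taxel, and frame rate.

def tactile_bandwidth_bps(num_sensors, taxels_per_sensor,
                          bytes_per_taxel, rate_hz):
    """Raw data rate in bits per second for a distributed tactile array."""
    return num_sensors * taxels_per_sensor * bytes_per_taxel * rate_hz * 8

# Five fingertips, 24 taxels each, 2 bytes per taxel, at 1 kHz:
fingertips = tactile_bandwidth_bps(5, 24, 2, 1000)   # 1,920,000 bps

# Extending coverage to 50 sensor patches multiplies the rate tenfold:
full_hand = tactile_bandwidth_bps(50, 24, 2, 1000)   # 19,200,000 bps
```

Even these modest assumptions already approach the limits of common serial buses, which is why distributed coverage exposes bandwidth first.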
How do you decide when adding new hardware capability advances research versus increasing integration complexity?
We try to follow a market-led strategy.
We speak frequently with existing and potential users to understand their research goals and feature wish lists. Then we evaluate whether the added capability meaningfully expands experimental possibilities.
If the added complexity and the potential loss of reliability outweigh the benefit for most users, we usually postpone the feature until the technology becomes easier to integrate.
Looking back at earlier versions of your hand designs, what engineering assumption changed the most over time?
The biggest shift has been the importance of rich tactile sensing.
Today it often feels like the number and type of sensors drive research interest more than the purely mechanical features of the hand.
This shifted our focus more toward sensing and control rather than purely mechanical refinement. For that reason, the base hardware model of the hand has remained relatively stable for some time, while we continue to expand and improve the sensing capabilities.
As a hardware lead, how has your definition of “robust design” evolved through real deployment experience?
Initially we thought robust design mostly meant strong mechanical components and reliable actuators.
Over time we learned that robustness also includes serviceability and predictable failure modes. Systems should be easy to repair, easy to recalibrate, and tolerant to imperfect conditions.
For example, we now deliberately leave more internal space between actuators so that a user can replace one without disassembling the entire hand.
It also cannot be something that only a technician in the lab can work on. It should be intuitive enough that someone without prior experience can open it, look at it, and understand how to replace a component.
A robust system is not one that never fails, but one that continues operating, or can be restored quickly, when something eventually does.
What type of user feedback most influenced the direction of your product development?
Feedback from researchers running long-duration experiments has had the biggest influence.
When someone runs reinforcement learning or continuous manipulation experiments for days or weeks, small reliability issues, such as tendons breaking or actuators overheating, quickly become major problems.
This kind of feedback pushed us to prioritize durability, maintainability, and stable sensor behavior across devices.
What capability is still missing in robotic hands that limits broader real-world deployment?
Reliability remains the biggest limitation.
Humans expect tools to work consistently for long periods without maintenance. Robotic hands are still far from that level of dependability.
Until robotic manipulation systems can operate for very long periods with minimal intervention, large-scale real-world deployment will remain limited.
Looking ahead, what development in dexterous manipulation would most expand how robots interact with everyday environments?
At this point I think the biggest improvements will come from the AI side.
Even ignoring reliability limitations, the main challenge is operating effectively in unstructured environments.
We frequently receive requests to integrate anthropomorphic hands into manufacturing environments to take advantage of their increased capabilities. However, these environments are accustomed to simple grippers with one or two degrees of freedom, and their operators often underestimate the complexity of controlling a dexterous hand.
Some companies have developed solutions, but they are usually very specific to their own hardware and operate in closed ecosystems.
It is similar to knowing how to drive a petrol car but having to learn everything again when switching to an electric car.
I feel what we need is a “ChatGPT moment” for robotic hands — a system that is simple to operate and hardware-agnostic. Someone at home, maybe even a child or an elderly person, could ask a robot to perform a task in natural language, and the robot would execute it within the constraints of whatever hand is available, instead of requiring a completely different solution for each hand model.
Editor’s Note
This interview examines the shift in dexterous manipulation from mechanical optimization toward reliability, tactile sensing, and hardware platforms robust enough for repeatable AI research and long-duration use.

