In this interview, Asad Tirmizi, CEO of Trener, explains why industrial automation is shifting from code-defined workflows to model-defined skills, and why factory intelligence now depends less on programming hours than on adaptive performance in production.
You argue that industrial automation is shifting from code-defined workflows to model-defined skills. When programming hours stop being the primary constraint, where does durable economic power move?
Under the traditional approach, an integrator’s value was trapped in the hours they spent hard-coding a single part. In a model-defined world, the value is in the fleet’s performance and productivity gain. Economic power shifts to the platform that can take a thousand messy edge cases from different shops and turn them into a reliable, out-of-the-box skill. The winner is whoever owns the intelligence that makes a robot actually perform better on day 100 than it did on day one.
What specific real-world variable has most frequently broken a model that worked in simulation, and what structural change did that failure force inside Acteris?
Contact forces. Synthetic data generated from simulation or video simply does not capture them accurately. That failure forced us to strengthen our haptics capabilities and invest in better physics solvers.
Replacing deterministic scripts with adaptive models increases flexibility. What certainty or control do manufacturers give up in that trade?
They give up the fixed path. Under the traditional approach, the robot arm moves exactly the same way every time. With an adaptive model, the robot might take a slightly different angle to grab a part because it sensed the part was tilted or there was a pile of chips in the way. Plant managers have to trade the comfort of seeing the same repetitive motion for the certainty that the part actually gets loaded correctly, even when the environment is unstructured. For many of them, that is a welcome trade-off.
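The trade-off can be illustrated with a toy sketch (all names and values here are hypothetical, not Trener's actual stack): a scripted cell replays the same fixed approach pose every cycle, while an adaptive cell computes its approach from the sensed part, so a tilted or shifted part still gets grasped, at the cost of a motion that is no longer identical cycle to cycle.

```python
from dataclasses import dataclass


@dataclass
class PartPose:
    """Where the part actually is, as reported by perception."""
    x: float
    y: float
    tilt_deg: float  # deviation from the nominal fixture orientation


def scripted_approach() -> tuple[float, float, float]:
    """Deterministic: the same approach pose every cycle, regardless of reality."""
    return (100.0, 50.0, 0.0)  # fixed (x, y, approach angle)


def adaptive_approach(sensed: PartPose) -> tuple[float, float, float]:
    """Adaptive: the approach pose tracks the sensed part, so a tilted or
    shifted part still gets loaded correctly."""
    return (sensed.x, sensed.y, sensed.tilt_deg)


# A part sitting 2 mm off nominal and tilted 5 degrees:
part = PartPose(x=102.0, y=50.0, tilt_deg=5.0)
print(scripted_approach())      # (100.0, 50.0, 0.0), would miss the grasp
print(adaptive_approach(part))  # (102.0, 50.0, 5.0), tracks the real part
```

The point of the sketch is the visible difference in motion: the scripted pose is comfortingly repeatable and wrong; the adaptive pose varies cycle to cycle but matches the world.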
You narrowed Trener’s focus to CNC machine tending despite broader applicability. What downside risk did you knowingly accept by choosing discipline over expansion?
I accepted the risk of being misread as a niche player. In this industry, there is a huge temptation to try and solve every low-complexity task at once, like basic warehousing or simple picking. Under the traditional approach, companies spread themselves thin. We did the opposite. We ignored the easy wins to solve the hardest problem first: high-precision metal parts in unstructured, dirty environments. The risk was being dismissed by people looking for quick scale, but we chose the technical high ground. The reality is that if a model can handle the tight tolerances and grit of a CNC cell, it has already mastered the hardest variables in the factory. Everything else becomes a downstream application of that core intelligence.
What was the first production deployment where Acteris proved something non-obvious to you?
It was a chip blow-off step in a machine cell. We thought the big win for AI was going to be the precision of the grip. It wasn’t. The real value was the robot realizing that if metal chips were stuck on the fixture, the part wouldn’t sit right. Instead of just stopping and throwing an error, the robot figured out it needed to hit the jig with another blast of air from a different angle. We didn’t program that specific “if-then” logic. The model just knew the goal was a clean seat and it kept working until it got there.
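The difference between scripted error handling and goal-driven behavior can be sketched roughly as follows (a minimal illustration with hypothetical names; in practice the policy is a learned model, not a hand-written loop): the system checks the goal condition, a clean seat, and keeps acting, varying the blast angle, until the goal is met or attempts run out, rather than raising an error on the first failed check.

```python
import random


def part_seated(chip_level: float) -> bool:
    """Goal check: the part sits flush once residual chips fall below a threshold."""
    return chip_level < 0.1


def blow_off(chip_level: float, angle_deg: float) -> float:
    """Each air blast clears some fraction of the chips; a fresh angle
    reaches spots the previous blast missed (effect modeled randomly)."""
    return chip_level * random.uniform(0.2, 0.6)


def load_part_goal_driven(chip_level: float, max_attempts: int = 5) -> bool:
    """Keep working toward the goal: re-blow from a different angle and
    re-check the seat, instead of stopping on the first failure."""
    angle = 0.0
    for _ in range(max_attempts):
        if part_seated(chip_level):
            return True
        angle = (angle + 45.0) % 360.0  # try a new blast angle
        chip_level = blow_off(chip_level, angle)
    return part_seated(chip_level)


print(load_part_goal_driven(chip_level=1.0))
```

A hard-coded "if error, blast again" rule would cover exactly one failure mode; the goal-conditioned formulation covers any disturbance that leaves the seat check failing.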
If Acteris becomes the intelligence layer across multiple OEM ecosystems, what is the structural vulnerability in that position?
Under the traditional approach, hardware manufacturers keep their ecosystems closed. The value has to be in the interoperability layer. We make a mixed shop floor work better as a whole than any single brand could on its own. We have to be the reason different machines actually talk to each other and share intelligence.
You separated T-Labs from deployment teams. Where has that organizational split created friction, and what nearly failed because of it?
The split was designed to create a productive tension between theoretical capability and shop-floor reality. Early on, this gap forced a critical realization: a model that performs in a clean lab is only half the solution. Our deployment teams pushed back on the researchers because high-level intelligence is useless if it cannot meet the rigorous cycle times of a high-volume production line. This internal stress test is what actually hardened our stack. It led to a non-negotiable standard where speed and edge hardware efficiency are baked into the research phase from day one. Solving for latency as aggressively as accuracy made the system useful beyond the lab and fast enough for real factory deployment.
System integrators historically monetize programming hours. When those hours compress, what must replace that value for the ecosystem to remain economically stable?
Instead of billing for 500 hours of custom code, they charge for the speed of the deployment and a guaranteed production rate. They move from being guys who write code to being “Fleet Architects.” Their money will come from the ability to manage ten times as many cells with the same number of people.
At what moment did you realize that general AI narratives collapse under industrial uptime requirements?
In the tech world, a 95% success rate is a massive win. In a factory, 95% is a total failure. If a robot misses its mark one out of twenty times, the whole line stops and production targets are missed. Industrial AI has to be reliable, robust, and a little bit boring. It is a different beast than the general AI concepts coming out of Silicon Valley.
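The gap between "a massive win" and "a total failure" is easy to quantify. As a rough illustration (the 15-second cycle time and 8-hour shift are assumed values, not figures from the interview), a 95% per-cycle success rate stops the line roughly every 20 parts, which compounds into dozens of stoppages per shift:

```python
def cycles_between_stops(success_rate: float) -> float:
    """Expected cycles before one failure (geometric distribution mean)."""
    return 1.0 / (1.0 - success_rate)


def stops_per_shift(success_rate: float, cycle_s: float = 15.0,
                    shift_hours: float = 8.0) -> float:
    """Expected line stoppages over one shift at the assumed cycle time."""
    cycles = shift_hours * 3600.0 / cycle_s
    return cycles * (1.0 - success_rate)


for rate in (0.95, 0.999, 0.99999):
    print(f"{rate}: one stop every {cycles_between_stops(rate):.0f} cycles, "
          f"{stops_per_shift(rate):.2f} stops per 8h shift")
```

Under these assumptions, 95% means roughly 96 stoppages per shift, while "boring" industrial-grade reliability (99.999%) means a stoppage every few shifts. That is the difference between a demo metric and an uptime requirement.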
As robots gain higher levels of autonomy, where do you draw the boundary between machine initiative and human accountability?
The human sets the intent; the machine handles the variation. The robot should be smart enough to figure out the best path to grab a part or how to retry a grip if it slips. But the human always defines the goal and the safety limits. We look at the human as the supervisor of what needs to happen, and the robot as the one who executes the “how” in an unpredictable environment.
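That boundary, human-defined intent and limits, machine-chosen execution, can be sketched as a simple interface (hypothetical names and values, purely illustrative): the supervisor's goal and safety envelope are fixed data, and any action the model proposes is checked against that envelope before it runs.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class TaskIntent:
    """Set by the human supervisor: what must happen, and the hard limits."""
    goal: str                        # e.g. "load part into chuck"
    max_force_n: float               # force envelope the model may never exceed
    workspace_mm: tuple[float, float]


def execute(intent: TaskIntent, proposed_force_n: float) -> str:
    """The model proposes the 'how'; the envelope check keeps accountability
    with the human-defined limits, not with the model."""
    if proposed_force_n > intent.max_force_n:
        return "rejected: outside human-defined safety envelope"
    return f"executing '{intent.goal}' at {proposed_force_n} N"


intent = TaskIntent(goal="load part into chuck", max_force_n=50.0,
                    workspace_mm=(0.0, 800.0))
print(execute(intent, 30.0))
print(execute(intent, 120.0))
```

The design choice the sketch captures: the model is free to vary its path, grip, and retries, but only inside an envelope it cannot modify, so accountability stays with the human who wrote the intent.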
If physical AI scales, which asset compounds fastest over time: model weights, proprietary data, integration depth, or installed base?
Anyone with the installed base will automatically get the remaining three.
When does the shift from program-driven factories to model-driven factories become irreversible, and what does that end state look like?
Manufacturing has traditionally been a static, program-driven process. The shift to a model-driven factory becomes irreversible the moment the unit economics of an adaptive facility outperform the legacy model. The end state is a shop floor where you do not program anything. You show the fleet a new part and the system figures out how to run it. The factory becomes an asset that actually gets more efficient the longer it operates. Once a manufacturer can launch a new product line in days instead of months, the old way of building things is no longer viable.
Editor’s Note
This interview examines a broader shift in industrial automation from deterministic programming toward model-driven systems, where uptime, edge-case handling, and deployment speed matter more than static repeatability.