
Mohit Panwar on Why Industrial GenAI Must Stay Deterministic

In this conversation, we spoke with Mohit Panwar, Principal Product Manager for GenAI and ML at Honeywell, about applying generative AI in zero-tolerance industrial environments and how autonomy, safety, and human accountability are balanced in practice.

Q: Your career spans consumer AI at Ford with SYNC 4A, personal agent systems at Lenovo with Qira AI, and now industrial GenAI at Honeywell. When moving from millions of drivers and device users to warehouses and factories, which AI design principles carried over unchanged, and which assumptions had to be fundamentally rewritten to meet industrial-grade constraints?

A: Look, the core principles of AI—making things usable, building trust, and not overwhelming the user—don’t change just because you move from a consumer app to a factory floor. Whether you’re building for millions of drivers or a handful of warehouse workers, the system still needs to be resilient and explainable.

But here’s where it gets messy. In the industrial world, you can’t just assume everything will work perfectly. You have to throw out your old assumptions about error tolerance and high-speed internet. Out there, the “edge cases” are actually the daily reality.

You’re often dealing with spotty connectivity, harsh environments, and workflows where a single mistake can be a disaster. Unlike a social media algorithm, industrial AI needs to be deterministic and auditable. You can’t just have a “black box”; you need to know exactly why a decision was made. That shift forces you to rethink everything—from the models you pick to how you validate the entire architecture.

Q: Honeywell is executing a multi-year transition from traditional automation toward more autonomous systems. From your perspective, how do you define industrial autonomy in practical terms, and what role should generative AI play in that stack? Is it becoming the decision-making layer, or the cognitive interface between humans and machines?

A: When we talk about “autonomy” in a factory or a plant, people often picture robots running the whole show by themselves. But in the real world, it’s not about machines being loners; it’s just about making things less of a headache. It’s about cutting out the guesswork and the constant manual “babysitting” of equipment, while still ensuring a person ultimately calls the shots.

Think of it this way: autonomy is really just a tool that helps us diagnose problems faster and figure out what needs to be fixed first so the work actually stays consistent.

This is where Generative AI fits in. It’s basically the “brain” that sits between the human and the machine. It sifts through all those messy signals and old data logs, turning them into something a person can actually use. But—and this is a big “but”—when it comes to safety, we don’t just hand the keys over to the AI. The final word still belongs to humans and the rigid, “fail-safe” systems we’ve always relied on. At the end of the day, AI is there to help us understand the situation faster, not to take the responsibility off our shoulders.

Q: In the Handheld Device hardware and software portfolio, GenAI is positioned as a strategic lever rather than a feature add-on. How did you decide where GenAI could genuinely change value creation instead of simply enhancing existing automation, and were there use cases you deliberately chose not to pursue?

A: Whenever I’m looking at GenAI, I always ask myself three things: Does this actually change who can do the job, how fast they can get it done, or how much we can trust the result? If it doesn’t move the needle on at least one of those, you’re usually just adding a bunch of tech for no good reason. The real “win” is when the AI can act as a shortcut for years of experience—basically taking all that complicated expert knowledge and making it accessible to someone who hasn’t been on the job for a decade.

Just as importantly, we had to learn when to say “no.” If a task was already working fine with a simple set of rules or old-school automation, we left it alone. There’s no point in using a complex AI model for something that a basic piece of code can do more reliably. We had to be really picky to make sure we were actually solving problems, not just playing with a shiny new tool.

Q: Operational Intelligence has evolved from asset tracking into a system that analyzes device health and operational risk. How are you using generative AI to move customers from reactive maintenance toward prescriptive insights, particularly in reducing No Fault Found returns and unnecessary RMA cycles?

A: Right now, most systems just yell at you when something breaks. But GenAI changes the game by moving us from “it’s broken” to “here’s why it happened.” Instead of just getting a random red light on a dashboard, the AI looks at how the machine was being used, what the sensors are saying, and what the last guy wrote in the service logs. It puts all those pieces together to tell you if you actually need to go out there and fix it, or if it’s just a fluke.

By digging through old technician notes and messy maintenance records, the AI can spot the real problem before you even open your toolbox. This means way fewer trips where you show up only to find nothing is actually wrong. It helps us prioritize the big stuff and gives us more confidence in our decisions, rather than just guessing or letting a machine make the final call for us. With this, we’re moving from a “reactive” to a “proactive” mode.
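To make that shift concrete, here is a minimal sketch of the kind of triage logic described above. The signal names, thresholds, and the keyword check standing in for the GenAI summarization step are all hypothetical illustrations, not the actual Operational Intelligence pipeline:

```python
from dataclasses import dataclass

@dataclass
class DeviceSignal:
    drop_events_7d: int        # accelerometer-detected drops in the last week
    battery_health_pct: float  # reported battery state of health
    scan_failure_rate: float   # failed scans / total scan attempts

def triage_rma(signal: DeviceSignal, service_notes: list[str]) -> str:
    """Rough triage: recommend a repair only when evidence supports it,
    otherwise flag a likely No Fault Found (NFF). Thresholds are
    illustrative placeholders, not validated product values."""
    # Hard telemetry evidence takes priority over free-text notes.
    if (signal.battery_health_pct < 60
            or signal.scan_failure_rate > 0.05
            or signal.drop_events_7d > 3):
        return "RMA recommended: telemetry shows a reproducible fault."
    # A GenAI step would normally summarize messy technician notes here;
    # a simple keyword check stands in for it.
    if any("intermittent" in note.lower() for note in service_notes):
        return "Field check first: symptoms may be usage- or environment-related."
    return "Likely No Fault Found: no supporting evidence, skip the RMA cycle."

print(triage_rma(
    DeviceSignal(drop_events_7d=0, battery_health_pct=92.0, scan_failure_rate=0.01),
    ["screen flickers sometimes"],
))
```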

Q: In zero-tolerance industrial repair scenarios, what architectural choices did you make to prevent hallucinations, and how do approaches like retrieval-augmented generation, domain-constrained models, or smaller specialized models factor into those decisions?

A: When safety is on the line, it’s not about how “smart” or huge the AI model is—it’s about how you build the cage around it. You don’t need a massive, all-knowing bot; you need a system that knows exactly when to shut up. We use guardrails to make sure the AI only talks about facts we’ve already verified. If the data isn’t there to back it up, the system shouldn’t be allowed to say a word.

Usually, a small, specialized model that’s built for one specific job is way better than a giant, general-purpose one. By keeping the “thinking” part of the AI separate from the “fact-checking” part, we make sure it sticks to the truth instead of just making things up to be helpful. In our world, “I don’t know” isn’t a failure—it’s actually the safest and most important answer the system can give.
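A minimal sketch of that retrieval-gated pattern might look like the following. The `retrieve` and `generate` callables, the evidence threshold, and the toy manual passage are placeholders, not Honeywell’s actual architecture:

```python
def answer_with_guardrail(question: str, retrieve, generate, min_evidence: int = 1) -> str:
    """Retrieval-gated generation: answer only from verified passages,
    refuse otherwise. `retrieve` and `generate` are stand-ins for whatever
    retriever and model client are actually in use."""
    evidence = retrieve(question)  # e.g. approved service manuals only
    if len(evidence) < min_evidence:
        # Refusing is the safe default, not a failure mode.
        return "I don't know: no verified source covers this question."
    # Constrain the model to the retrieved passages and require citations.
    prompt = (
        "Answer ONLY from the passages below. If they are insufficient, "
        "reply 'I don't know'. Cite the passage ID for every claim.\n\n"
        + "\n\n".join(f"[{i}] {p}" for i, p in enumerate(evidence))
        + f"\n\nQuestion: {question}"
    )
    return generate(prompt)

# Toy usage with stubbed-out retrieval and generation:
docs = ["[Manual 12-4] Housing screws: 0.6 Nm max."]
reply = answer_with_guardrail(
    "What torque for the housing screws?",
    retrieve=lambda q: docs,
    generate=lambda p: "0.6 Nm max, per passage [0].",
)
print(reply)
```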

Q: SwiftDecoder represents a shift from hardware-centric scanning to software-defined, AI-enhanced vision. As mobile devices gain more powerful NPUs, how do you decide which intelligence belongs at the edge versus in the cloud, and what does generative AI add to machine vision that traditional computer vision could not?

A: Choosing where to run your AI really comes down to how fast you need an answer and how much data you’re willing to move around. If you need a split-second, cost-effective decision—like when a frontline worker scans multiple barcodes—you keep that logic right there on the edge. You don’t want to wait for a signal to travel to a server and back, adding latency and processing cost. But if you’re looking at the big picture, like spotting trends across ten different factories over the last month, that’s where the cloud shines.
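As a rough illustration of that placement decision, here is a toy routing rule; the tier names and thresholds are assumptions made for the sketch, not product behavior:

```python
from enum import Enum

class Tier(Enum):
    EDGE = "on-device NPU"
    CLOUD = "cloud inference"

def place_workload(latency_budget_ms: int, payload_mb: float, needs_fleet_context: bool) -> Tier:
    """Toy placement rule: latency-critical, small-payload tasks (a barcode
    scan) stay on the device; fleet-wide trend analysis goes to the cloud.
    Thresholds are illustrative, not product values."""
    if needs_fleet_context:
        return Tier.CLOUD  # needs data from many sites and devices
    if latency_budget_ms < 200 and payload_mb < 5:
        return Tier.EDGE   # a server round trip would blow the latency budget
    return Tier.CLOUD

print(place_workload(latency_budget_ms=50, payload_mb=0.2, needs_fleet_context=False))
# Tier.EDGE
```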

The real “secret sauce” of GenAI is that it adds a layer of meaning that old-school cameras just didn’t have. Instead of just “seeing” a blur, the AI actually understands what it’s looking at. It doesn’t just flag a problem; it tells you a story—what happened, why you should care, and what your next move should be. It turns raw data into a clear plan of action.

Q: In healthcare and life sciences, AI is used in scenarios where errors can directly impact patient safety. What fail-safe mechanisms and validation layers are required before deploying AI-driven vision systems in these environments, and how do you balance speed, robustness, and absolute reliability?

A: When you’re working anywhere near healthcare, you can’t just “move fast and break things.” You have to build the AI with strict boundaries. We’re talking about multiple layers of double-checking and always—always—having a human in the loop to make the final call. The AI is there to give advice and show its work, not to act as a black box that nobody can explain.

Getting the balance right between speed and safety means taking it slow. We don’t just flip a switch; we start by using the AI as a second opinion, then move to supervised “ride-alongs,” and only consider more autonomy once it’s proven itself. In this world, how the system handles a rare, weird edge case is much more important than how it performs on an average day. If it can’t handle the outliers safely, it’s not ready for the floor.

Q: Honeywell emphasizes Secure by Design and adherence to standards such as ISA 62443, NIST, and ISO 27001. From a product leadership standpoint, how does this security-first mindset constrain or shape GenAI development in OT environments compared with consumer or enterprise IT AI systems?

A: In the OT world—where things can actually blow up or break—security isn’t just a checkbox; it’s the foundation. While a consumer AI might prioritize being “clever” or fast, we have to prioritize being predictable. Following standards like ISA 62443 or NIST means we build GenAI with a “cage” around it. We don’t allow the AI to be creative with safety protocols or send sensitive plant data into a public cloud. Instead, we bring the AI to the data, keeping it local, private, and strictly focused on verified facts rather than “best guesses.”

This security-first mindset changes the AI from a solo pilot into a disciplined co-pilot. In a standard IT office, “good enough” might be fine, but in a refinery or power plant, we have to account for the weirdest edge cases before we even think about deployment. Every suggestion the AI makes has to be traceable back to a manual or a sensor, and a human always keeps their finger on the physical button. We’re essentially trading “unlimited potential” for “total reliability,” making sure the tech works in the mud and heat without ever compromising the perimeter.

Q: Honeywell’s partnership with Google brings Gemini’s multimodal capabilities into the Forge platform. From your perspective, which multimodal combinations of text, images, sensor data, and operational logs are most transformative for front-line industrial use cases, and where do you still see practical limits?

A: The most exciting thing about bringing Google’s Gemini into the Forge platform is that it finally lets us combine “eyes” with “brains.” In the past, a technician had to look at a sensor reading, then dig through a 500-page PDF manual, and then try to remember what a similar problem looked like last year. Now, a worker can just point a mobile camera at a vibrating pump, and the AI instantly layers that live video over the machine’s vibration data and its entire repair history. It’s that combination of live video and sensor telemetry that’s the real game-changer; it turns a confusing alarm into a clear, visual instruction like, “I see the seal is leaking exactly like it did in 2022—here is the specific wrench you need.”

The biggest limit isn’t the AI’s “intelligence,” but the reality of industrial environments. If you’re in a remote corner of a refinery with zero Wi-Fi and wearing heavy gloves, a cloud-based multimodal AI doesn’t do you much good. There’s also the “noise” problem—both literal and digital. We’re still working on making sure the AI can distinguish between a critical mechanical hiss and background factory clamor, or between a harmless steam vent and a dangerous leak. We’re getting closer to that “Star Trek” level of interaction, but making it rugged enough for a person standing in the rain at 3:00 AM is the final hurdle.

Q: Industry 5.0 emphasizes human-machine collaboration rather than full automation. As AI systems increasingly generate recommendations, diagnostics, and actions, how do you design products so workers gain superagency rather than becoming passive reviewers of AI output, and how do you mitigate loss of situational awareness?

A: Industry 5.0 is really about turning the AI into a power tool rather than a replacement. To keep workers from just “zoning out” while the AI does the heavy lifting, we design the system to show its work, not just its answer. Instead of a screen that simply says “Replace Valve 4,” the interface should present the evidence—sensor spikes, historical logs, or similar past failures. This turns the worker into a high-level detective who uses the AI to see patterns they might have missed, giving them a kind of “superagency” where they are more capable, not less involved.

To stop “situational awareness” from slipping away, we avoid the “set it and forget it” trap. We build in checkpoints that require the human to validate the AI’s logic at critical junctures. By keeping the worker active in the decision-making loop—asking them to confirm a diagnosis or choose between two AI-suggested paths—we ensure they stay mentally connected to the physical environment. The goal is to make sure that if the AI ever needs to be turned off, the person standing there hasn’t lost their “feel” for the machine; they’re still the one in charge, just with a much smarter set of eyes.
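One way to picture that checkpoint pattern is a sketch like the following, where the approval step, the evidence fields, and the example action are hypothetical illustrations rather than the actual product interface:

```python
def apply_recommendation(recommendation: dict, confirm) -> bool:
    """Checkpoint pattern: the AI proposes and shows its evidence, but a
    human must explicitly approve before anything is executed. `confirm`
    is a stand-in for whatever prompt the real interface would use."""
    print(f"Proposed action: {recommendation['action']}")
    for item in recommendation["evidence"]:
        print(f"  evidence: {item}")  # show the work, not just the answer
    if not confirm("Approve this action?"):
        return False  # the human keeps the final call
    # Only now would the action be dispatched to the device or workflow.
    return True

# Toy usage: auto-deny, just to show the flow without a real UI.
apply_recommendation(
    {"action": "Replace Valve 4",
     "evidence": ["vibration spike at 14:02", "matches 2022 seal failure"]},
    confirm=lambda msg: False,
)
```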

Q: Some studies suggest generative AI can shift work from production to evaluation, sometimes reducing net productivity. In your deployments so far, where has GenAI clearly improved operational outcomes, and where has it introduced new forms of complexity or friction that required redesign?

A: In the real world, the biggest trap with GenAI is turning everyone into an “editor” who spends all day fact-checking AI hallucinations. We’ve found that if you just use it to generate generic reports, you’re often just trading the work of writing for the even more exhausting work of proofreading. The real “win” has been in knowledge compression. Instead of a technician digging through 50 years of messy, handwritten maintenance logs or 400-page manuals to find one specific torque setting, the AI surfaces that answer in seconds. When it cuts down the “search and rescue” mission for information, productivity actually spikes because the expert is back to fixing machines rather than shuffling papers.

However, the friction usually shows up when we try to force the AI to handle stuff that’s already working fine. We’ve seen cases where adding a “smart” interface to a simple, deterministic process—like a basic inventory check—just made things more confusing for the operators. It added a “black box” element to a task that used to be transparent. We had to redesign those workflows to keep the AI in its lane: as a tool for interpreting messy, unstructured data (like technician notes), while leaving the predictable, rule-based logic to the old-school systems that people already trust.

Q: Looking toward 2030, do you expect the handheld device industry to be primarily selling devices augmented with AI, or fully autonomous, self-optimizing workflows delivered as ongoing services? More importantly, what is one capability you believe industrial GenAI will not achieve in that timeframe, despite current hype?

A: By 2030, we won’t just be selling handheld tools; we’ll be selling “outcomes.” The device itself will basically be a window into a self-running service that spots problems and coordinates the fix before a human even steps in. It’s a shift from carrying a passive gadget to carrying a smart partner that manages the entire workflow for you.

But even with all the hype, GenAI won’t reach true legal or moral accountability by then. It can suggest a repair or predict a crash with incredible accuracy, but it can’t sign a safety permit or take the fall if something goes wrong. We’ll still need a human to make the final “go/no-go” call in high-stakes environments because AI can’t own the consequences of a mistake.
