
Muhammad Ali Khan on Orchestrating Hybrid Quantum Computing

In this conversation, we spoke with Muhammad Ali Khan, CEO of SuperQ Quantum, about hybrid quantum-classical orchestration, mathematically deterministic decision systems, and why the operating system layer will define practical quantum computing adoption.

You’ve written extensively about uncertainty in quantum mechanics, yet SuperQ is built around delivering deterministic, outcome-driven value. How did your philosophical work on uncertainty shape the results-first architecture of the Super™ platform? In your view, what is the real bottleneck holding quantum computing back today: physical qubit stability, or humanity’s ability to correctly model and frame problems?

This is a slight misconception. I was likely comparing AI and quantum computing. Results obtained from both AI and quantum computing are probabilistic; that is, solving the same problem multiple times may not yield the exact same result. But the key difference is that an AI model is itself a probabilistic fit to patterns in data, while quantum computing solves a fixed, deterministic mathematical model. When using an AI model, both the formula and the results are probabilistic. In quantum computing, the formula is deterministic but the results are probabilistic. Super™ delivers mathematical determinism. In my opinion, the biggest bottleneck to quantum computing utility is the lack of multi-modal operating systems that can operate classical and quantum hardware in a monolithic architecture. This hampers the current potential of both qubits and problem-modeling layers. Super™ aims to bridge this gap.

SuperQ positions orchestration rather than hardware ownership as its core moat. Why do you believe the future of computing advantage will be decided at the orchestration layer rather than by hardware breakthroughs alone? What risks does this model help enterprises avoid as quantum hardware roadmaps continue to diverge?

This relates to the previous question. Several quantum computers exist today. While significant room remains to improve error correction and increase the number of logical qubits, the missing piece has been a truly hybrid quantum-classical system. I am not talking about hybrid solvers, but rather an OS. Windows or macOS seamlessly controls both the CPU and the GPU to deliver utility and hide complexity from the user. We need to add QPUs to the mix. Obviously we can't have all of this on the same hardware right now, but we can combine them at the software level. This is SuperQ's moat through Super™ and ChatQLM™ for our B2B and B2C users. I would like to note that we are also developing our own quantum hardware, although its goal differs from that of the current major market players.

Super™ automatically routes workloads across classical HPC, GPUs, quantum annealers, and gate-based quantum systems. From a systems-design perspective, what makes orchestration non-trivial at scale, and how do you prevent enterprises from being locked into a single vendor or compute paradigm as this stack evolves?

The secret sauce is breaking a problem down into components that can benefit from various quantum devices versus those that must be solved classically. Another is substitution: if a particular quantum computer is unavailable, how do you transpose the problem to another device while minimizing latency? Both of these are non-trivial when solving decision problems from a variety of domains on a variety of machines. Super™ solves problems in transportation, logistics, manufacturing, government operations, materials, healthcare, quantum AI, scientific research, finance, and energy in the most vendor-agnostic manner possible today.
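The substitution idea described above can be sketched in a few lines. This is a hypothetical illustration, not SuperQ's actual routing logic: the backend names, preference table, and `route` function are all invented to show how a sub-problem might be transposed to the next compatible device when its preferred one is unavailable.

```python
from dataclasses import dataclass

@dataclass
class Backend:
    name: str
    kind: str        # "annealer", "gate", or "classical"
    available: bool

def route(subproblem_kind: str, backends: list[Backend]) -> Backend:
    """Pick the first available backend compatible with the sub-problem.

    The preference order encodes the transposition: a QUBO prefers an
    annealer, then a gate-model device, then classical HPC fallback.
    """
    preference = {
        "qubo": ["annealer", "gate", "classical"],
        "circuit": ["gate", "classical"],
        "linear": ["classical"],
    }
    for kind in preference[subproblem_kind]:
        for b in backends:
            if b.kind == kind and b.available:
                return b
    raise RuntimeError("no backend available for this sub-problem")

backends = [
    Backend("annealer-1", "annealer", available=False),
    Backend("gate-1", "gate", available=True),
    Backend("hpc-1", "classical", available=True),
]
print(route("qubo", backends).name)  # annealer is down, falls back to gate-1
```

In practice the hard part is the transposition itself (reformulating the problem for a different compute paradigm), which this sketch abstracts away into the preference table.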

In 2025, SuperQ introduced the industry’s first subscription-based recurring revenue model for quantum computing. Given the volatility and cost asymmetry of quantum resources, how did you design a pricing and credit system that remains economically viable while still lowering adoption barriers for enterprises and researchers?

Our systems use quantum computers only when needed and only for sub-problems. This means that the bulk of the work is done classically. Quantum computing is used as a subroutine in a larger algorithm. High-performance computing (HPC) and optimization solvers can collectively handle complex small- to medium-sized, and even some large-scale problems on their own. QPUs are used when the system determines a clear ROI in doing so. This minimizes wasteful computations and the use of expensive quantum resources for all tasks. Our pricing reflects this optimization. Quantum hardware companies cannot compete because they only perform computations on their own hardware. We are enabling them to sell more compute and create more utility.   
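The ROI gate described above can be illustrated with a minimal sketch. The estimator, threshold, and function name here are assumptions invented for illustration; SuperQ's actual decision logic is not public.

```python
def use_qpu(expected_speedup: float, qpu_cost: float,
            classical_cost: float, min_roi: float = 1.5) -> bool:
    """Dispatch a sub-problem to a QPU only when the estimated return
    justifies the spend.

    ROI is modeled (simplistically, for illustration) as the classical
    cost avoided, scaled by the expected speedup, divided by the cost of
    the quantum resource. Below the threshold, solve classically.
    """
    roi = (classical_cost * expected_speedup) / qpu_cost
    return roi >= min_roi

# A sub-problem with a strong expected quantum speedup clears the gate;
# one with no speedup stays on classical HPC.
print(use_qpu(expected_speedup=4.0, qpu_cost=10.0, classical_cost=5.0))
print(use_qpu(expected_speedup=1.0, qpu_cost=10.0, classical_cost=5.0))
```

The point of the gate is exactly what the answer describes: quantum hardware is a subroutine invoked selectively, so most workloads never touch the expensive resource at all.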

ChatQLM has been described as the ChatGPT moment for quantum computing, but its core promise is not text generation; it is mathematically verifiable decision-making. How does the Quantum Leveraged Model address the numerical hallucination problem of large language models while preserving a natural conversational interface?

The Quantum Leveraged Model (QLM) logically combines LLMs, optimization solvers and quantum solvers. It handles user prompts, content generation and qualitative analysis just as LLMs do. The key differentiator is that mathematical modeling handles quantitative decision-making tasks, which are then solved using optimization solvers, GPUs and/or QPUs. Running the same tasks on LLMs creates simplified models that do not reflect all decision parameters. Moreover, LLMs cannot allocate the computational resources needed to solve these models. Consequently, QLM functionalities are a superset of LLM functionalities.

At runtime, how does ChatQLM decide whether a problem should be solved on NVIDIA-powered classical infrastructure, a quantum annealer, or a gate-based quantum processor? What role do latency, cost, and solution confidence play in that decision-making process?

I can’t really say much beyond my previous response. Part of the “how” is patent-pending and will become available once the USPTO publishes the patent applications. The other part is a trade secret and will remain so until we decide to publish it. I can say that latency, cost vs value, solution quality, time to solution and several other factors are accounted for. Certain problems require a good solution as fast as possible whereas others require the best possible solution even if it takes longer. For example, scheduling a trucking fleet just before departure falls into the first category, while optimizing feature selection of AI models falls into the latter category. Certain consumers demand fully on-premise solutions, while others ask for integration with their existing systems. Super™ and QLM™ take all of these constraints into consideration.         
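While the actual mechanism is patent-pending, the trade-off described above (speed-critical jobs versus quality-critical jobs) can be sketched as a weighted scoring problem. Everything here is a hypothetical illustration: the backend metrics, weight profiles, and scoring function are invented, not SuperQ's method.

```python
def score(backend: dict, weights: dict) -> float:
    # Higher solution quality is better; latency and cost are penalties.
    return (weights["quality"] * backend["quality"]
            - weights["latency"] * backend["latency"]
            - weights["cost"] * backend["cost"])

def pick(backends: list[dict], weights: dict) -> dict:
    """Choose the backend maximizing the weighted objective."""
    return max(backends, key=lambda b: score(b, weights))

candidates = [
    {"name": "hpc", "latency": 0.1, "cost": 0.2, "quality": 0.70},
    {"name": "qpu", "latency": 0.6, "cost": 0.8, "quality": 0.95},
]

# Fleet scheduling just before departure: a good answer, fast and cheap.
fleet = {"latency": 5.0, "cost": 1.0, "quality": 1.0}
# Feature selection for an AI model: best possible answer, time permitting.
research = {"latency": 0.2, "cost": 0.5, "quality": 5.0}

print(pick(candidates, fleet)["name"])     # latency penalty dominates
print(pick(candidates, research)["name"])  # quality reward dominates
```

The same candidate pool yields different winners purely from the weight profile, which is the essence of routing by problem class rather than by hardware loyalty.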

You integrated NVIDIA’s CUDA and CUDAQ toolkits at a time when hardware vendors are moving closer to the application layer. How do you view the balance of power between hardware-defined software and software-defined hardware, and what strategic advantages does Super™ retain as interoperability becomes a competitive battlefield?

We have integrated the NVIDIA CUDA and CUDA-Q toolkits. We haven't integrated NVQLink, which is a control layer for quantum hardware; we plan to integrate it into the quantum computer that we are developing. Interoperability is part of our moat. We provide a single endpoint to utilize the best of quantum and classical infrastructure. In addition, we dynamically translate problems between CPUs, GPUs and all QPU modalities, enabling inter-hardware compatibility. Super™ is evolving into the OS for a hybrid quantum future.

SuperQ’s Quantum Super Hubs introduce physical, regionally distributed access points in an era dominated by cloud computing. Beyond latency, what role do data sovereignty, ecosystem development, and local talent cultivation play in this model? Is this a response to emerging concerns around compute sovereignty?

Super Hubs are quantum and supercomputing experience centers. They provide access to systems including our own and third-party technologies, learning resources, research capabilities, and mentorship, all under one roof. They should not be viewed only as on-premise or sovereign compute centers. We are independently building sovereign compute modules, complete with quantum computing, for deployment in data centers, government, defence and sensitive industries.

Post-quantum cryptography has shifted from theoretical risk to practical urgency with harvest-now, decrypt-later threats. For enterprise CIOs still postponing action, how imminent is the risk in your assessment, and how does SuperPQC™ move beyond diagnostics toward real remediation?

Yes, it does move beyond diagnostics. We launched our diagnostic AI in September 2025. This year, we have launched a complete solution suite that diagnoses cryptographic vulnerabilities in any Web2 or Web3 system, builds a roadmap to address these vulnerabilities, and now provides enterprise-ready email, VPN, digital signature and other systems to replace insecure ones. We will expand SuperPQC™ to include network security, blockchain protocols, crypto tokenization standards and other components. The harvest-now, decrypt-later threat has been real for years, and industry leaders are now becoming increasingly vocal about it. All information and financial systems, including banking, Bitcoin and government services, should be considered compromised until they are PQC-secured.

SuperQ operates at the intersection of probabilistic computation and high-stakes domains such as healthcare. How does your platform translate probabilistic quantum outputs into deterministic, auditable recommendations, and what safeguards are required to make such systems deployable in practice?

This is no different from using diagnostic AI in healthcare or fraud detection AI in financial networks. The important advantage quantum computing and optimization solvers have over AI is explainability. One has to design AI systems deliberately when explainability is important, whereas results obtained through quantum and optimization solvers are mathematically explainable. This audit trail of quantitative reasoning makes a big difference. However, this explainability is usually lost in translation. Our technology makes it explicit and understandable for users.

If 2026 marks the beginning of quantum’s practical adoption phase, what does success look like by 2030? In that future, do you see SuperQ primarily as a distributor of compute resources, or as a creator of decision intelligence that makes the underlying compute invisible to users?

By 2030, we want quantum computing to be as widely used for enterprise, research and individual utility as generative AI is today. SuperQ Quantum will be critical to achieving this goal. We are the only hardware-agnostic value creator in the space, achieving this at a fraction of the typical costs. Our role will be to provide an operating system, like Windows and macOS, that stitches the best of classical and quantum hardware together to serve users while hiding the complexity behind the interface.
