Software-defined image processing is moving from fixed-function hardware into modern accelerators. In this interview, Visionary.ai cofounders Oren Debbi and Yoav Taieb explain why the image signal processor is shifting to AI-native architectures, how raw-domain processing changes fidelity and power tradeoffs, and why software-defined ISPs are becoming viable at scale.
1. At CES 2026, Visionary.ai and Chips&Media introduced what you described as the world’s first fully AI-based ISP. Hardware ISPs have historically existed for one reason: extreme power efficiency. By moving the entire imaging pipeline into software running on NPUs, what assumptions about the power-performance tradeoff had to be broken for this to become viable?
Oren Debbi: We would be lying if we said that power consumption wasn't one of our biggest challenges in the early stages of the company. It was a real constraint. But two trends have made the impossible possible for us. One is the improvement in the AI accelerators available on the market: compute power today isn't what it was even one or two years ago. The other is that, in parallel, we've succeeded in making our network smaller.
2. Most low-light breakthroughs rely on multi-frame stacking with noticeable latency, especially for still photos. Visionary.ai claims real-time enhancement for 4K video in near-dark conditions. Architecturally, how do you achieve temporal noise reduction within tight frame budgets without introducing motion artifacts or ghosting?
Yoav Taieb: This was one of the biggest challenges to address properly. A key difficulty was building a large, high-quality dataset that accurately represents the problem. We trained on a large dataset that we constructed ourselves, and during training we applied several techniques to avoid ghosting artifacts. The network architecture was designed to be highly constrained in the number of operations while still aggregating information effectively from historical frames. This approach has a significant advantage: it lets us leverage temporal information using AI rather than classical methods, and effective temporal aggregation is precisely where AI excels.
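Visionary.ai has not published its network design, so the following is only a minimal sketch of the general technique Taieb describes: a small recurrent model that carries a hidden state from frame to frame instead of buffering a stack of frames, keeping per-frame compute and memory constant. The module layout, channel counts, and residual formulation are illustrative assumptions, not the company's architecture.

```python
# Minimal sketch of a compute-constrained recurrent temporal denoiser.
# Visionary.ai has not published its architecture; the module layout,
# channel counts, and residual formulation here are assumptions.
import torch
import torch.nn as nn

class RecurrentTemporalDenoiser(nn.Module):
    """Carries a hidden state across frames so each output pixel can
    draw on temporal context without buffering a stack of frames."""
    def __init__(self, channels=16):
        super().__init__()
        self.channels = channels
        # Fuse the current noisy frame with the recurrent state.
        self.fuse = nn.Sequential(
            nn.Conv2d(4 + channels, channels, 3, padding=1),  # 4 = RGGB planes
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.to_state = nn.Conv2d(channels, channels, 3, padding=1)
        self.to_frame = nn.Conv2d(channels, 4, 3, padding=1)

    def forward(self, frame, state=None):
        # frame: (B, 4, H, W) packed Bayer RAW; state: (B, C, H, W) or None.
        if state is None:
            state = frame.new_zeros(frame.shape[0], self.channels,
                                    frame.shape[2], frame.shape[3])
        feat = self.fuse(torch.cat([frame, state], dim=1))
        denoised = frame + self.to_frame(feat)  # predict a residual correction
        return denoised, torch.tanh(self.to_state(feat))

# One forward pass per frame: constant memory, no multi-frame stacking.
model = RecurrentTemporalDenoiser().eval()
state = None
with torch.no_grad():
    for raw in torch.rand(8, 1, 4, 270, 480).unbind(0):  # fake 8-frame clip
        out, state = model(raw, state)
```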
3. Visionary.ai operates in the RAW domain, before traditional ISP processing alters the signal. From a systems perspective, why is intervening at this stage fundamentally different from post-processing approaches, and what types of visual information are permanently lost once the pipeline moves past RAW?
Yoav Taieb: The advantage of AI lies in how effectively we leverage the data to reconstruct the image. Working on raw images provides significantly more information, making this the optimal stage to aggregate data correctly, before information is lost along the processing pipeline and can no longer be recovered.
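As a toy, numbers-only illustration of that irreversibility (not Visionary.ai code): a standard sRGB tone curve followed by 8-bit quantization maps many distinct 12-bit sensor codes onto a single output value, after which no downstream algorithm can tell them apart.

```python
# Toy illustration of why RAW-domain processing matters: after the
# standard sRGB tone curve and 8-bit quantization, distinct sensor
# codes collapse to the same output value and cannot be recovered.
import numpy as np

raw = np.arange(2000, 2004, dtype=np.uint16)  # four distinct 12-bit codes
linear = raw / 4095.0                         # linear light in [0, 1]

# Standard sRGB opto-electronic transfer function.
srgb = np.where(linear <= 0.0031308,
                12.92 * linear,
                1.055 * linear ** (1 / 2.4) - 0.055)
out8 = np.round(srgb * 255).astype(np.uint8)  # 8-bit pipeline output

print(raw)   # [2000 2001 2002 2003]  -- all distinct in RAW
print(out8)  # [186 186 186 186]      -- identical after the pipeline
```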
4. Your technology now powers Lenovo’s Yoga Slim 9i under-display camera, a category long considered commercially compromised due to optical diffraction and haze. In this case, is Visionary.ai restoring physically degraded information, or reconstructing missing detail based on learned priors? Where do you draw the line between enhancement and fabrication?
Yoav Taieb: Essentially, our image processing approach mirrors the way a traditional ISP pipeline operates today. We are not guessing or hallucinating image content, so there is no scenario in which elements appear that were not present in reality. As with any traditional ISP, artifacts can occur; in our ISP we perform extensive training to minimize them, and any remaining color issues or artifacts are comparable to those found in classical pipelines. The core idea is to aggregate as much information as possible and then estimate the most reliable signal. When the signal cannot be accurately reconstructed, we apply blurring in that region, just as a traditional ISP would. Our key advantage is that we have access to more information than a classical ISP, because we aggregate the data using an AI-based approach.
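A minimal sketch of that "estimate, then blur where unreliable" behavior, assuming a per-pixel confidence map; the confidence measure and the linear blend below are illustrative assumptions, not Visionary.ai's method.

```python
# Hedged sketch: blend a sharp estimate with a blurred fallback, so low
# confidence degrades to blur rather than to synthesized detail.
import numpy as np
from scipy.ndimage import gaussian_filter

def reconstruct(estimate, confidence):
    """Blend a sharp per-pixel estimate with a blurred fallback.

    estimate:   2-D array, the aggregated signal estimate.
    confidence: 2-D array in [0, 1], e.g. a normalized inverse variance
                of the samples that went into each pixel.
    """
    fallback = gaussian_filter(estimate, sigma=3.0)  # low-risk smooth version
    # High confidence keeps the sharp estimate; low confidence falls back
    # to blur, never to invented content.
    return confidence * estimate + (1.0 - confidence) * fallback
```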
5. As generative AI becomes more capable, concerns around visual authenticity are growing, especially in video conferencing and identity-sensitive contexts. How does Visionary.ai ensure that improved image quality does not come at the cost of visual truth?
Yoav Taieb: We are not inventing new information; we are simply leveraging the available information correctly. When the data cannot be reliably used, the network blurs those areas, just like any classical approach. Staying as close as possible to reality and avoiding the creation of artificial content is one of our key technological advantages.
6. Visionary.ai’s technology demonstrably improves machine vision performance, not just human perception. Looking ahead, do you see a future where imaging pipelines split into separate tracks optimized for humans versus machines, and if so, which market ultimately defines the company’s long-term value?
Yoav Taieb: We believe the two should go hand in hand, as it is always important to monitor your system and understand the root causes of the final results—whether they stem from image quality or system errors. Like any complex problem, we aim to break it into several components and address them one by one. Additionally, we have observed a clear correlation between image quality and AI performance, which further supports addressing both together.
7. Your partnerships with Qualcomm, Synopsys, CEVA, and Chips&Media position Visionary.ai as a neutral enabler across the silicon ecosystem. But deep integration across many architectures introduces engineering overhead. How do you prevent this model from turning into high-margin engineering services rather than a scalable product business?
Oren Debbi: The architectural overhead of onboarding a new partner is actually minimal; it's a one-time investment per partner. The significant overhead is usually the tuning for each new use case and customer. This is precisely why we developed a training platform that automates the tuning and lets us scale the business with minimal marginal overhead per customer.
8. Qualcomm has been one of your most visible partners, yet it is also increasingly internalizing AI capabilities through acquisitions like VinAI. In an ecosystem where platform owners often absorb successful features, how does Visionary.ai protect itself from becoming feature-ized by the very platforms it helps enable?
Oren Debbi: When we started the company we were focused on denoising, aimed mainly at low light. But our recent breakthrough of running a full ISP pipeline on a single neural network turns us into a full end-to-end product, not just a feature. It still uses our denoising and other AI features within it, of course, but it answers a fundamentally broader market need and paves the way for the future of image processing.
The real moat and secret sauce of our business is the training platform. This is what powers the AI ISP model, and I envision it becoming the ChatGPT of camera tuning.
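Visionary.ai has not disclosed what its single-network ISP looks like internally. As a rough sketch of the concept only: one model mapping packed Bayer RAW directly to display-ready RGB, standing in for the classical demosaic, denoise, white-balance, and tone-mapping stages. The `TinyNeuralISP` name, architecture, and sizes below are assumptions for illustration.

```python
# Illustrative-only sketch of "a full ISP in one network": packed Bayer
# RAW in, display-ready RGB out. Architecture and sizes are assumptions.
import torch
import torch.nn as nn

class TinyNeuralISP(nn.Module):
    """One model standing in for the classical demosaic, denoise,
    white-balance, and tone-mapping stages."""
    def __init__(self, width=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(4, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            # 12 channels = 3 RGB values for each of the 4 Bayer positions.
            nn.Conv2d(width, 12, 3, padding=1),
            nn.PixelShuffle(2),   # (B, 12, H/2, W/2) -> (B, 3, H, W)
            nn.Sigmoid(),         # display-referred output in [0, 1]
        )

    def forward(self, raw_rggb):
        # raw_rggb: (B, 4, H/2, W/2) packed Bayer planes, linear light.
        return self.body(raw_rggb)

rgb = TinyNeuralISP()(torch.rand(1, 4, 540, 960))  # -> (1, 3, 1080, 1920)
```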
9. In smartphones, the most profitable tiers remain dominated by vertically integrated players with in-house ISPs. Do you believe third-party software ISPs can realistically penetrate these closed ecosystems, or is Visionary.ai’s long-term growth more likely to come from automotive, security, and industrial markets?
Oren Debbi: We anticipate long-term growth coming more from the security and automotive markets, but not necessarily for the reason you mentioned. One of the key benefits our software offers is an improvement in detection. Even though our software improves video quality for consumer electronics, the more meaningful value-add is in making downstream detection applications more accurate. Things like face, license plate, and object recognition work well around 90% of the time, and the 10% of cases where they fail is largely due to challenging lighting, such as high-dynamic-range scenes and low light. That's where our ISP becomes extremely valuable, and the markets that depend on detection are the fastest growing.
10. Your background is primarily in global sales and business development rather than core algorithm research. How has that commercial perspective influenced Visionary.ai’s evolution from a single-function denoising solution into a full software-defined ISP platform?
Oren Debbi: With any deep-tech company, there is a danger of focusing on powerful technology that may not have a solid commercial basis in the market. What has worked really well for us as a business is having two co-founders who work together to steer the business in the right direction, both technologically and commercially.
Our growth so far has been driven by market demand. When we cofounded the company at the end of 2020, we had a heavy focus on laptop webcams because, during Covid, we were all spending so much time on video conferencing. But the market has driven us to see commercial opportunities in many other markets.
Yoav Taieb: Most recently, we've seen how chip area has become of primary importance in semiconductor design, and that's one of the factors that pushed us to develop the AI ISP. One of its key game-changers is that it's a premium ISP with market-leading image quality while occupying far less chip area than the ISPs in premium smartphones. That's huge, and it's something we developed as a direct result of market demand.
11. Visionary.ai raised seed capital between 2021 and 2022, followed by strategic investment from National Grid Partners, yet there has been no public large-scale funding round since. Does this signal early revenue sustainability through licensing and NRE models, or a deliberate focus on strategic alignment over venture scaling?
Oren Debbi: We have solid revenues, and are right now focused on closing deals and growing the business, but we do plan to raise a Series A in the future.
12. Looking toward 2030, as on-device multimodal models increasingly unify vision, audio, and spatial understanding, do you believe the concept of a standalone ISP will eventually disappear, and if so, what role does Visionary.ai play in that end state?
Oren Debbi: It won’t happen overnight, but we certainly see the trend moving that way. In the near future, we’ll mainly see systems which are not 100% AI based, but have certain ISP blocks relying on software and AI. But eventually, we see the full pipeline moving to software and AI.
Editor’s Note
This interview examines a structural shift in imaging systems. The image signal processor is moving from a fixed hardware block into a software-defined, AI-native system layer.
