
Borja Díaz-Roig on Why UX Breaks When Validation Is Too Slow | Interview

Modern product teams don’t skip UX validation because they don’t value it, but because research moves too slowly to keep up.

Uxia co-founder Borja Díaz-Roig explains why unvalidated decisions are a speed problem, and how synthetic users can close the gap without replacing human research.

1. What problem in UX research convinced you that speed had become more dangerous than lack of validation?

Borja Díaz-Roig: What convinced us was realizing that lack of validation today is often a consequence of low speed.
Product teams are more agile than ever – they ship weekly or even daily – but most UX research methods still operate at a much slower pace. When research can’t keep up with production, teams don’t wait; they skip validation altogether and move straight to building.
So the real danger isn’t moving fast; it’s moving fast without feedback because research is too slow to fit modern workflows. If validation were fast enough, teams wouldn’t avoid it. Speed, or the lack of it, is what’s actually driving unvalidated decisions.

2. In practical terms, what decisions can Uxia be trusted to validate today, and which decisions should never be made from synthetic feedback alone?

Borja Díaz-Roig: Uxia is designed to validate UX decisions at the design and prototyping stage, answering questions like:

  • Is this flow understandable?
  • Where do users get confused?
  • What friction or hesitation appears before conversion?
  • What is missing in this flow?

We intentionally focus on the “how to build it right” phase, not the “what should we build” phase. While synthetic users can help explore ideas, we believe strategic product decisions should never rely on a single source of insight, whether synthetic or human.
We always position Uxia as one input among many, just like a user interview or usability test. You wouldn’t make a major product decision based on interviews alone, and you shouldn’t do that with synthetic feedback either. The value comes from triangulation, not replacement.

3. Synthetic users often follow the most reasonable path. How do you reliably surface confusion, hesitation, and abandonment rather than ideal behavior?

Borja Díaz-Roig: The key is that our synthetic users aren’t generic—they’re modeled as specific user profiles with different characteristics, constraints, levels of expertise, and mental models.
We don’t ask “what’s the best path?” Instead, we simulate how a real person with that profile would genuinely interpret the interface. That includes misunderstandings of copy, cognitive overload, unclear next steps, mismatched expectations, and other sources of friction.
We intentionally design the system to surface plausible human behavior rather than optimal behavior. Sometimes that means the user follows the reasonable path, and that’s a valid signal. But very often they hesitate, take detours, or abandon the flow entirely because the design doesn’t match how they actually think.
That’s how we move from the ideal path to the real one, and that’s where genuine confusion, hesitation, and abandonment show up.
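For illustration, here is a minimal sketch of how such a profile-driven synthetic user might be represented in code. The structure and field names are hypothetical, not Uxia’s actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class SyntheticPersona:
    """Hypothetical persona definition; fields are illustrative,
    not Uxia's actual schema."""
    name: str
    tech_literacy: str            # e.g. "low", "medium", "high"
    goals: list[str]              # what this user is trying to accomplish
    constraints: list[str]        # e.g. "in a hurry", "small phone screen"
    mental_model: str             # how they expect the product to work
    expectations: list[str] = field(default_factory=list)

# A persona likely to surface friction that an "average user" would mask.
impatient_novice = SyntheticPersona(
    name="impatient_novice",
    tech_literacy="low",
    goals=["compare prices quickly"],
    constraints=["first-time visitor", "mobile device"],
    mental_model="expects a search box on the first screen",
)
```

A persona like this would be expected to misread dense copy or stall on an ambiguous next step, which is exactly the non-ideal behavior the simulation is meant to surface.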

4. Can you share one case where synthetic testing closely matched real user outcomes, and one where it failed? What did you learn from the failure?

Borja Díaz-Roig: We actually validated Uxia by testing our own system against real human usability tests.
One example is a customer case study we shared on our website, where we ran the same test with synthetic users and human participants. The outcomes were remarkably aligned, with some clear advantages on the synthetic side:

  • Results delivered 17× faster
  • 0% failed tests, compared to 40% with human participants
  • 3× more usability issues uncovered, including the same ones flagged by humans, plus others that the human tests missed

Where synthetic testing can fall short is in emotional or contextual nuance, such as how a user feels about a brand or how real-world distractions affect behavior. That reinforced an important lesson for us: synthetic testing works best when it complements human research, not when it tries to replace it.
Our takeaway was clear: when used for the right decisions and in the right context, synthetic testing can dramatically expand both speed and coverage of UX insights.

5. Large language models tend to average behavior. How do you avoid testing the “average user” while missing the edge cases that usually cause real-world UX failures?

Borja Díaz-Roig: The risk of “average behavior” is real, but it mostly comes down to how the synthetic tester is set up. At Uxia, we don’t test with a generic user; we test with clear, intentional personas. Things like sociodemographic traits, tech literacy, goals, constraints, context of use, and expectations all directly shape what happens in a test.
At the same time, good UX isn’t about chasing every possible edge case. When you’re validating a new design, what really matters is how your target audience will experience it most of the time. Some edge cases uncover real, critical breakdowns; others just add noise and push the product away from its core use case.
Our goal is to surface meaningful friction for the users you’re actually building for, while still giving teams the option to explore specific edge cases intentionally, rather than accidentally designing for users they never meant to serve.
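As a sketch of that idea (a hypothetical configuration, not Uxia’s API), a test plan might weight the core audience heavily and include edge-case personas only when a team opts in deliberately:

```python
# Hypothetical test plan: validate primarily against the target audience,
# and include edge-case personas explicitly rather than by accident.
test_plan = {
    "core_personas": [
        {"profile": "returning_customer", "runs": 10},
        {"profile": "first_time_buyer", "runs": 10},
    ],
    "edge_case_personas": [
        {"profile": "screen_reader_user", "runs": 3},
        {"profile": "low_bandwidth_mobile", "runs": 3},
    ],
}

def total_runs(plan: dict) -> int:
    """Count all scheduled simulation runs across persona groups."""
    return sum(p["runs"] for group in plan.values() for p in group)

print(total_runs(test_plan))  # 26
```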

6. Your product reports user frustration and cognitive friction. How do you validate that these signals reflect real human experience rather than simulated emotion?

Borja Díaz-Roig: We’re very deliberate about this. Our platform combines state-of-the-art LLMs with public UX and behavioral data, which together capture many real-world patterns like hesitation, overload, confusion, and frustration.
But we don’t stop there. We continuously evaluate our results by comparing synthetic tests with human usability tests. That lets us check whether friction appears in the same places, for the same reasons, and with similar intensity.
We’re not trying to “fake” reactions. The goal is to reliably surface the kinds of usability signals UX teams already care about and to keep proving that those signals line up with real human behavior.
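Uxia hasn’t published its exact evaluation metrics, but one simple way to quantify that kind of alignment is an overlap score between the friction points each method flags. A minimal sketch, with made-up finding labels:

```python
def friction_overlap(synthetic: set[str], human: set[str]) -> float:
    """Jaccard similarity between friction points flagged by synthetic
    and human tests: 1.0 = identical findings, 0.0 = disjoint."""
    if not synthetic and not human:
        return 1.0
    return len(synthetic & human) / len(synthetic | human)

# Made-up finding labels, for illustration only.
synthetic_findings = {"checkout_copy_misread", "pricing_toggle_missed", "cta_overlooked"}
human_findings = {"checkout_copy_misread", "pricing_toggle_missed"}

print(f"{friction_overlap(synthetic_findings, human_findings):.2f}")  # 0.67
```

The set differences are as informative as the score itself: they show exactly where one method surfaces issues the other misses.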

7. Over time, how do you prevent your system from becoming self-referential—testing AI-generated designs with AI-generated users?

Borja Díaz-Roig: First, Uxia is always evaluating interfaces meant for humans, not AI-generated abstractions. The question is never “what would another model do here?” but “how would a real person interpret this?”
Second, we constantly ground the system in human-generated data, established UX research, and real-world validation against human tests. That helps us avoid closed loops where AI is only learning from itself.
Synthetic testing only works if it stays anchored in real user behavior. The moment it becomes self-referential, it stops being useful UX research, and that’s exactly what we want to avoid.

8. Having the UserZoom co-founder as an investor is a strong signal. Does this suggest human-based testing is becoming obsolete, or that the market is splitting into distinct layers?

Borja Díaz-Roig: Human testing isn’t going away, and we don’t think it should.
What’s really happening is that teams now need different kinds of validation at different speeds. Synthetic testing is great for fast iteration, early feedback, and specific audience insights. Human testing is still essential for emotion and real-world context.
We see this as a growing, more layered UX ecosystem. Teams that combine both approaches are the ones making better, more confident product decisions.

9. With the marginal cost of synthetic testing approaching zero, how do you see pricing and differentiation evolving in this category?

Borja Díaz-Roig: While running individual tests might get cheaper over time, building a good synthetic testing platform is still expensive. AI infrastructure, orchestration, evaluation, and continuous validation all come with real costs.
At the same time, teams will expect more than just basic feedback. They’ll want things like heatmaps, deeper behavioral insights, and even market-level signals.
So differentiation won’t be about who can run the most tests; it’ll be about who delivers the most useful, trustworthy insights that teams can actually act on.

10. For a product-led tool like Uxia, what does international expansion really mean?

Borja Díaz-Roig: Internationalization has been part of Uxia from the beginning.
We already have users all over the world, from South Korea to the US, which means our synthetic testers need to reflect local language, cultural norms, and regional UX expectations. A flow that feels obvious in one market can be confusing in another.
So for us, international expansion isn’t just about sales. It’s about making sure validation works globally and that teams can catch region-specific and audience-specific UX issues before they ship.

11. If teams start designing products mainly based on synthetic validation, what’s the biggest long-term risk to real human experience?

Borja Díaz-Roig: The biggest risk isn’t synthetic testing; it’s treating any single input as the full picture.
Synthetic validation is incredibly useful for speed, scale, and early detection of UX issues. But real human experience still benefits from direct human input. Used together, synthetic and human testing actually strengthen each other.

12. Looking ahead, do you see Uxia staying a UX research tool, or evolving into something bigger?

Borja Díaz-Roig: There’s still a lot to build in UX research, but we definitely see Uxia evolving beyond a single category.
Our longer-term vision is to become the go-to platform for pre-traffic validation, covering usability testing (UX evaluation) and other types of product and market research using synthetic users.
In that sense, Uxia becomes an AI-native platform that helps teams reduce risk and make better decisions earlier so that when real users arrive, the experience is already in a much better place.

Editor’s Note

This interview argues that UX failures increasingly stem from validation lag, not lack of user understanding.
