In this conversation, we spoke with Miguel Torres, founder of ELEVARE, about how agentic AI systems intervene in human performance and where clear boundaries must remain between optimization and human agency.
Q: At CES 2026, “Agentic AI” marked a shift from systems that advise to systems that act. In Elevare’s case, your AI does not just recommend workouts; it actively intervenes. Where do you draw the boundary between an AI that assists human decision-making and one that quietly replaces it?
A: The boundary is defined by a Human-in-the-Loop (HITL) Governance model. ELEVARE’s agentic layer operates within a “permissioned sandbox.” While the AI acts by dynamically restructuring a weekly block, it does so based on a baseline of human-vetted data—specifically the user’s initial ISSA-aligned physiological profile and active intent. The AI automates the logistics of the workout, but the user retains veto power over execution. The consequence of this boundary is that the AI never forces a load; it merely presents the most optimized path for the user to approve.
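To make that approval gate concrete, here is a minimal sketch of how a human-in-the-loop veto might sit between an agent's proposal and its execution. The SessionPlan, WeeklyBlock, and propose_restructured_block names, and the load-blending rule, are our own illustrations, not ELEVARE's actual API.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class SessionPlan:
    """One proposed training session inside a weekly block."""
    day: str
    focus: str
    load_kg: float

@dataclass
class WeeklyBlock:
    """The weekly block the agent is allowed to restructure."""
    sessions: list = field(default_factory=list)

def propose_restructured_block(profile: dict, current: WeeklyBlock) -> WeeklyBlock:
    """Agent side of the 'permissioned sandbox': nudge loads toward the profile's
    target. The blending rule here is a placeholder, not ELEVARE's logic."""
    target = profile["target_load_kg"]
    return WeeklyBlock([
        SessionPlan(s.day, s.focus, round((s.load_kg + target) / 2, 1))
        for s in current.sessions
    ])

def execute_block(proposal: WeeklyBlock,
                  user_approves: Callable[[WeeklyBlock], bool]) -> Optional[WeeklyBlock]:
    """Human-in-the-loop gate: nothing executes without explicit approval."""
    if user_approves(proposal):   # the user retains veto power
        return proposal           # the agent only presents; the human commits
    return None                   # veto: the proposal is discarded, never forced

if __name__ == "__main__":
    current = WeeklyBlock([SessionPlan("Mon", "squat", 100.0),
                           SessionPlan("Thu", "deadlift", 120.0)])
    proposal = propose_restructured_block({"target_load_kg": 110.0}, current)
    result = execute_block(proposal, user_approves=lambda block: True)
    print("executed" if result else "vetoed", result)
```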
Q: You describe Elevare as removing friction from fitness decisions. But friction is also where agency lives. How do you ensure that reducing effort does not gradually erode a user’s sense of authorship over their own body and discipline?
A: We distinguish between cognitive friction (the “analysis paralysis” of what to do) and physical discipline. ELEVARE removes the former so the user can focus entirely on the latter. To ensure authorship, we’ve implemented an “Intent Override” toggle. Before every session, the system asks for a subjective input. If a user wants to push through fatigue for a specific milestone, the AI pivots from “intervention” to “support.” This ensures that the discipline—the act of showing up—remains a conscious choice by the user, not a passive response to a notification.
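As a rough sketch of what such a pre-session toggle could look like: the readiness cutoff and the resolve_session_mode helper below are invented for illustration, not ELEVARE's implementation.

```python
from enum import Enum

class Mode(Enum):
    INTERVENTION = "intervention"   # the agent may rewrite the session
    SUPPORT = "support"             # the agent only monitors and encourages

def resolve_session_mode(readiness: float, intent_override: bool,
                         milestone: str = "") -> Mode:
    """Pre-session check: the user's subjective input always outranks the model."""
    if intent_override and milestone:
        return Mode.SUPPORT          # the user chose to push; the agent steps back
    if readiness < 0.6:              # illustrative cutoff, not ELEVARE's real value
        return Mode.INTERVENTION     # low readiness: the agent proposes changes
    return Mode.SUPPORT

# A fatigued user who still wants to attempt a planned milestone session
print(resolve_session_mode(readiness=0.45, intent_override=True, milestone="10k race pace test"))
```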
Q: Elevare Rotations are designed to preempt physiological adaptation before users hit a plateau. How does your system model something as nonlinear and individual as human adaptation without relying on statistical averages?
A: ELEVARE utilizes N-of-1 modeling. We treat statistical averages as noise and focus exclusively on the delta between a user’s “Predicted Performance” and “Actual Output.” The mechanism is our Recursive Feedback Loop: if the system predicts a 5% increase in load capacity but the user’s biometrics show a plateau, the “Elevare Rotation” triggers an immediate mechanical shift—altering tempo or volume—rather than waiting for a 4-week average to catch up. This makes the system an individual biological mirror rather than a demographic calculator.
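A sketch of that predicted-versus-actual trigger might look like the following, assuming a made-up 2% tolerance and an illustrative rotate helper rather than ELEVARE's own parameters.

```python
def should_trigger_rotation(predicted_gain: float, actual_gain: float,
                            tolerance: float = 0.02) -> bool:
    """N-of-1 check: fire a rotation when actual progress lags the individual
    forecast by more than the tolerance, instead of waiting for a 4-week average."""
    return (predicted_gain - actual_gain) > tolerance

def rotate(session: dict) -> dict:
    """Mechanical shift: alter tempo and trim volume; the values are illustrative."""
    rotated = dict(session)
    rotated["tempo"] = "3-1-1" if session.get("tempo") != "3-1-1" else "2-0-2"
    rotated["sets"] = max(2, session.get("sets", 4) - 1)
    return rotated

session = {"lift": "bench press", "tempo": "2-0-2", "sets": 4}
if should_trigger_rotation(predicted_gain=0.05, actual_gain=0.00):
    print(rotate(session))
```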
Q: Many fitness platforms react after failure or stagnation occurs. Elevare aims to intervene earlier. What does prediction mean inside your system, and how confident can an AI be when forecasting fatigue, injury risk, or overtraining?
A: Prediction in ELEVARE is a Probabilistic Readiness Score. We aren’t guessing; we are calculating the probability of overtraining by cross-referencing real-time HRV (Heart Rate Variability) and RPE (Rating of Perceived Exertion) against the current training load. Our system is designed with a “Safety-First” threshold: if the probability of injury due to CNS fatigue exceeds 15%, the system actively throttles the intensity of the generated program. The consequence is a proactive “soft landing” for the user, preventing the weeks-long setbacks common in reactive training.
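A minimal illustration of how a readiness-style risk score and a 15% safety threshold could be wired together; the logistic coefficients are invented for the example and would have to be fit per user, as Torres emphasizes.

```python
import math

def overtraining_risk(hrv_drop_pct: float, rpe: float, acute_chronic_ratio: float) -> float:
    """Toy logistic model mapping fatigue signals to a probability in [0, 1].
    The coefficients are invented for illustration, not ELEVARE's model."""
    z = 0.06 * hrv_drop_pct + 0.4 * (rpe - 5.0) + 2.0 * (acute_chronic_ratio - 1.0) - 2.0
    return 1.0 / (1.0 + math.exp(-z))

def throttle_intensity(planned_intensity: float, risk: float,
                       risk_threshold: float = 0.15) -> float:
    """Above the safety threshold, scale planned intensity down with the excess risk."""
    if risk <= risk_threshold:
        return planned_intensity
    return planned_intensity * max(0.5, 1.0 - (risk - risk_threshold))

risk = overtraining_risk(hrv_drop_pct=25.0, rpe=8.5, acute_chronic_ratio=1.4)
print(round(risk, 2), round(throttle_intensity(planned_intensity=0.9, risk=risk), 2))
```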
Q: You introduced a $420 lifetime membership in an industry dominated by subscriptions. Is this a pricing experiment, or a long-term bet on falling AI inference costs? What assumptions does this model depend on?
A: This is a strategic bet on the Commoditization of Intelligence. As we see unprecedented advancements in energy grids, space-based data centers, and the “intelligence loop” (AI optimizing its own inference code), the cost of the “brain” will trend toward zero. Our model assumes that value in the future won’t be in access to AI, but in the integration of that AI into a user’s physical life. By removing the subscription, we align our success with the user’s long-term health, not their monthly billing cycle.
Q: As users spend years on Elevare, the AI develops a deep understanding of their physiology. Who owns that accumulated intelligence? Can users export or transfer a model trained on their own body?
A: We recognize that physiological data is a digital twin of the user’s own body. Currently, our architecture leverages Google’s secure AI infrastructure for its robust data sovereignty and portability standards. Our concrete policy is that the user remains the sole proprietor of their “Biometric Identity.” Users can export their entire training history and the weights of their personalized model, ensuring they aren’t locked into our ecosystem if they choose to move their “intelligence” elsewhere.
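For illustration, an export along these lines could be as simple as serializing the training history and personalized model weights into a vendor-neutral bundle; the schema tag and field names below are hypothetical, not ELEVARE's actual export format.

```python
import json
from pathlib import Path

def export_biometric_identity(user_id: str, training_history: list,
                              model_weights: dict, out_dir: Path) -> Path:
    """Write a self-describing, vendor-neutral bundle the user can take elsewhere."""
    bundle = {
        "schema": "biometric-identity/0.1",   # hypothetical schema tag
        "user_id": user_id,
        "training_history": training_history,
        "model_weights": model_weights,
    }
    out_path = out_dir / f"{user_id}_export.json"
    out_path.write_text(json.dumps(bundle, indent=2))
    return out_path

print(export_biometric_identity(
    "user-123",
    training_history=[{"date": "2026-01-10", "lift": "squat", "load_kg": 100}],
    model_weights={"readiness_head": [0.12, -0.4, 0.88]},
    out_dir=Path("."),
))
```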
Q: Elevare positions itself as a pure software layer while hardware ecosystems like Apple, Oura, and Samsung compete for control of biometric data. If those platforms close access or launch competing agentic coaching systems, what remains defensible about Elevare?
A: ELEVARE is a Hardware-Agnostic Intelligence Layer. While those platforms control the sensors, we control the synthesis. Our edge is the ability to ingest data from any ecosystem and translate it into agentic action. By building on Google’s leading AI infrastructure, we maintain a level of processing power and cross-platform flexibility that single-hardware ecosystems often lack due to “walled garden” constraints. Our defensibility lies in being the “Universal Translator” for human performance.
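One way to picture such a hardware-agnostic synthesis layer is an adapter per vendor that maps each payload into a shared schema before the agent ever sees it. The payload field names below are stand-ins, not the actual Apple Health or Oura APIs.

```python
class AppleHealthAdapter:
    def normalize(self, payload: dict) -> list:
        # Field names are stand-ins for whatever the vendor API actually returns
        return [{"metric": "hrv", "value": payload["hrv_ms"], "unit": "ms", "source": "apple"}]

class OuraAdapter:
    def normalize(self, payload: dict) -> list:
        return [{"metric": "hrv", "value": payload["rmssd"], "unit": "ms", "source": "oura"}]

def ingest(adapters: dict, raw_payloads: dict) -> list:
    """Synthesis layer: whatever the sensor, the downstream agent sees one schema."""
    samples = []
    for source, payload in raw_payloads.items():
        samples.extend(adapters[source].normalize(payload))
    return samples

print(ingest({"apple": AppleHealthAdapter(), "oura": OuraAdapter()},
             {"apple": {"hrv_ms": 62.0}, "oura": {"rmssd": 58.0}}))
```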
Q: You promise environmental fluidity, allowing training to shift between gym, home, and outdoor settings without losing effectiveness. How does your AI calculate equivalence between fundamentally different mechanical loads?
A: We use a Mechanical Load Equivalence (MLE) algorithm moderated by our Readiness Scale. Instead of comparing “pounds lifted,” the system calculates the “Total Tension Time” and “Metabolic Demand.” If a user moves from a leg press in a gym to hill sprints outdoors, ELEVARE calculates the relative intensity required to elicit the same physiological adaptation. The Readiness Scale ensures that the “Equivalent Load” is always adjusted for the user’s current recovery state, making the environment irrelevant to the effectiveness of the stimulus.
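A toy version of such an equivalence calculation might look like this, with invented 0.7/0.3 weights standing in for whatever ELEVARE actually uses.

```python
def mechanical_load_equivalence(tension_time_s: float, metabolic_demand_kcal: float,
                                readiness: float) -> float:
    """Blend time under tension and metabolic cost into one stimulus score, then
    scale by readiness (0-1). The 0.7 / 0.3 weights are placeholders, not ELEVARE's."""
    base_stimulus = 0.7 * tension_time_s + 0.3 * metabolic_demand_kcal
    return base_stimulus * readiness

# Gym leg press vs. outdoor hill sprints, compared on stimulus rather than pounds lifted
leg_press = mechanical_load_equivalence(tension_time_s=180, metabolic_demand_kcal=90, readiness=0.8)
sprints = mechanical_load_equivalence(tension_time_s=120, metabolic_demand_kcal=230, readiness=0.8)
print(round(leg_press, 1), round(sprints, 1))
```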
Q: Does Elevare already operate real-time feedback loops between recovery data and nutrition adjustments, or is that still aspirational? At what point does the system move from planning to active metabolic management?
A: This is no longer aspirational; it is operational. ELEVARE facilitates Dynamic Metabolic Management by injecting real-time biometric data—such as sleep quality and daily caloric burn—directly into the “Mission” of the day. If a user’s recovery data indicates a glycogen deficit, the system doesn’t just suggest a lighter workout; it adjusts the nutritional targets for the next 24 hours. We have moved from “planning” to “active management” by treating nutrition and movement as a single, inseparable loop.
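A simplified sketch of how recovery signals might be folded into next-day nutrition targets; the adjustment factors are placeholders, not ELEVARE's real coefficients.

```python
def adjust_daily_targets(baseline_kcal: float, baseline_carbs_g: float,
                         sleep_score: float, glycogen_deficit: bool) -> dict:
    """Fold recovery signals into the next 24 hours of nutrition targets."""
    kcal, carbs = baseline_kcal, baseline_carbs_g
    if glycogen_deficit:
        carbs *= 1.25                                # refill glycogen before the next session
        kcal += (carbs - baseline_carbs_g) * 4       # 4 kcal per added gram of carbohydrate
    if sleep_score < 0.6:
        kcal *= 1.05                                 # modest surplus to support recovery
    return {"kcal": round(kcal), "carbs_g": round(carbs)}

print(adjust_daily_targets(baseline_kcal=2400, baseline_carbs_g=300,
                           sleep_score=0.5, glycogen_deficit=True))
```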
Q: Human coaches create accountability through emotion and social obligation. Without a real person involved, how does Elevare sustain adherence over years rather than weeks? Is synthetic empathy something you actively design for?
A: We are actively designing for Relational AI. We believe adherence is driven by the feeling of being “known.” Our second-generation goal for ELEVARE is the development of a Performance Companion—a synthetic entity that understands the user’s psychological triggers. This isn’t just “fake empathy”; it’s behavioral science delivered through a consistent, supportive interface. The consequence is a shift from “obligation” to “partnership,” which is the key to multi-year consistency.
Q: Agentic AI introduces liability questions that generative AI never faced. If an AI-generated training decision leads to injury, where does responsibility sit, and how do you test the system against dangerous edge cases?
A: Our safety mechanism is a Shared Responsibility Model. While ELEVARE’s “Agentic Core” is built with strict physiological guardrails to prevent dangerous suggestions, the user is always the final arbiter. We test against edge cases using adversarial simulations—intentionally feeding the AI “extreme” data to ensure the output remains within a safe human threshold. The limit is clear: the AI suggests the safest, most effective path, but the user must exercise common sense and physical awareness in execution.
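A minimal sketch of adversarial testing in this spirit: hammer a stand-in load planner with out-of-range inputs and assert that a hard guardrail is never breached. The 10% weekly-increase cap and the planner itself are illustrative assumptions, not ELEVARE's actual guardrails.

```python
import random

SAFE_MAX_WEEKLY_LOAD_INCREASE = 0.10   # illustrative guardrail: at most +10% load per week

def plan_next_load(previous_load_kg: float, fatigue_signal: float) -> float:
    """Stand-in for the agentic core's load planner, clamped by the guardrail."""
    desired = previous_load_kg * (1.1 - 0.3 * fatigue_signal)
    ceiling = previous_load_kg * (1.0 + SAFE_MAX_WEEKLY_LOAD_INCREASE)
    return min(desired, ceiling)

def adversarial_simulation(trials: int = 10_000) -> None:
    """Feed extreme, even out-of-range, inputs and assert the output stays safe."""
    rng = random.Random(42)
    for _ in range(trials):
        previous = rng.uniform(20.0, 300.0)      # from rehab loads to elite lifts
        fatigue = rng.uniform(-1.0, 2.0)         # deliberately outside the valid 0-1 range
        proposed = plan_next_load(previous, fatigue)
        assert proposed <= previous * (1.0 + SAFE_MAX_WEEKLY_LOAD_INCREASE), "guardrail breached"
    print(f"{trials} adversarial trials passed")

adversarial_simulation()
```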
Q: Looking ahead, do you see Elevare as an early form of a broader Human Operating System, coordinating movement, sleep, nutrition, and recovery as a single loop? If so, what limits should never be crossed, even if optimization allows them?
A: I see ELEVARE as the foundational layer of a Human OS, coordinating the four pillars: movement, sleep, nutrition, and recovery. However, the limit that must never be crossed is Biological Coercion. The system should offer the “Optimization Path” as an option, but never remove the user’s right to be “un-optimized.” Even in a world of perfect data, the user’s subjective experience and personal boundaries are the ultimate priority. We provide the options; the client sets the boundaries.
