In this interview, Andrew Fleury, CEO of Luna Systems, explains why automotive ADAS assumptions break down on two wheels and how vision-first ARAS is being rebuilt specifically for rider-scale dynamics.
Luna began as a compliance tool for shared micromobility operators, helping cities detect sidewalk riding and regulatory violations. Today, you are positioning Luna as a rider-facing safety system. From an operating perspective, how do you think about this shift from top-down regulation to bottom-up personal protection, and what fundamentally changes when your “customer” moves from cities to individual riders?
Our previous work focused on a very different context: shared micromobility, specifically shared e-scooters rather than shared bikes. In that world, safety has historically been framed through the lens of compliance and risk to other vulnerable road users, not necessarily the rider themselves, and there have been many instances where cities have moved to ban scooters outright.
Where Luna came in was to support fleet operators with on-board, anonymised vehicle intelligence. Our system could identify events such as sidewalk riding, speeding, and close proximity to pedestrians, allowing operators to enforce rider policies through warnings or commensurate interventions. This was, by definition, a top-down, after-the-fact model: effective for governance, but inherently enforcement-led.
That operating model in no way translates to owned vehicles. When the customer became the rider, the role of our technology fundamentally changed. The focus moved fully away from compliance and towards personal safety and empowerment.
The common thread across both markets is vision AI’s ability to identify risk patterns. The difference is the product’s intent. With Luna Oculus, we start and end with rider empowerment as the primary goal. Six in ten people do not feel safe cycling in their area (Ipsos Mori). We want to change that – to help riders feel more situationally aware and therefore more confident getting from A to B.
Micromobility safety has long suffered from a cost paradox. Automotive safety systems rely on expensive sensors, high-power compute, and large form factors that do not translate to two-wheelers. Why did that legacy approach break down structurally in this category, and what assumptions about safety technology had to be discarded for Luna’s vision-first architecture to make sense?
Automotive ADAS did not fail to translate to micromobility because the need wasn’t there, but because the assumptions baked into car safety systems simply do not work on two wheels.
Car ADAS is designed around platforms that are heavy, stable, powered, thermally managed, and economically tolerant of expensive components. Those systems assume:
- Continuous power availability
- Large physical space for sensors and compute
- Active cooling and heat dissipation
- High-cost sensor stacks (radar, LiDAR, multi-camera arrays)
- A rigid vehicle body with limited pitch, roll, and yaw dynamics
None of these assumptions apply to bicycles.
On a bike, power is scarce or non-existent, heat has nowhere to go, size and weight are tightly constrained, and cost sensitivity is extreme.
Beyond hardware, there is a deeper issue: automotive perception models are trained on fundamentally different motion dynamics. Cars move largely in a planar, predictable way. Bicycles and motorcycles are constantly pitching, rolling, yawing, leaning into corners, responding to rider balance, road camber, etc. Automotive datasets do not capture this behaviour in a way that translates safely to two wheels.
Instead, we designed a monocular vision-first architecture optimised for bikes and motorbikes from the ground up.
That meant – among other things:
- Building perception models that understand two-wheel dynamics explicitly
- Training on datasets captured from bikes and motorcycles, not cars
- Designing for ultra-low power edge inference
Having the advantage of starting with shared micromobility meant that we began from the bottom up, designing for micromobility itself rather than taking automotive ADAS vision models and trying to shrink them. We built our incredible team around this approach.
Vision gives us semantic understanding – not just detection, but context: road position, overtaking behaviour, relative motion, and risk. Combined with modern edge-AI silicon, this finally makes ARAS feasible for bikes at a cost, size, and power level that the category can support.
Essentially, two-wheeled vehicles were waiting for the right compute environments, the right data, and the right architectural assumptions. Those conditions now exist, which is why this category is finally emerging.
Beyond the technical environment making this feasible, we believe there is a real opportunity for two-wheeled vehicles to change their approach to marketing. Look at automotive: it has been a long time since performance figures were the main way to market a car; cars are now launched with technology features as a prominent selling point. Today's premium cycling customers are also buying cars, and the cars they are buying are covered in safety technology, so they are already predisposed to a safety message. Two-wheeled brands that start focusing on safety will find a cohort of customers ready to go.
Luna runs real-time vision models on cost- and power-constrained edge hardware rather than automotive-grade silicon. From both an engineering and operational standpoint, is there a hard physical limit to how far model pruning and optimization can go before reliability degrades? How do you define the minimum acceptable boundary for safety in such a high-risk, high-dynamics environment?
Our philosophy is to do as much as we can in a form factor that can go on two wheels, so we work across several techniques to make our models as small as possible: quantization, pruning, and distillation. To answer your question directly: yes, there are limits to how far these techniques can go, and that boundary is exactly where we aim to operate. Our team has a very strong academic background; we read every research paper published in this area and publish papers ourselves. We experiment with these cutting-edge techniques and then work back from there to see what can improve our performance and ultimately make a better product.
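To make two of those compression techniques concrete, here is a minimal, self-contained sketch (our illustration, not Luna's actual pipeline) of symmetric int8 quantization and magnitude-based pruning applied to a toy weight vector; all names and thresholds are hypothetical.

```python
def quantize_int8(weights):
    """Symmetric int8 quantization: one scale maps floats into [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    if scale == 0:
        scale = 1.0  # all-zero weights: any scale works
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 codes."""
    return [qi * scale for qi in q]

def prune_by_magnitude(weights, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of weights."""
    n_prune = int(len(weights) * sparsity)
    if n_prune == 0:
        return list(weights)
    threshold = sorted(abs(w) for w in weights)[n_prune - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

weights = [0.02, -0.81, 0.45, -0.03, 0.67, 0.10]
pruned = prune_by_magnitude(weights, sparsity=0.5)  # zeroes 0.02, -0.03, 0.10
q, s = quantize_int8(pruned)
restored = dequantize(q, s)  # close to pruned, within half a quantization step
```

Real deployments apply these per-layer with fine-tuning to recover accuracy; the point here is only that both steps trade precision for size, which is why a hard floor eventually appears.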
You often cite fear as the single biggest barrier to cycling adoption, with roughly 60 percent of people avoiding riding due to mixed traffic. Behavioral research suggests that more alerts can sometimes increase anxiety rather than reduce it. How does Luna balance providing timely warnings without overwhelming riders, and what evidence do you see that perception-based assistance actually changes rider behavior over time?
We design alerts as nudges, not noise. The goal isn't to narrate traffic but to signal meaningful risk escalation.
How we manage that:
- Not every approach is a hazard; the system should differentiate “presence” from “risk.”
- Minimal cognitive load; riders should interpret alerts instantly.
- Visual and audio alerts – from our testing we have found that some riders prefer our livestream view with visual alerts, while others prefer audio alone. We provide both to let the rider choose. As part of our roadmap we are also considering additional warning customisations.
We have also researched cognitive load directly, running customer studies under different levels of data load to understand the tipping point between alert and noise. It was interesting to learn that about 50% of people preferred the visual alerts while the rest preferred audio, which is ultimately why we provide both. In the future we will also consider enabling certain filtering options in the app. That nuanced, rather than purely binary, experience is a huge benefit that vision AI offers.
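One simple way to picture the "presence vs. risk" distinction described above (a hypothetical sketch, not Luna's actual alert logic) is a time-to-collision gate: a vehicle that is merely behind the rider produces no alert, while one closing fast enough to cross a TTC threshold does.

```python
def should_alert(distance_m, closing_speed_ms, ttc_threshold_s=3.0):
    """Presence alone is not risk: alert only when the follower is closing
    and projected time-to-collision drops below the threshold."""
    if closing_speed_ms <= 0:  # holding distance or falling back
        return False
    ttc_s = distance_m / closing_speed_ms
    return ttc_s < ttc_threshold_s

should_alert(20.0, 2.0)   # 10 s away: presence, no alert
should_alert(10.0, 5.0)   # 2 s away: risk, alert
```

The 3-second threshold is an assumed figure for illustration; a production system would tune it per scenario and combine it with other cues.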
Many products market themselves as “smart safety” while offering little more than basic radar alerts or passive recording. In your view, what technically separates a true Advanced Rider Assistance System from these solutions? At a systems level, what capabilities must exist before you are willing to call something ARAS rather than a sensor accessory?
Looking at the underlying technologies, both radar and vision can provide ARAS. In our view, intelligence is the underlying differentiator: a sensor accessory detects a thing, while ARAS interprets a situation. In our case, Luna Oculus would be classified as passive rather than active ARAS – the latter requires not just decision-making on threat levels but also actuation: deceleration, braking, and so on.
ARAS provides:
- Perception: detection and classification in context (road, lanes, actors, relative motion).
- Risk modelling: not “is there something behind me?” but actually “is this interaction becoming unsafe?” If you consider cycling in an urban setting, there is nearly always something behind, so it is really important to understand what is a risk and what isn’t.
- Alerting: timed, understandable, and non-distracting.
- Evidence & traceability: the ability to review events, not just experience them.
- Continuous improvement loop: the system becomes better across conditions and geographies over time.
If a product only provides proximity beeps or passive recording, it may be useful, but in our view it is not an assistance system. ARAS is about contextual assistance, not simple raw sensing.
Luna’s approach replaces expensive sensor fusion with semantic understanding from vision alone. Without relying on LiDAR or automotive radar stacks, how does your system reason about complex urban interactions such as blind spots, close passes, or multi-vehicle conflict at intersections, and where does vision still struggle compared to human intuition?
We’ve built around a proprietary 3D monocular vision model that infers depth, motion, and risk from a single camera stream. While runtime perception is vision-only, the model itself is trained using a multi-sensor fusion dataset architecture and vehicle telemetry to generate high-quality ground truth during development.
This allows us to deploy a low-power, single-camera system on bikes while retaining a deep understanding of relative distance, closing speed, and trajectory. At inference time, the system reasons about safety through motion and context rather than static object detection. Close passes, for example, are identified by analysing lateral distance, relative velocity, and road positioning over time.
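As a hypothetical illustration of that kind of reasoning (the names and thresholds below are ours, not Luna's), a close pass can be flagged from a tracked vehicle's lateral gap and longitudinal position over time:

```python
from dataclasses import dataclass

@dataclass
class TrackPoint:
    t: float               # timestamp, seconds
    lateral_m: float       # lateral gap to the rider, metres
    longitudinal_m: float  # distance behind rider (+) / ahead (-), metres

def detect_close_pass(track, lateral_threshold_m=1.5, alongside_window_m=2.0):
    """Flag the moment a vehicle that was closing from behind draws level
    with the rider while the lateral gap is under the threshold
    (1.5 m is a common legal minimum passing distance)."""
    for prev, cur in zip(track, track[1:]):
        dt = cur.t - prev.t
        if dt <= 0:
            continue
        closing = (prev.longitudinal_m - cur.longitudinal_m) / dt  # m/s toward rider
        alongside = abs(cur.longitudinal_m) < alongside_window_m
        if closing > 0 and alongside and cur.lateral_m < lateral_threshold_m:
            return cur.t
    return None

# A car closing from 12 m behind and passing within 0.9 m:
track = [TrackPoint(0.0, 1.2, 12.0),
         TrackPoint(1.0, 1.1, 5.0),
         TrackPoint(2.0, 0.9, 0.5)]
detect_close_pass(track)  # flags the pass at t = 2.0 s
```

The key point the interview makes is visible even in this toy version: the decision depends on motion over time (closing speed, position relative to the rider), not on a single-frame detection.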
In short, we replace expensive sensor fusion at the vehicle level with intelligence learned through sensor fusion at the data level, making it more practical, scalable, and affordable for two-wheelers.
Your early deployments in markets like India exposed Luna’s models to some of the most complex traffic environments in the world. What did those conditions reveal about the limits of computer vision in real-world micromobility, and how did operating there shape your assumptions about model robustness elsewhere?
Operating in highly complex markets such as India made one thing immediately clear: ARAS or even ADAS cannot be universal by default. Safety systems trained on orderly, lane-disciplined Western infrastructure break down when deployed into environments with informal road usage, mixed vehicle classes, weak lane markings, and highly variable rider behaviour.
In these markets, there is no “clean” infrastructure for models to rely on. Roads are shared by cars, buses, trucks, scooters, bicycles, pedestrians, animals, and informal vehicles – often simultaneously and without clear priority. This diversity forces a system to understand how vehicles actually move, not how they are expected to move on paper. An effective ARAS must be trained and tuned to local riding patterns, vehicle mix, traffic density, and historical incident types, otherwise it risks generating irrelevant alerts. Whilst conditions in India are sometimes very complicated, one fact clearly remains: even in very busy and congested traffic scenes it is nearly always possible to reduce risk and take a better road position. Even if you are “boxed in” to a bad road position and cannot move, we can at least alert you and heighten your concentration.
These lessons directly shaped our approach. We design ARAS as locally adaptive systems, capable of being calibrated to regional conditions rather than exported as static global products. Safety, in practice, is contextual and any system that ignores that reality will struggle to deliver meaningful protection at scale.
One additional point is the importance of riding style. On two wheels, removing your bad habits matters far more than on four: what would be a small accident in a four-wheeled vehicle could mean a fatality on two wheels. So we believe helping riders identify bad riding habits is another huge lever for reducing their risk. We look at the events that caused alerts and highlight them to the rider. Ideally, the number of alerts a rider receives falls over time as they learn to avoid very high-risk situations and become a more advanced rider.
As cameras move into public space, visual AI is often perceived as surveillance by default. Given your background in communications and GDPR, how did you design Luna’s system to draw a clear line between safety verification and monitoring? In practice, how do you ensure that features like anonymised blackspot mapping do not erode user data ownership over time?
Our system serves our end users: it exists to help them feel empowered and to reduce risk in real time, not to observe, profile, or judge behaviour.
Cameras are now essentially everywhere, whether on buildings, on public transport, or in the car and bike dashcam market. Millions of people already use camera technology.
We take an extremely strong stance on data privacy at both an ethical and a technical level: privacy is enforced by default. We want to build our brand around this position – a kind of “Cyclist Data Stewardship”. So, in our world, data ownership remains with the rider. Riders will be explicitly and very clearly asked to opt in if they wish to contribute any anonymised safety insights, i.e. blackspot data. If consent is not given, that data never leaves the rider’s control. There is no background pooling or silent escalation of use over time.
When riders do choose to share, the purpose is narrowly defined: to highlight repeated risk locations and support evidence-based conversations with cities around street-level safety improvements. With enough data density in a given location we will put that data to work on the riders’ behalf by reaching out to cities to highlight dangerous patterns – but we will not commercialise that data. It would go completely against our philosophy to sell rider-level data to cities or any third party. We believe there is huge goodwill in the cycling community to collectively push for safer infrastructure, and we want to work with that, not against it.
For us, this is the kind of brand we want to build and we believe there is enormous benefit to us also in terms of building people’s trust in us as a company. Our model is voluntary, transparent, and rider-led by design.
Market-to-market regulation is also something we have considered carefully, and we are ready to adapt in order to compete effectively.
Luna’s devices effectively turn bikes into distributed sensors that can surface dangerous infrastructure failures. Do you see safety data becoming a standalone asset for Luna, and if so, how do you prevent those insights from being misused by insurers, cities, or other actors in ways that penalize riders or neighborhoods rather than improving conditions?
We think the anonymised data we collect (with explicit rider consent) will be hugely valuable for pushing cities to make street-level changes. We can show riders “hotspots” where many alerts have been detected. That is useful to the rider, but it will also be really useful to cities as they come to understand specific weaknesses in their cycling infrastructure.
Most cities still operate in a reality where protected bike lanes remain the exception. You have suggested that AI can bridge the safety gap until infrastructure catches up. From an operational and legal standpoint, where does responsibility sit if a system designed to compensate for infrastructure failure does not intervene in time?
We’re very clear: ARAS is assistance, not replacement. It does not absolve infrastructure obligations, driver responsibility, or rider attentiveness. From an operational and legal standpoint, the system provides risk cues and evidence, but it does not control the environment. Responsibility for safe road design and safe driving remains with the appropriate actors; our responsibility is to be accurate and transparent about limitations. Technology can help bridge gaps, but it cannot be used as a policy excuse to delay infrastructure improvements.
Today Luna assists riders through alerts rather than control. Looking ahead, do you believe rider assistance systems will cross the line from guidance into active intervention, such as braking or power modulation? What technical, legal, or ethical thresholds must be crossed before that shift becomes viable?
If we look at how ADAS developed, then it is plausible over time, but bikes are not cars. Active intervention on a bicycle has stability implications, rider-autonomy concerns, and a much tighter safety envelope.
Before any move toward actuation is viable, several thresholds must be met:
- very high confidence perception in defined scenarios
- clear rider consent and override logic
- standards and liability frameworks suited to two-wheel dynamics
…and, essentially, careful ethics: the system must never take control in any way that introduces new risks.
On motorcycles, light actuation is more plausible; adaptive cruise control, for example, is already a popular feature on motorcycles but is not really relevant in cycling.
If ARAS becomes as commonplace on bikes as seatbelts are in cars, it could reshape how cities think about mobility investment. By 2030, do you expect visual AI to complement infrastructure or risk becoming a substitute for it, and what would that mean for the balance between technology, policy, and public space?
Our dream is that there are enough ARAS-enabled two-wheelers in our cities to give a kind of heat map of safety events. If we can identify where incidents repeatedly occur, that data is firstly really useful to riders, who can avoid those areas or proceed through them with caution. But we also think cities can really benefit from this data, making infrastructure decisions based on the risks detected by cyclists in their own city.
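A minimal sketch of how such a heat map could be built (our illustration under assumed parameters, not Luna's implementation): bin anonymised alert coordinates into grid cells and surface the cells whose alert count crosses a density threshold.

```python
from collections import Counter

def blackspots(alert_points, cell_size_deg=0.001, min_alerts=3):
    """Bin (lat, lon) alert locations into grid cells (~100 m per cell at
    0.001 degrees) and return cells with enough alerts to count as a
    repeated-risk location."""
    cells = Counter(
        (round(lat / cell_size_deg), round(lon / cell_size_deg))
        for lat, lon in alert_points
    )
    return {cell: n for cell, n in cells.items() if n >= min_alerts}

# Three alerts clustered at one junction, one isolated alert elsewhere:
points = [(53.3501, -6.2601), (53.3502, -6.2600),
          (53.3501, -6.2600), (51.5000, -0.1000)]
blackspots(points)  # only the clustered cell survives the threshold
```

Because only cell-level counts leave the device, no individual ride trajectory needs to be shared, which is consistent with the opt-in, anonymised model described in the interview.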
Editor’s Note
This interview highlights the limitations of adapting automotive safety stacks to micromobility and explores the case for vision-first systems designed around two-wheel physics.

