
How Formula E uses Google Cloud generative AI


In today’s fast-paced business landscape, data-driven decision-making has become paramount for organizations aiming to stay ahead of the competition. With the emergence of generative AI (gen AI) technology, it is now possible to give business users a conversational interface to their data, enabling new styles of engagement. With this approach, enterprises can build on traditional BI capabilities with technology that inherently understands the underlying data model and can respond to open-ended requests for the insights it contains. This simplifies data exploration and lets users access real-time insights with ease. A conversational approach also allows broader exploration of all the data than dashboards, which can only surface insights in pre-selected areas.

Google Cloud, Formula E, and McKinsey QuantumBlack worked together to create such an experience. By combining race data and car telemetry data with gen AI from Google Cloud, Formula E can give its drivers a conversational interface that answers specific questions spanning a very large space of possible questions. For example, a question such as “What was the exit speed from turn 1?” can be asked and answered directly through a simple text-based interface, rather than requiring manual analysis of the data corpus to identify important features, followed by building and maintaining a BI dashboard to deliver that information.

Modern race cars create huge amounts of telemetry data from their many sensors. Understanding exactly how a vehicle behaves at certain points on a race track is an important early step toward optimally tuning the vehicle and achieving faster lap times. The closing week of the 2023 Formula E championship included a successful attempt at the Guinness World Record for indoor land speed with the next-generation GENBETA race car, as well as the final two championship races of the season.

During this week of events, Formula E wanted to provide information for two very different personas:

  1. Drivers who wanted to understand how their vehicles had behaved in the land speed record attempt to help them set a faster time
  2. Fans who wanted to learn more about Formula E, current and previous championship results and race data, and the next-generation GENBETA race car

Powered by text- and chat-optimized PaLM large language models (LLMs) and other AI services in Google Cloud’s Vertex AI portfolio, we built a gen AI chat interface that catered to both use cases and personas, and that was available to drivers and fans throughout the events.
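
As a rough illustration of the kind of call involved, the sketch below uses the Vertex AI SDK for Python to send a question to a chat-optimized PaLM model (chat-bison). The project ID, context prompt, and question are placeholders, and the production system additionally grounds its answers in Formula E’s race and telemetry data, which is not shown here.

```python
# Minimal sketch: asking a chat-optimized PaLM model a question via Vertex AI.
# The project ID, context, and question below are illustrative placeholders.
import vertexai
from vertexai.language_models import ChatModel

vertexai.init(project="your-project-id", location="us-central1")

chat_model = ChatModel.from_pretrained("chat-bison")

# The real system also grounds the conversation in race and telemetry data;
# here the context is just a plain instruction.
chat = chat_model.start_chat(
    context="You answer questions about Formula E races and car telemetry."
)

response = chat.send_message("What was the exit speed from turn 1?")
print(response.text)
```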

Design overview

At a high level, the system consists of a custom backend service that acts as an agent for handling the questions, and a frontend UI built in collaboration with McKinsey QuantumBlack. These two services are containerized and deployed as Kubernetes Deployments on Google Kubernetes Engine via a CI/CD pipeline built with Terraform, Cloud Build, and Anthos Config Management (Config Sync). The frontend is secured with Identity-Aware Proxy for authorization and also serves static assets through a CDN built on Cloud CDN and Cloud Storage.
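
The backend agent itself is not published; as a hedged sketch of its overall shape, the containerized service could expose a single HTTP endpoint that forwards each question to the chat model. The framework (FastAPI), route name, and grounding context below are assumptions for illustration, not the actual implementation.

```python
# Hypothetical sketch of the backend "agent" as a containerized HTTP service.
# FastAPI, the /ask route, and the context string are assumptions for illustration.
from fastapi import FastAPI
from pydantic import BaseModel

import vertexai
from vertexai.language_models import ChatModel

vertexai.init(project="your-project-id", location="us-central1")  # placeholder project
chat_model = ChatModel.from_pretrained("chat-bison")

app = FastAPI()


class Question(BaseModel):
    text: str


@app.post("/ask")
def ask(question: Question) -> dict:
    # Each request starts a fresh chat session; a production agent would keep
    # per-user history and ground the prompt in race and telemetry data.
    chat = chat_model.start_chat(
        context="You answer questions about Formula E races and car telemetry."
    )
    response = chat.send_message(question.text)
    return {"answer": response.text}
```

In a setup like the one described, such a service could be run locally with uvicorn for testing, built into a container image by Cloud Build, and rolled out to GKE through Config Sync.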

