
LiveX AI builds AI agents on GKE infrastructure


Providing a satisfying customer experience is a critical competitive advantage for consumer companies, but delivering it comes with multiple challenges. Even when a company attracts visitors to its website, converting them into customers can be a struggle if the site lacks a personal touch. Call centers are costly to operate, and when call volume is high, customers get frustrated with lengthy wait times. Traditional chatbots are more scalable, but fall far short of a real human-to-human experience.

LiveX AI stands at the cutting edge of generative AI technology, building custom, multimodal AI agents that can see, hear, chat, and show to deliver truly human-like customer experiences. Founded by a team of experienced entrepreneurs and distinguished tech leaders, LiveX AI provides businesses with trusted AI agents that deliver strong customer engagement across a variety of platforms.

LiveX AI's generative AI agents provide an immersive, human-like customer experience, offering helpful, real-time answers to customer questions and concerns in a familiar, conversational manner. To give users a good experience, the agents need to be robust as well as fast. Delivering that experience requires a highly performant and scalable platform capable of eliminating the response lag typical of many AI agents, especially during peak volume periods like Black Friday.

GKE provides a robust foundation for advanced generative AI applications

From the start, Google Cloud and LiveX AI collaborated to jumpstart LiveX AI's development using Google Kubernetes Engine (GKE) and the NVIDIA AI platform. Within three weeks, Google Cloud had helped LiveX AI deliver a custom solution for its client. Additionally, by participating in the Google for Startups Cloud Program and the NVIDIA Inception program, LiveX AI had its cloud costs covered while it got up and running, and received access to additional business and technical resources.

The LiveX AI team wanted a robust solution that would help it ramp up quickly, so it chose GKE, which lets it deploy and operate containerized applications at scale on secure, performant global infrastructure. GKE's platform orchestration capabilities make it easy to train and serve optimized AI workloads on NVIDIA GPUs while taking advantage of GKE's flexible integration with distributed computing and data processing frameworks.
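To illustrate what serving a containerized AI workload on GPUs in GKE can look like, here is a minimal sketch using the official Kubernetes Python client to create a GPU-backed Deployment on an Autopilot cluster. The image name, replica count, resource sizes, and the NVIDIA L4 accelerator choice are illustrative assumptions, not details of LiveX AI's actual workload.

```python
from kubernetes import client, config

# Assumes kubectl credentials for the GKE cluster are already configured locally,
# e.g. via `gcloud container clusters get-credentials`.
config.load_kube_config()

# Hypothetical inference container requesting one NVIDIA GPU.
container = client.V1Container(
    name="inference-server",
    image="us-docker.pkg.dev/example-project/agents/agent-serving:latest",  # placeholder image
    ports=[client.V1ContainerPort(container_port=8080)],
    resources=client.V1ResourceRequirements(
        requests={"cpu": "4", "memory": "16Gi"},
        limits={"cpu": "4", "memory": "16Gi", "nvidia.com/gpu": "1"},
    ),
)

# On Autopilot, the accelerator node selector tells GKE which GPU class to provision.
pod_spec = client.V1PodSpec(
    containers=[container],
    node_selector={"cloud.google.com/gke-accelerator": "nvidia-l4"},
)

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="agent-inference"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "agent-inference"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "agent-inference"}),
            spec=pod_spec,
        ),
    ),
)

# Create the Deployment; Autopilot provisions the matching GPU nodes automatically.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

The same manifest could just as easily be applied with kubectl; the point is that the workload only declares what it needs, and GKE handles node provisioning behind the scenes.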

In particular, GKE Autopilot makes it easy to scale applications to different clients, especially when building multimodal AI agents for brands with high volumes of real-time customer interactions. GKE Autopilot manages the underlying compute of a Kubernetes cluster without LiveX AI needing to configure or monitor it. With the help of GKE Autopilot, LiveX AI has achieved over 50% lower TCO, 25% faster time-to-market, and 66% lower operational cost, helping the team focus on delivering client value rather than configuring or monitoring the system.
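Autopilot provisions and scales the nodes, but the application still declares how its replica count should respond to load. As a minimal sketch under the same assumptions as above, the example below attaches a HorizontalPodAutoscaler to the hypothetical agent-inference Deployment so capacity can follow traffic during peak periods; the utilization target and replica bounds are illustrative, not LiveX AI's settings.

```python
from kubernetes import client, config

config.load_kube_config()

# Scale the hypothetical agent-inference Deployment between 2 and 20 replicas,
# targeting roughly 60% average CPU utilization.
hpa = client.V2HorizontalPodAutoscaler(
    api_version="autoscaling/v2",
    kind="HorizontalPodAutoscaler",
    metadata=client.V1ObjectMeta(name="agent-inference-hpa"),
    spec=client.V2HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V2CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="agent-inference"
        ),
        min_replicas=2,
        max_replicas=20,
        metrics=[
            client.V2MetricSpec(
                type="Resource",
                resource=client.V2ResourceMetricSource(
                    name="cpu",
                    target=client.V2MetricTarget(type="Utilization", average_utilization=60),
                ),
            )
        ],
    ),
)

client.AutoscalingV2Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```

As the autoscaler adds replicas, Autopilot adds or removes the underlying GPU nodes to match, which is what removes the need to configure or monitor the cluster's compute directly.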

