In our fast-paced digital world, grasping user context is crucial for platforms like KARTE by PLAID, Inc., which offers real-time analytics on user behavior across websites and applications to enhance the customer experience. Recently, PLAID and Google Cloud embarked on a project to leverage generative AI, including large language models (LLMs) and embedding techniques on Vertex AI, to improve KARTE’s customer support. Embeddings help KARTE better understand user intent, so its customer support feature can provide relevant recommendations that answer customer queries faster and boost customer satisfaction.
Capturing user intent with pre-trained embeddings
Our project aims to train a model that recommends appropriate help content by leveraging embeddings on Vertex AI. The recommendation system operates on two key data types, Query and Corpus: it captures user intent from the Query and matches it against a Corpus of candidate recommendations.
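To make the Query/Corpus idea concrete, here is a minimal sketch (not PLAID’s actual pipeline) of matching a query embedding against corpus embeddings by cosine similarity. In production, the vectors would come from an embedding model on Vertex AI; the tiny hand-made vectors, article titles, and function names below are purely illustrative.

```python
# Illustrative sketch: rank help-content ("Corpus") embeddings by their
# cosine similarity to a user-intent ("Query") embedding.
# In practice these vectors would be produced by an embedding model
# (e.g., one served on Vertex AI); here they are toy values.
from math import sqrt

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def recommend(query_vec, corpus, top_k=2):
    """Return the top_k (title, score) corpus items most similar to the query."""
    scored = [(title, cosine_similarity(query_vec, vec)) for title, vec in corpus]
    scored.sort(key=lambda item: item[1], reverse=True)
    return scored[:top_k]

# Toy corpus: help articles paired with pretend embedding vectors.
corpus = [
    ("How to set up event tracking", [0.9, 0.1, 0.0]),
    ("Billing and invoices",         [0.0, 0.2, 0.9]),
    ("Debugging missing web events", [0.8, 0.3, 0.1]),
]

# A query vector that would, in reality, be derived from web event logs.
query = [0.85, 0.2, 0.05]

for title, score in recommend(query, corpus):
    print(f"{score:.3f}  {title}")
```

The same nearest-neighbor lookup scales to large corpora with an approximate vector index rather than the exhaustive scan shown here.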
In practice, when users engage with web service content via KARTE and seek assistance on the KARTE help page, they often have trouble finding the right content. Our model interprets users’ web event logs as queries and matches them with help content that addresses their challenges in context. To build our recommendations, we used KARTE web logs, content from the KARTE help pages, and KARTE’s management screen. We designed the embedding-based recommendations using the following flow: