
Apollo 24|7 uses MedLM and RAG for healthcare innovation


Apollo 24|7, the largest multi-channel digital healthcare platform in India, is working towards building a Clinical Intelligence Engine (CIE) designed to support clinical decisions and assist clinicians with primary care, condition management, home care, and wellness. One of the main challenges they faced was designing an expert clinical assistant with a deep understanding of Apollo’s clinical knowledge base.

To build such a robust system, Apollo 24|7’s team partnered with Google Cloud on several components, including a Clinical Knowledge Graph, a clinical entity extractor, and a timestamp relationship extractor.

In this blog, we look at how the clinical assistance system came into existence. The system is designed to assist clinicians and enhance their experience of the CIE platform, which could ultimately lead to improved clinical decision-making.

Let’s take a deeper look at the solution that Google Cloud and Apollo 24|7 built together.

Model identification

The first step in building the solution was to identify the right model. After carefully evaluating a host of models, including several open source models, the team decided to implement the solution using MedLM.

MedLM is a family of medically-tuned foundation models designed to provide high-quality answers to medical questions. MedLM was built on Med-PaLM 2 and is fine-tuned for healthcare, making it an excellent contender for building a clinical QA model around.

The next step was to enhance the model architecture to make it align with Apollo’s knowledge base.

The solution pilot

The initial approach we experimented with involved forming a prompt consisting of a clinical “Report” and the user’s “Question.” This prompt would be sent directly to MedLM for clinical question answering. This approach yielded good results; however, it did not utilize any of Apollo’s vast clinical knowledge base. The knowledge base consisted of de-identified clinical discharge notes that could potentially make the responses more direct and in line with how similar patients had been treated in the past.
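The pilot’s prompt construction can be sketched as a simple template that concatenates the report and the question. The exact wording of Apollo’s prompt is not public, so the template below is purely illustrative:

```python
# Illustrative sketch of the pilot's prompt format: a clinical "Report"
# plus the user's "Question", combined into one prompt string for MedLM.
# The section labels and wording are assumptions, not Apollo's actual prompt.

def build_prompt(report: str, question: str) -> str:
    """Combine a clinical report and a user question into a single QA prompt."""
    return (
        "Report:\n"
        f"{report}\n\n"
        "Question:\n"
        f"{question}\n\n"
        "Answer:"
    )

prompt = build_prompt(
    report="Patient presented with fever and productive cough for 3 days.",
    question="What differential diagnoses should be considered?",
)
```

The resulting string would then be sent as the input to the model’s text-generation endpoint.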

To utilize this knowledge base, we experimented with adding context from it to the prompt. We quickly realized, however, that this approach would exceed the input token limit of the model.

The logical next step was to chunk the hospital’s data into smaller shards, but this approach came with its own challenges. Directly chunking the data into smaller shards lost the overall context of the patient. In other words, individual shards would only contain certain siloed parts of the clinical note, without preserving the overall patient journey, including their treatment, medications, family history, and so on.
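A minimal fixed-size chunker with overlap illustrates the trade-off described above: overlapping windows soften the boundaries between shards, but each shard still sees only a slice of the note, so the patient-level context is not preserved. The chunk sizes here are arbitrary, not the values used in the actual system:

```python
# A minimal sliding-window chunker, a common way to shard long documents.
# Even with overlap, each shard only covers a slice of the clinical note,
# which is the context-loss problem described in the text above.

def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows of at most chunk_size."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

note = "History: ... Treatment: ... Medications: ... Family history: ... " * 10
shards = chunk_text(note, chunk_size=120, overlap=30)
```

In practice, chunking is usually done on token or sentence boundaries rather than raw characters, but the structural limitation is the same.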

RAG + MedLM

To make the model more robust and better aligned with Apollo’s knowledge base, we proposed an approach that applies Retrieval-Augmented Generation (RAG) to Apollo’s de-identified clinical knowledge base.

RAG is an AI framework that enhances LLMs by integrating external knowledge sources; in our case, the knowledge base consisted of the preprocessed, de-identified clinical discharge notes obtained from Apollo Hospitals.
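The retrieve-then-augment flow at the heart of RAG can be sketched with a toy retriever. Production systems (including the one described here) would use learned embeddings and a vector store rather than word-overlap scoring, and the note texts below are invented for the example:

```python
# A toy RAG retrieval step over a small corpus of (already de-identified)
# discharge notes. Cosine similarity over bag-of-words counts stands in for
# a real embedding model; the notes and query are invented for illustration.
from collections import Counter
import math

def score(query: str, doc: str) -> float:
    """Cosine similarity between bag-of-words vectors of query and doc."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    dot = sum(q[w] * d[w] for w in q)
    norm = (math.sqrt(sum(v * v for v in q.values()))
            * math.sqrt(sum(v * v for v in d.values())))
    return dot / norm if norm else 0.0

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the top-k notes most similar to the query."""
    return sorted(corpus, key=lambda doc: score(query, doc), reverse=True)[:k]

notes = [
    "Discharge note: patient treated for community-acquired pneumonia with antibiotics.",
    "Discharge note: elective knee replacement, routine post-operative recovery.",
    "Discharge note: pneumonia follow-up, chest X-ray clear, antibiotics completed.",
]
context = retrieve("how was pneumonia treated", notes)
augmented = "Context:\n" + "\n".join(context) + "\n\nQuestion: how was pneumonia treated"
```

The retrieved notes are prepended to the prompt as context, so the model grounds its answer in how similar cases were handled, without the whole knowledge base having to fit in the input window.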

