In this blog post, I focus on Grounding with Google Search, which allows LLMs to ground their responses in trusted Google Search world knowledge and public facts. This helps to reduce hallucinations and gives LLMs access to more up-to-date information.
Without Grounding
First, let’s look at some examples without grounding. Let’s ask a text-based model on Vertex AI some questions that require recent data.
I’m an Arsenal fan and curious if the model knows about Arsenal’s latest game result.
Prompt: What was Arsenal FC’s latest game result?
Response: I do not have access to real-time information, so I cannot provide the most recent game results for Arsenal. To get the most up-to-date information, I recommend checking official sports websites or news sources.
In London, where I live, we talk about the weather all the time. Can the model answer questions about the weather?
Prompt: What was the weather like in London yesterday?
Response: I do not have access to real-time information, including weather data. To get the most up-to-date weather information, I recommend checking a reliable weather source for London. You can find weather forecasts and historical data on websites and apps like BBC Weather, AccuWeather, or The Weather Channel.
Not surprisingly, the model cannot answer these questions, as they require recent data.
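If you'd rather send these prompts from code than from Vertex AI Studio, here's a minimal sketch using the Vertex AI Python SDK with a text model such as text-bison. The project ID, region, and model name are placeholders for this sketch; substitute your own values.

```python
import vertexai
from vertexai.language_models import TextGenerationModel

# Placeholders: replace with your own project ID and region.
vertexai.init(project="your-project-id", location="us-central1")

# Load a text model; "text-bison" is assumed here for illustration.
model = TextGenerationModel.from_pretrained("text-bison")

# Without grounding, the model only has its training data to draw on.
response = model.predict("What was Arsenal FC's latest game result?")
print(response.text)
```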
Next, let’s enable grounding and see how the responses change.
Enable Grounding with Google Search
To enable Grounding with Google Search, go to Vertex AI Studio in the Google Cloud console, then create or select a prompt. On the left, in the advanced section, you should see an Enable grounding toggle:
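If you prefer the SDK over the console toggle, grounding can also be requested programmatically. The sketch below assumes the preview GroundingSource.WebSearch helper of the Vertex AI Python SDK; since grounding APIs have been in preview, check the current SDK reference before relying on it.

```python
import vertexai
from vertexai.preview.language_models import GroundingSource, TextGenerationModel

# Placeholders: replace with your own project ID and region.
vertexai.init(project="your-project-id", location="us-central1")

model = TextGenerationModel.from_pretrained("text-bison")

# Ask the model to ground its answer on Google Search results.
response = model.predict(
    "What was Arsenal FC's latest game result?",
    grounding_source=GroundingSource.WebSearch(),
)
print(response.text)
```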