The latest Gemini models, grounding, and safety support come to BigQuery


The proliferation of digital devices and platforms such as social media, mobile devices, and IoT sensors has led to exponential growth in unstructured data: images, audio files, videos, and documents. To help organizations unlock valuable insights from this data, BigQuery, Google’s AI-ready cloud data platform, is integrated with Vertex AI, Google’s fully managed AI development platform for building gen AI apps. This integration lets you apply a variety of generative AI models (e.g., Gemini) and AI services (including Document AI and Translation AI) to unstructured data in BigQuery object tables.

BigQuery already lets you analyze your data using a variety of powerful large language models (LLMs) hosted in Vertex AI, including Gemini 1.0 Pro and Gemini 1.0 Pro Vision. These models excel at tasks like text summarization and sentiment analysis, often requiring only prompt engineering.

For scenarios where prompt engineering alone isn’t sufficient, BigQuery also lets you fine-tune the text-bison model using Low-Rank Adaptation (LoRA). This additional customization is valuable when the desired model behavior is difficult to define concisely in a prompt, or when prompts don’t consistently produce the expected results. Fine-tuning enables the model to learn specific response styles, adopt new behaviors (such as answering as a specific persona), and stay up to date with the latest information.

Recently, we added support for the latest Gemini models to BigQuery, as well as safety enhancements and grounding support: 

  1. The ML.GENERATE_TEXT SQL function now supports the Gemini 1.5 Pro and Gemini 1.5 Flash foundation models. BigQuery users could already use Gemini 1.0 Pro for various natural language processing (NLP) tasks on text (e.g., advanced text generation and sentiment analysis), and Gemini 1.0 Pro Vision for visual captioning, visual Q&A, and other vision-language tasks on images and videos. Gemini 1.5, Google’s next-generation multimodal foundation model, enables users not only to perform the aforementioned NLP and vision tasks with enhanced quality, but also to analyze audio and PDF files (e.g., audio transcription and PDF summarization), all from a single model.
  2. We’re enhancing the ML.GENERATE_TEXT SQL function to support grounding with Google Search and customizable safety settings for responsible AI (RAI) responses. Grounding allows the model to incorporate additional information from the internet to generate more accurate and factual responses, while safety settings let users define blocking thresholds for different harm categories (hate speech, dangerous content, harassment, etc.) so the model filters out content that violates those thresholds (see the first sketch after this list).
  3. We’re extending the CREATE MODEL DDL statement and the ML.EVALUATE SQL function to enable Gemini 1.0 model tuning and evaluation. BigQuery users can already tune and evaluate text-bison PaLM models; now they can also fine-tune and evaluate Gemini 1.0 Pro models, further tailoring their AI capabilities (see the second sketch after this list).
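
To illustrate the second item, here is a minimal sketch of a grounded call with custom safety settings. The model name and prompt are placeholders, and the ground_with_google_search and safety_settings options follow the ML.GENERATE_TEXT syntax for Gemini models; check the BigQuery documentation for the exact categories and thresholds supported in your region.

SELECT prompt, ml_generate_text_llm_result
FROM ML.GENERATE_TEXT(
  MODEL `mydataset.gemini_model`,  -- placeholder remote model over a Gemini endpoint
  (SELECT 'Who won the most recent Formula 1 race?' AS prompt),
  STRUCT(
    TRUE AS flatten_json_output,
    -- Ground the response in Google Search results for fresher, more factual answers.
    TRUE AS ground_with_google_search,
    -- Block responses that exceed these thresholds in the listed harm categories.
    [STRUCT('HARM_CATEGORY_HATE_SPEECH' AS category,
            'BLOCK_LOW_AND_ABOVE' AS threshold),
     STRUCT('HARM_CATEGORY_DANGEROUS_CONTENT' AS category,
            'BLOCK_MEDIUM_AND_ABOVE' AS threshold)] AS safety_settings));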
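
And to illustrate the third item, a sketch of tuning and then evaluating a Gemini 1.0 Pro model might look like the following. The connection, dataset, and column names are placeholders; the endpoint version string is illustrative, and the tuning options shown (prompt_col, input_label_cols, max_iterations) are assumptions carried over from the existing text-bison tuning syntax.

-- Fine-tune a Gemini 1.0 Pro model on prompt/label pairs stored in BigQuery.
CREATE OR REPLACE MODEL `mydataset.tuned_gemini`
  REMOTE WITH CONNECTION `us.my_connection`
  OPTIONS (
    endpoint = 'gemini-1.0-pro-002',
    max_iterations = 300,
    prompt_col = 'prompt',
    input_label_cols = ['label'])
AS SELECT prompt, label FROM `mydataset.tuning_data`;

-- Evaluate the tuned model against a held-out table of prompts and labels.
SELECT *
FROM ML.EVALUATE(
  MODEL `mydataset.tuned_gemini`,
  TABLE `mydataset.eval_data`,
  STRUCT('text_generation' AS task_type));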

In the following sections, we take a deeper look at these new features.

BigQuery ML and Gemini 1.5

To use Gemini 1.5 Pro in BigQuery, first create a remote model that represents a hosted Vertex AI Gemini endpoint; this step typically takes just a few seconds. Once the model is created, pass it to ML.GENERATE_TEXT to generate text directly from data in your BigQuery tables.
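
A minimal sketch of these two steps is shown below. The connection, dataset, and table names are placeholders, and the endpoint string should match the Gemini 1.5 Pro model identifier that Vertex AI exposes in your region.

-- Create a remote model over a hosted Vertex AI Gemini 1.5 Pro endpoint.
-- `us.my_connection` and `mydataset` are placeholder names.
CREATE OR REPLACE MODEL `mydataset.gemini_15_pro`
  REMOTE WITH CONNECTION `us.my_connection`
  OPTIONS (endpoint = 'gemini-1.5-pro');

-- Generate text by combining the model with rows from a BigQuery table.
SELECT prompt, ml_generate_text_llm_result
FROM ML.GENERATE_TEXT(
  MODEL `mydataset.gemini_15_pro`,
  (SELECT CONCAT('Summarize this customer review: ', review_text) AS prompt
   FROM `mydataset.product_reviews`
   LIMIT 10),
  STRUCT(0.2 AS temperature, TRUE AS flatten_json_output));

With flatten_json_output set to TRUE, the generated text is returned in the ml_generate_text_llm_result column alongside the original prompt.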

