
AI-Driven Insights: Leveraging LangChain and Pinecone with GPT-4 | by Elen Gabrielyan | Jun, 2023


Empowering Next-Gen Product Managers, Vol. 1

Working effectively with qualitative data is one of the most important skills a product manager can have: collecting data, analyzing it, and communicating it efficiently by coming up with actionable and valuable insights.

You can get qualitative data from many places: user interviews, competitor feedback, or comments from people using your product. Depending on what you're trying to achieve, you might analyze this data right away or save it for later. Sometimes, you might only need a few user interviews to confirm a hypothesis. Other times, you might need feedback from a thousand users to spot trends or test ideas. So, your approach to analyzing this data can change depending on the situation.

With Large Language Models like GPT-4, and AI tools such as LangChain and Pinecone, we can handle various situations and lots of data more effectively. In this guide, I'll share my experience with these tools. My goal is to show product managers and anyone else who works with qualitative data how to use these AI tools to get more useful insights from their data.

What will you find in this guide-style article?

  1. I'll start by introducing you to these AI tools, along with some current limitations of Large Language Models (LLMs).
  2. I'll discuss different ways you can make the most of these tools for real-life use cases.
  3. Using user feedback analysis as an example, I'll provide code snippets and examples to show you how these tools work in practice.

Please note: To use tools like GPT-4, LangChain, and Pinecone, you need to be comfortable with data and have some basic coding skills. It's also important to understand your customers and be able to turn data insights into real actions. Knowledge of AI and machine learning is a plus, but not a must.

Assuming you're already familiar with GPT-4, it's important to know some concepts as we discuss tools that work with LLMs. One major challenge with current LLMs like GPT-4 is their "context window", which is how much information they can process and remember at one time.

Currently, there are two versions of GPT-4. The standard one has an 8k token context, while the extended version has a 32k context window. To give you an idea, 32k tokens is about 24,000 words, which is roughly equal to 48 pages of text. But keep in mind, the 32k version isn't available to everyone, even if you have access to GPT-4.

Also, OpenAI recently announced a new ChatGPT model, called gpt-3.5-turbo-16k, which offers 4 times the context length of gpt-3.5-turbo. When working with insight analysis, I'd suggest working with GPT-4, as it has better reasoning than GPT-3.5. But you can play around and see what works for your use case.
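If you want to check how large your text is in tokens before sending it to a model, here is a minimal sketch using OpenAI's tiktoken library (not part of this article's pipeline, just an illustration; the sample text is made up):

import tiktoken

# Get the tokenizer that GPT-4 uses
encoding = tiktoken.encoding_for_model("gpt-4")

text = "Qualitative feedback from user interviews and product channels."
tokens = encoding.encode(text)
print(f"{len(tokens)} tokens")  # compare this against the model's context window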

Why am I mentioning this?

When dealing with insight analysis, a big challenge comes up when you have lots of data, or you're interested in more than just one prompt. Let's say you have one user interview. You want to dig deeper and get more insights from it, using GPT-4. In this case, you can simply take the interview transcript and give it to ChatGPT, choosing GPT-4. You might need to split the text once, but that's it. You don't need any other fancy tools for this. So you'll really need these fancy new tools when working with lots of qualitative data. Let's understand what these tools are, then we will move on to some specific use cases and examples.

So what’s LangChain?

LangChain is a framework that revolves around LLMs and offers various functionalities like chatbots, Generative Question-Answering (GQA), and summarization. Its versatility lies in the ability to connect different components together, including prompt templates, LLMs, agents, and memory systems.

Prompt templates are pre-made prompts for different situations, while LLMs process and generate responses. Agents help make decisions based on the LLM's output, and memory systems store information for later use.
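As a quick illustration of the first component, here is a minimal prompt template sketch with LangChain (the template wording and the sample feedback are my own hypothetical examples):

from langchain.prompts import PromptTemplate

# A reusable prompt: the {feedback} placeholder is filled in at run time
template = "Summarize the following user feedback in one sentence:\n\n{feedback}"
prompt = PromptTemplate(input_variables=["feedback"], template=template)

print(prompt.format(feedback="I love the meeting notes, but the action items are often missing."))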

In this article, I'll share some of its capabilities in my examples.

High-level Overview of LangChain Modules

What’s Pinecone?

Pinecone.ai is a powerful tool designed to simplify the management of high-dimensional data representations known as vectors.

Vectors are particularly useful when dealing with lots of text data, like when you're trying to extract information from it. Consider a situation where you're analyzing feedback and you want to find out various details about a product. This kind of deep insight gathering wouldn't be possible with just keyword searches like "great", "improve", or "I suggest", as you might miss out on a lot of context.

Now, I won't delve into the technical aspects of text vectorization (which could be word-based, sentence-based, etc.). The key thing you need to understand is that words get converted into numbers through machine learning models, and these numbers are then stored in arrays.

Let's take an example:

The word "seafood" might be translated into a sequence of numbers like this: [1.2, -0.2, 7.0, 19.9, 3.1, …, 10.2].

When I search for another word, that word also gets transformed into a number sequence (or vector). If our machine learning model is doing its job properly, words that have a similar context to "seafood" should have a number sequence that's close to the sequence for "seafood". Here's an example:

"shrimp" might be translated as: [1.1, -0.3, 7.1, 19.8, 3.0, …, 10.5], where the numbers are close to the numbers "seafood" has.
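To make "close" concrete: similarity between vectors is usually measured with something like cosine similarity. Here is a toy sketch using the made-up numbers above, trimmed to six dimensions (real embeddings have hundreds or thousands):

import numpy as np

def cosine_similarity(a, b):
    # 1.0 means the vectors point in the same direction; values near 0 mean unrelated
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

seafood = np.array([1.2, -0.2, 7.0, 19.9, 3.1, 10.2])
shrimp = np.array([1.1, -0.3, 7.1, 19.8, 3.0, 10.5])

print(cosine_similarity(seafood, shrimp))  # very close to 1.0, i.e., very similar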

With Pinecone.ai, you can efficiently store and search these vectors, enabling quick and accurate similarity comparisons.

By using its capabilities, you can organize and index vectors derived from LLMs, opening the door to deeper insights and the discovery of meaningful patterns within extensive datasets.

In simpler terms, Pinecone.ai allows you to store the vector representations of your qualitative data in a convenient way. You can easily search through these vectors and apply LLMs to extract valuable insights from them. It simplifies the process of managing your data and deriving meaningful information from it.

Illustration of Vector Databases
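For a feel of what storing and searching vectors looks like with the bare Pinecone client (a minimal sketch assuming the 2023-era pinecone-client API; the index name, dimensions, and vector values here are all made up), before we use LangChain's wrapper later:

import pinecone

# Hypothetical credentials from the Pinecone console
pinecone.init(api_key="YOUR_API_KEY", environment="YOUR_ENVIRONMENT")

# A toy index for 4-dimensional vectors; real embedding indexes use e.g. 1536 dimensions
pinecone.create_index("toy-index", dimension=4, metric="cosine")
index = pinecone.Index("toy-index")

# Store two vectors under string IDs
index.upsert(vectors=[
    ("seafood", [1.2, -0.2, 7.0, 19.9]),
    ("shrimp", [1.1, -0.3, 7.1, 19.8]),
])

# Find the stored vectors closest to a query vector
print(index.query(vector=[1.0, -0.25, 7.05, 19.85], top_k=2))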

When would you actually need tools like LangChain and Pinecone?

Short answer: when you're working with lots of qualitative data.

Let me share some use cases from my experience to give you an idea:

  • Imagine you have thousands of written feedback entries from your product channels. You want to identify patterns in the data and track how the feedback has evolved over time.
  • Suppose you have reviews in various languages and you want to translate them into your preferred language, and then extract insights.
  • You aim to conduct competitive analysis by analyzing customer reviews, feedback, and sentiment regarding your competitors' products.
  • Your company conducts surveys or user studies, generating a large volume of qualitative responses. Extracting meaningful insights, uncovering trends, and informing product or service improvements are your goals.

These are just a few examples of situations where tools like LangChain and Pinecone can be invaluable for product managers working with qualitative data.

As a product manager, my job involves improving our meeting notes and transcription features. To do this, we listen to what our users say about them.

For our meeting notes feature, users give us a rating between 1 and 5 for quality, tell us which template they used, and also send us their comments. Here is the flow:

In this project, I looked closely at two things: what users said about our feature and which templates they used. I ended up dealing with a huge amount of data: over 20,000 words, which turned into more than 38,000 "tokens" (or pieces of data) when I used a special tool to break them down. That's so much data that it's more than what some advanced models can handle!

To help me analyze this extensive data, I turned to two advanced tools, LangChain and Pinecone, supplemented with GPT-4. With these in our arsenal, let's delve deeper into the project and see what these high-tech tools enabled us to do.

This project's main objective was extracting insights from the gathered data, which required:

  1. The ability to create specific queries related to our dataset.
  2. The use of LLMs for handling vast volumes of information.

First, I'll give you an overview of how I implemented the project. After that, I'll share some examples of the code I used.

We start with a set of text files. Each file contains user feedback paired with the name of the template they used. You can process this data to fit your needs; I had to do some post-processing for my project. Remember, your files and data might be different, so feel free to tweak things for your own project.

Let's say you want to understand users' feedback on the meeting notes structure:

query = ("Please list all feedback regarding sentence structures in a table in markdown, "
         "get a single insight for each one, and give a general summary for all.")

Here's a high-level diagram showcasing the process flow when using the LLM and Pinecone. You ask GPT-4 a question, or what we call a "query". Meanwhile, Pinecone, our library filled with all the feedback, provides the context for your query when you send the question itself to it ("embed query"). Together, they help us make sense of our data efficiently:

Below is a more simplified version of the diagram:

Let's do it! In this script, we set up a pipeline to analyze user feedback data using OpenAI's GPT-4, Pinecone, and LangChain. Essentially, it imports the necessary libraries, sets the path to the feedback data, and establishes the OpenAI API key for processing this data.

import os
import openai
import pinecone
import certifi
import nltk
from tqdm.autonotebook import tqdm
from langchain.document_loaders import DirectoryLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Pinecone
from langchain.llms import OpenAI
from langchain.chains.question_answering import load_qa_chain

directory = 'path to your directory with text files containing feedback'
OPENAI_API_KEY = "your key"
os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY  # make the key available to LangChain's OpenAI clients

Then we define and call a function load_docs() that loads the user feedback documents from the specified directory using LangChain's DirectoryLoader. It then counts and displays the total number of loaded documents.

def load_docs(directory):
    loader = DirectoryLoader(directory)
    documents = loader.load()
    return documents

documents = load_docs(directory)
len(documents)

Next, define and execute the split_docs() function, which divides the loaded documents into smaller chunks of a specified size and overlap using LangChain's RecursiveCharacterTextSplitter. It then counts and prints the total number of resulting chunks.

def split_docs(documents, chunk_size=500, chunk_overlap=20):
    text_splitter = RecursiveCharacterTextSplitter(chunk_size=chunk_size, chunk_overlap=chunk_overlap)
    docs = text_splitter.split_documents(documents)
    return docs

docs = split_docs(documents)
print(len(docs))

To work with Pinecone, which is basically a vector database, we need to get embeddings out of our docs, which is why we should introduce a function for that. There are various ways to do it, but let's go with OpenAI's embedding function:

# Assuming the OpenAIEmbeddings class is imported above
embeddings = OpenAIEmbeddings()

# Let's define a function to generate an embedding for a given query
def generate_embedding(query):
    query_result = embeddings.embed_query(query)
    print(f"Embedding length for the query is: {len(query_result)}")
    return query_result

For storing these vectors in Pinecone, you need to create an account there and create an index as well. That's quite easy to do. You'll then get an API key, environment name, and index name from there.

MY_API_KEY_p = "the_key"
MY_ENV_p = "the_environment"

pinecone.init(
    api_key=MY_API_KEY_p,
    environment=MY_ENV_p
)

index_name = "your_index_name"

# Embed the chunks and upload them into the Pinecone index
index = Pinecone.from_documents(docs, embeddings, index_name=index_name)

The next step is being able to find answers. It's like finding the points closest to your question in a field of possible answers, giving us the most relevant results.

def get_similar_docs(query, k=40, score=False):
    if score:
        similar_docs = index.similarity_search_with_score(query, k=k)
    else:
        similar_docs = index.similarity_search(query, k=k)
    return similar_docs

In this code, we set up a question-answering system using OpenAI's GPT-4 model and LangChain. The get_answer() function takes a question as input, finds similar documents, and uses the question-answering chain to generate an answer.

from langchain.chat_models import ChatOpenAI

model_name = "gpt-4"

# GPT-4 is a chat model, so we use ChatOpenAI; temperature=0 keeps answers deterministic
llm = ChatOpenAI(model_name=model_name, temperature=0)

chain = load_qa_chain(llm, chain_type="stuff")

def get_answer(query):
    similar_docs = get_similar_docs(query)
    answer = chain.run(input_documents=similar_docs, question=query)
    return answer

We got to the question! Or questions. You can ask as many questions as you like.

query = ("Please list all feedback regarding sentence structures in a table in markdown, "
         "get a single insight for each one, and give a general summary for all.")

answer = get_answer(query)
print(answer)

Implementing a Retrieval Q&A Chain:

To implement the retrieval question-answering system, we use the RetrievalQA class from LangChain. It uses an OpenAI LLM to answer questions and relies on a "stuff" chain type. The retriever is connected to the previously created index, and the chain is stored in the qa_stuff variable. For a better understanding, you can learn more about retrieval systems.

from langchain.chains import RetrievalQA

retriever = index.as_retriever()

qa_stuff = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=retriever,
    verbose=True
)

response = qa_stuff.run(query)

So we got the response. Let's present the content stored in the response variable in a visually appealing format using Markdown, which makes the displayed text more organized and easier to read.

from IPython.display import display, Markdown

display(Markdown(response))

Example output

Go ahead and experiment with both the input files and the queries to get the best out of this approach and these tools.

In short, GPT-4, LangChain, and Pinecone make it easy to handle huge chunks of qualitative data. They help us dig into this data and find valuable insights, guiding better decisions. This article gave a sneak peek into their use, but there's a lot more they can do.

As these tools continue to advance and become more widespread, learning to use them now will give you a significant advantage in the future. So, keep exploring and learning about these tools, because they're shaping the present and the future of data analysis.

Stay tuned for more ways to explore these useful tools in the future!

All images, unless otherwise noted, are by the author.

References

LangChain Documentation

Pinecone Documentation

LangChain for LLM Application Development short course

