Retrieval Augmented Generation (RAG) for GPT-4 using Pinecone
Fixing LLMs that Hallucinate
In this notebook, we will learn how to query relevant contexts for our queries from Pinecone and pass them to a GPT-4 model to generate an answer backed by real data sources.
GPT-4 is a significant improvement over previous OpenAI completion models. It exclusively uses the ChatCompletion endpoint, so we must use it in a slightly different way than usual. However, the power of the model makes the change worthwhile, particularly when augmented with an external knowledge base like the Pinecone vector database.
Required installs for this notebook are:
!pip install -qU bs4 tiktoken openai langchain "pinecone-client[grpc]"
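Note that the code below uses the pre-1.0 openai Python SDK (openai.Embedding, openai.ChatCompletion) and the pre-3.0 pinecone-client (pinecone.init, pinecone.GRPCIndex). If newer versions are installed and you hit attribute errors, one option is to pin older releases; the exact version bounds here are an assumption, adjust as needed:
!pip install -qU "openai<1.0" "pinecone-client[grpc]<3.0" langchain tiktoken bs4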
Preparing the Data
In this example, we will download the LangChain docs from python.langchain.com. We get all .html files located on the site like so:
!wget -r -A.html -P rtdocs https://python.langchain.com/en/latest/
This downloads all HTML into the rtdocs directory. Now we can use LangChain itself to process these docs. We do this using the ReadTheDocsLoader like so:
from langchain.document_loaders import ReadTheDocsLoader
loader = ReadTheDocsLoader('rtdocs')
docs = loader.load()
len(docs)
This leaves us with hundreds of processed doc pages. Let's take a look at the format each one contains:
docs[0]
We access the plaintext page content like so:
print(docs[0].page_content)
print(docs[5].page_content)
We can also find the source of each document:
docs[5].metadata['source'].replace('rtdocs/', 'https://')
We can use these to create our data list:
data = []
for doc in docs:
    data.append({
        'url': doc.metadata['source'].replace('rtdocs/', 'https://'),
        'text': doc.page_content
    })
data[3]
It's pretty ugly, but it's good enough for now. Let's see how we can process all of these documents. We will chunk everything into ~400-token chunks, which we can do easily with langchain and tiktoken:
import tiktoken
# use the cl100k_base encoding, which matches text-embedding-3-small and GPT-4
tokenizer = tiktoken.get_encoding('cl100k_base')
# create the length function
def tiktoken_len(text):
    tokens = tokenizer.encode(
        text,
        disallowed_special=()
    )
    return len(tokens)
from langchain.text_splitter import RecursiveCharacterTextSplitter
text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=400,
    chunk_overlap=20,
    length_function=tiktoken_len,
    separators=["\n\n", "\n", " ", ""]
)
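As a quick sanity check (a minimal sketch; the record index chosen here is arbitrary), we can split a single record and confirm that the chunks stay near the 400-token target:
# split one record and inspect the resulting chunk sizes in tokens
example_chunks = text_splitter.split_text(data[0]['text'])
print(len(example_chunks))
print([tiktoken_len(chunk) for chunk in example_chunks])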
Now we process the data list into chunks using this approach:
from uuid import uuid4
from tqdm.auto import tqdm
chunks = []
for idx, record in enumerate(tqdm(data)):
    texts = text_splitter.split_text(record['text'])
    chunks.extend([{
        'id': str(uuid4()),
        'text': texts[i],
        'chunk': i,
        'url': record['url']
    } for i in range(len(texts))])
Our chunks are ready, so now we move on to embedding and indexing everything.
Initialize Embedding Model
We use text-embedding-3-small as the embedding model. We can embed text like so:
import openai
# initialize openai API key
openai.api_key = "sk-..."
embed_model = "text-embedding-3-small"
res = openai.Embedding.create(
    input=[
        "Sample document text goes here",
        "there will be several phrases in each batch"
    ], engine=embed_model
)
In the response res we will find a JSON-like object containing our new embeddings within the 'data' field. Inside 'data' we will find two records, one for each of the two sentences we just embedded. Each vector embedding contains 1536 dimensions (the output dimensionality of the text-embedding-3-small model).
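We can confirm this directly from the res object created above:
# two input sentences -> two embedding records, each 1536-dimensional
print(len(res['data']))                  # 2
print(len(res['data'][0]['embedding']))  # 1536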
We will apply this same embedding logic to the langchain docs dataset we've just scraped. But before doing so we must create a place to store the embeddings.
Initializing the Index
Now we need a place to store these embeddings and enable an efficient vector search through them all. To do that we use Pinecone. We can get a free API key and enter it below, where we will initialize our connection to Pinecone and create a new index.
import pinecone
index_name = 'gpt-4-langchain-docs'
# initialize connection to pinecone
pinecone.init(
    api_key="PINECONE_API_KEY",  # app.pinecone.io (console)
    environment="PINECONE_ENVIRONMENT"  # next to API key in console
)
# check if index already exists (it shouldn't if this is first time)
if index_name not in pinecone.list_indexes():
    # if it does not exist, create the index
    pinecone.create_index(
        index_name,
        dimension=len(res['data'][0]['embedding']),
        metric='dotproduct'
    )
# connect to index
index = pinecone.GRPCIndex(index_name)
# view index stats
index.describe_index_stats()
We can see the index is currently empty, with a total_vector_count of 0. We can begin populating it with embeddings built using OpenAI's text-embedding-3-small model like so:
from tqdm.auto import tqdm
import datetime
from time import sleep
batch_size = 100 # how many embeddings we create and insert at once
for i in tqdm(range(0, len(chunks), batch_size)):
    # find end of batch
    i_end = min(len(chunks), i+batch_size)
    meta_batch = chunks[i:i_end]
    # get ids
    ids_batch = [x['id'] for x in meta_batch]
    # get texts to encode
    texts = [x['text'] for x in meta_batch]
    # create embeddings (try-except added to avoid RateLimitError)
    try:
        res = openai.Embedding.create(input=texts, engine=embed_model)
    except:
        done = False
        while not done:
            sleep(5)
            try:
                res = openai.Embedding.create(input=texts, engine=embed_model)
                done = True
            except:
                pass
    embeds = [record['embedding'] for record in res['data']]
    # cleanup metadata
    meta_batch = [{
        'text': x['text'],
        'chunk': x['chunk'],
        'url': x['url']
    } for x in meta_batch]
    to_upsert = list(zip(ids_batch, embeds, meta_batch))
    # upsert to Pinecone
    index.upsert(vectors=to_upsert)
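Once the loop completes, we can optionally re-check the index stats; the total_vector_count should now match the number of chunks we created (assuming every upsert succeeded):
# confirm that all chunks made it into the index
print(len(chunks))
index.describe_index_stats()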
Now we've added all of our langchain docs to the index. With that we can move on to retrieval and then answer generation using GPT-4.
Retrieval
To search through our documents we first need to create a query vector xq. Using xq we will retrieve the most relevant chunks from the LangChain docs, like so:
query = "how do I use the LLMChain in LangChain?"
res = openai.Embedding.create(
    input=[query],
    engine=embed_model
)
# extract the query embedding
xq = res['data'][0]['embedding']
# retrieve the most relevant contexts from Pinecone
res = index.query(xq, top_k=5, include_metadata=True)
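Before generating an answer, it can help to inspect what came back. A small sketch that prints each match's similarity score, source URL, and a snippet of text (mirroring the response format used below):
# inspect the retrieved matches
for match in res['matches']:
    print(f"{match['score']:.2f}  {match['metadata']['url']}")
    print(match['metadata']['text'][:120], '\n')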
With retrieval complete, we move on to feeding these into GPT-4 to produce answers.
Retrieval Augmented Generation
GPT-4 is currently accessed via the ChatCompletion endpoint of OpenAI. To add the information we retrieved into the model, we need to pass it into our user prompt alongside the original query. We can do that like so:
# get list of retrieved text
contexts = [item['metadata']['text'] for item in res['matches']]
augmented_query = "\n\n---\n\n".join(contexts)+"\n\n-----\n\n"+query
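Before sending this to the model, it is worth checking how large the combined prompt is. A quick check using the tiktoken_len helper defined earlier:
# count the tokens in the augmented prompt
print(tiktoken_len(augmented_query))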
Now we ask the question:
# system message to 'prime' the model
primer = f"""You are Q&A bot. A highly intelligent system that answers
user questions based on the information provided by the user above
each question. If the information can not be found in the information
provided by the user you truthfully say "I don't know".
"""
res = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": primer},
        {"role": "user", "content": augmented_query}
    ]
)
To display this response nicely, we can render it as Markdown.
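In a notebook, one way to do this is with IPython's display utilities (a minimal sketch):
from IPython.display import Markdown, display

# render the assistant's reply as Markdown
display(Markdown(res['choices'][0]['message']['content']))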
To use the LLMChain in LangChain, follow these steps:
- Import the necessary classes:
from langchain.prompts import PromptTemplate
from langchain.llms import OpenAI
from langchain.chains import LLMChain
- Create an instance of the LLM and set the configuration options:
llm = OpenAI(temperature=0.9)
- Create a PromptTemplate instance with the input variables and the template:
prompt = PromptTemplate(
input_variables=["product"],
template="What is a good product for {product}?",
)
- Create an LLMChain instance by passing the LLM and PromptTemplate instances:
llm_chain = LLMChain(llm=llm, prompt_template=prompt)
- Run the LLMChain with user input:
response = llm_chain.run({"product": "software development"})
- Access the generated response:
generated_text = response["generated_text"]
In this example, the LLMChain is used to generate a response by passing through the user input and formatting it using the prompt template. The response is then obtained from the LLM instance (in this case, OpenAI), and the generated text can be accessed from the response dictionary.
Let's compare this to a non-augmented query...
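That is, we send the raw query on its own, with the same primer and model but without the retrieved contexts:
# same request, but with the plain query instead of the augmented one
res = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": primer},
        {"role": "user", "content": query}
    ]
)
display(Markdown(res['choices'][0]['message']['content']))
This time the model simply responds: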
I don't know.
What happens if we drop the "I don't know" part of the primer?
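A sketch of that variant (the reworded primer below is an assumption; any phrasing without the refusal instruction would do):
# primer without the "I don't know" instruction
primer_no_idk = (
    "You are a Q&A bot. A highly intelligent system that answers "
    "user questions based on the information provided by the user above each question."
)
res = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": primer_no_idk},
        {"role": "user", "content": query}
    ]
)
display(Markdown(res['choices'][0]['message']['content']))
Now the model attempts an answer anyway: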
LangChain hasn't provided any public documentation on LLMChain, nor is there a known technology called LLMChain in their library. To better assist you, please provide more information or context about LLMChain and LangChain.
Meanwhile, if you are referring to LangChain, a blockchain-based decentralized AI language model, you can start by visiting their official website (if they have one), exploring their available resources, such as documentation and tutorials, and following any instructions on setting up their technology.
If you are looking for help with a specific language chain or model in natural processing, consider rephrasing your question to provide more accurate information or visit relevant resources like GPT-3 or other NLP-related documentation.