Remember Alfred, our helpful butler agent from Unit 1? To assist us effectively, Alfred needs to understand our requests and find relevant information to help complete tasks. This is where LlamaIndex’s components come in.
While LlamaIndex has many components, we'll focus specifically on the `QueryEngine` component. Why? Because it can be used as a Retrieval-Augmented Generation (RAG) tool for an agent.
LLMs are trained on enormous bodies of data to learn general knowledge. However, they may not be trained on relevant and up-to-date data. RAG solves this problem by finding and retrieving relevant information from your data and forwarding it to the LLM.
Now, think about how Alfred works: when you ask him to help plan a dinner party, he needs to search through your household information, and the `QueryEngine` helps him find that information and use it to plan the dinner party. This makes the `QueryEngine` the most relevant component for building agentic RAG workflows in LlamaIndex.
Just as Alfred needs to search through your household information to be helpful, any agent needs a way to find and understand relevant data.
The `QueryEngine` provides exactly this capability.
Now, let’s dive a bit deeper into the components and see how you can combine components to create a RAG pipeline.
There are five key stages within RAG, which in turn will be a part of most larger applications you build. These are:

1. **Loading**: getting your data from where it lives (files in a folder, a PDF, a website, a database, or an API) into your workflow.
2. **Indexing**: creating a data structure that allows querying the data, which for LLMs usually means generating vector embeddings.
3. **Storing**: saving the index and its metadata so you don't have to re-index every time.
4. **Querying**: using LLMs and LlamaIndex data structures to ask questions over your indexed data.
5. **Evaluation**: checking how effective the pipeline is, for example how accurate and faithful its responses are.
Next, let’s see how we can reproduce these stages using components.
As mentioned before, LlamaIndex can work on top of your own data; however, before accessing data, we need to load it. There are three main ways to load data into LlamaIndex:

- `SimpleDirectoryReader`: A built-in loader for various file types from a local directory.
- `LlamaParse`: LlamaIndex's official tool for PDF parsing, available as a managed API.
- `LlamaHub`: A registry of hundreds of data loading libraries to ingest data from any source.

The simplest way to load data is with `SimpleDirectoryReader`.
This versatile component can load various file types from a folder and convert them into `Document` objects that LlamaIndex can work with. Let's see how we can use `SimpleDirectoryReader` to load data from a folder:
```python
from llama_index.core import SimpleDirectoryReader

reader = SimpleDirectoryReader(input_dir="path/to/directory")
documents = reader.load_data()
```
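If your data lives somewhere other than a local folder, a LlamaHub reader can be used instead. As a hedged illustration (not part of the original example), the web page reader from the `llama-index-readers-web` package could load documents straight from a URL:

```python
# Sketch only: assumes `pip install llama-index-readers-web` and a reachable URL.
from llama_index.readers.web import SimpleWebPageReader

web_documents = SimpleWebPageReader(html_to_text=True).load_data(
    ["https://example.com/dinner-party-menus"]  # hypothetical URL
)
```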
After loading our documents, we need to break them into smaller pieces called `Node` objects. A `Node` is just a chunk of text from the original document that's easier for the AI to work with, while it still has references to the original `Document` object.
The `IngestionPipeline` helps us create these nodes through two key transformations:

- `SentenceSplitter` breaks down documents into manageable chunks by splitting them at natural sentence boundaries.
- `HuggingFaceInferenceAPIEmbedding` converts each chunk into numerical embeddings, vector representations that capture the semantic meaning in a way AI can process efficiently.

This process helps us organise our documents in a way that's more useful for searching and analysis.
```python
from llama_index.core import Document
from llama_index.core.ingestion import IngestionPipeline
from llama_index.core.node_parser import SentenceSplitter
from llama_index.embeddings.huggingface_api import HuggingFaceInferenceAPIEmbedding

# create the pipeline with transformations
pipeline = IngestionPipeline(
    transformations=[
        SentenceSplitter(chunk_size=25, chunk_overlap=0),
        HuggingFaceInferenceAPIEmbedding(model_name="BAAI/bge-small-en-v1.5"),
    ]
)

# run the pipeline on an example document
nodes = pipeline.run(documents=[Document.example()])
```
After creating our `Node` objects, we need to index them to make them searchable. But before we can do that, we need a place to store our data.
Within LlamaIndex, we can use a `StorageContext` to handle many different storage types.
For each of these storage types, there are different integrations with storage backends that can be used.
The various data storage types that LlamaIndex supports are:
- `DocumentStore`: Stores ingested documents (`Node` objects) for keyword search
- `IndexStore`: Stores index metadata
- `VectorStore`: Stores embedding vectors for semantic search
- `PropertyGraphStore`: Stores knowledge graphs for graph-based queries
- `ChatStore`: Stores and organizes chat message history

An overview of the different storage types and their integrations can be found in the LlamaIndex documentation.
LlamaIndex makes it easy to set up storage: we can either configure a `StorageContext` manually or let LlamaIndex handle it automatically when creating a search index. When we save the `StorageContext`, it neatly organizes all our data into files that we can easily access later.
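As a hedged illustration (not in the original walkthrough), an external vector store backend such as Chroma could be configured manually and handed to an index through a `StorageContext`. The package names and paths below are assumptions:

```python
# Sketch only: assumes `chromadb` and `llama-index-vector-stores-chroma` are installed.
import chromadb
from llama_index.core import StorageContext
from llama_index.vector_stores.chroma import ChromaVectorStore

# hypothetical on-disk location for the Chroma database
db = chromadb.PersistentClient(path="./alfred_chroma_db")
chroma_collection = db.get_or_create_collection("alfred")
vector_store = ChromaVectorStore(chroma_collection=chroma_collection)

# a StorageContext wires the vector store into whatever index we build next
storage_context = StorageContext.from_defaults(vector_store=vector_store)
```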
For searching through our nodes effectively, we need a way to compare queries against them.
This is where vector embeddings come in - by embedding both the query and nodes in the same vector space, we can find relevant matches.
The `VectorStoreIndex` handles this for us, using the same embedding model we used during ingestion to ensure consistency.
Let’s see how to create this index and save it to your computer:
```python
from llama_index.core import VectorStoreIndex
from llama_index.embeddings.huggingface_api import HuggingFaceInferenceAPIEmbedding

embed_model = HuggingFaceInferenceAPIEmbedding(model_name="BAAI/bge-small-en-v1.5")
index = VectorStoreIndex(nodes=nodes, embed_model=embed_model)
index.storage_context.persist("path/to/vector/store")
```
We can load our index again using the files that were created when saving the `StorageContext`:
```python
from llama_index.core import StorageContext, load_index_from_storage
from llama_index.embeddings.huggingface_api import HuggingFaceInferenceAPIEmbedding

embed_model = HuggingFaceInferenceAPIEmbedding(model_name="BAAI/bge-small-en-v1.5")
storage_context = StorageContext.from_defaults(persist_dir="path/to/vector/store")
index = load_index_from_storage(storage_context, embed_model=embed_model)
```
Great! Now that we can save and load our index easily, let’s explore how to query it in different ways.
Before we can query our index, we need to convert it to a query interface. The most common conversion options are:
- `as_retriever`: For basic document retrieval, returning a list of `NodeWithScore` objects with similarity scores
- `as_query_engine`: For single question-answer interactions, returning a written response
- `as_chat_engine`: For conversational interactions that maintain memory across multiple messages, returning responses that take the chat history into account

We'll focus on the query engine since it is more common for agent-like interactions. We also pass in an LLM to the query engine to use for the response.
```python
from llama_index.llms.huggingface_api import HuggingFaceInferenceAPI

llm = HuggingFaceInferenceAPI(model_name="meta-llama/Meta-Llama-3-8B-Instruct")
query_engine = index.as_query_engine(llm=llm)
query_engine.query("What is the meaning of life?")
# the meaning of life is 42
```
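For comparison, the other two conversion options from the list above follow the same pattern. A minimal sketch, reusing the `index` and `llm` defined earlier (the specific keyword values such as `similarity_top_k=3` are illustrative choices):

```python
# Retriever: returns raw NodeWithScore objects instead of a synthesized answer.
retriever = index.as_retriever(similarity_top_k=3)
for node_with_score in retriever.retrieve("What is the meaning of life?"):
    print(node_with_score.score, node_with_score.node.get_content()[:80])

# Chat engine: keeps track of the conversation across multiple messages.
chat_engine = index.as_chat_engine(llm=llm)
print(chat_engine.chat("What is the meaning of life?"))
print(chat_engine.chat("Can you elaborate on that?"))
```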
Under the hood, the query engine doesn't only use the LLM to answer the question; it also uses a `ResponseSynthesizer` as a strategy to process the response. Once again, this is fully customisable, but there are three main strategies that work well out of the box:

- `refine`: create and refine an answer by sequentially going through each retrieved text chunk. This makes a separate LLM call per Node/retrieved chunk.
- `compact` (default): similar to `refine`, but concatenates the chunks beforehand, resulting in fewer LLM calls.
- `tree_summarize`: create a detailed answer by going through each retrieved text chunk and creating a tree structure of the answer.

Language models won't always perform in predictable ways, so we can't be sure that the answer we get is always correct. We can deal with this by evaluating the quality of the answer.
LlamaIndex provides built-in evaluation tools to assess response quality. These evaluators leverage LLMs to analyze responses across different dimensions. Let’s look at the three main evaluators available:
- `FaithfulnessEvaluator`: Evaluates the faithfulness of the answer by checking if the answer is supported by the context.
- `AnswerRelevancyEvaluator`: Evaluates the relevance of the answer by checking if the answer is relevant to the question.
- `CorrectnessEvaluator`: Evaluates the correctness of the answer by checking if the answer is correct.

```python
from llama_index.core.evaluation import FaithfulnessEvaluator

# query_engine and llm come from the previous section

# query the index and check whether the response is supported by the retrieved context
evaluator = FaithfulnessEvaluator(llm=llm)
response = query_engine.query(
    "What battles took place in New York City in the American Revolution?"
)
eval_result = evaluator.evaluate_response(response=response)
print(str(eval_result.passing))
```
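The other evaluators follow the same pattern. As a hedged sketch (not part of the original example), `CorrectnessEvaluator` additionally compares the answer against a reference answer that you supply; the question and reference text below are made-up illustrations:

```python
# Sketch only: the reference answer here is an invented example.
from llama_index.core.evaluation import CorrectnessEvaluator

correctness_evaluator = CorrectnessEvaluator(llm=llm)
result = correctness_evaluator.evaluate(
    query="What battles took place in New York City in the American Revolution?",
    response=str(response),
    reference="The Battle of Long Island and the Battle of Harlem Heights, among others.",
)
print(result.score, result.passing)
```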
Even without direct evaluation, we can gain insights into how our system is performing through observability. This is especially useful when we are building more complex workflows and want to understand how each component is performing.
First, install the Arize Phoenix callback integration for LlamaIndex:

```bash
pip install -U llama-index-callbacks-arize-phoenix
```
Additionally, we need to set the `PHOENIX_API_KEY` environment variable to our LlamaTrace API key, which we can get by creating an account on LlamaTrace and generating an API key in the account settings. We then use that key to enable tracing:
```python
import os

import llama_index.core

# pass the API key as an OpenTelemetry header and register the global handler
PHOENIX_API_KEY = "<PHOENIX_API_KEY>"
os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = f"api_key={PHOENIX_API_KEY}"
llama_index.core.set_global_handler(
    "arize_phoenix", endpoint="https://llamatrace.com/v1/traces"
)
```
We have seen how to use components to create a `QueryEngine`. Now, let's see how we can use the `QueryEngine` as a tool for an agent!