What are components in LlamaIndex?

Remember Alfred, our helpful butler agent from Unit 1? To assist us effectively, Alfred needs to understand our requests and find relevant information to help complete tasks. This is where LlamaIndex’s components come in.

While LlamaIndex has many components, we’ll focus specifically on the QueryEngine component. Why? Because it can be used as a Retrieval-Augmented Generation (RAG) tool for an agent.

LLMs are trained on enormous bodies of data to learn general knowledge. However, they may not be trained on relevant and up-to-date data. RAG solves this problem by finding and retrieving relevant information from your data and forwarding it to the LLM.

RAG

Now, think about how Alfred works:

  1. You ask Alfred to help plan a dinner party
  2. Alfred needs to check your calendar, dietary preferences, and past successful menus
  3. The QueryEngine helps Alfred find this information and use it to plan the dinner party

This makes the QueryEngine the most relevant component for building agentic RAG workflows in LlamaIndex. Just as Alfred needs to search through your household information to be helpful, any agent needs a way to find and understand relevant data. The QueryEngine provides exactly this capability.

Now, let’s dive a bit deeper into the components and see how you can combine components to create a RAG pipeline.

Creating a RAG pipeline using components

There are five key stages within RAG, which in turn will be a part of most larger applications you build. These are:

  1. Loading: this refers to getting your data from where it lives — whether it’s text files, PDFs, another website, a database, or an API — into your workflow. LlamaHub provides hundreds of integrations to choose from.
  2. Indexing: this means creating a data structure that allows for querying the data. For LLMs, this nearly always means creating vector embeddings, which are numerical representations of the meaning of the text. Indexing can also refer to numerous other metadata strategies that make it easy to accurately find contextually relevant data based on properties.
  3. Storing: once your data is indexed you will want to store your index, as well as other metadata, to avoid having to re-index it.
  4. Querying: for any given indexing strategy there are many ways you can utilize LLMs and LlamaIndex data structures to query, including sub-queries, multi-step queries and hybrid strategies.
  5. Evaluation: a critical step in any flow is checking how effective it is relative to other strategies, or when you make changes. Evaluation provides objective measures of how accurate, faithful and fast your responses to queries are.

Next, let’s see how we can reproduce these stages using components.

Loading and embedding documents

As mentioned before, LlamaIndex can work on top of your own data. However, before accessing data, we need to load it. There are three main ways to load data into LlamaIndex:

  1. SimpleDirectoryReader: A built-in loader for various file types from a local directory.
  2. LlamaParse: LlamaIndex’s official tool for PDF parsing, available as a managed API.
  3. LlamaHub: A registry of hundreds of data loading libraries to ingest data from any source.

Get familiar with LlamaHub loaders and the LlamaParse parser for more complex data sources.

The simplest way to load data is with SimpleDirectoryReader. This versatile component can load various file types from a folder and convert them into Document objects that LlamaIndex can work with. Let’s see how we can use SimpleDirectoryReader to load data from a folder.

from llama_index.core import SimpleDirectoryReader

reader = SimpleDirectoryReader(input_dir="path/to/directory")
documents = reader.load_data()
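
If you only need part of a folder, SimpleDirectoryReader also accepts a few filtering options. The snippet below is a minimal sketch; the required_exts values and the recursive flag are illustrative choices, not requirements:

# only pick up Markdown and PDF files, walking into subdirectories as well
reader = SimpleDirectoryReader(
    input_dir="path/to/directory",
    required_exts=[".md", ".pdf"],
    recursive=True,
)
documents = reader.load_data()
print(f"Loaded {len(documents)} documents")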

After loading our documents, we need to break them into smaller pieces called Node objects. A Node is just a chunk of text from the original document that’s easier for the AI to work with, while it still has references to the original Document object.

The IngestionPipeline helps us create these nodes through two key transformations.

  1. SentenceSplitter breaks down documents into manageable chunks by splitting them at natural sentence boundaries.
  2. HuggingFaceInferenceAPIEmbedding converts each chunk into numerical embeddings - vector representations that capture the semantic meaning in a way AI can process efficiently.

This process helps us organise our documents in a way that’s more useful for searching and analysis.

from llama_index.core import Document
from llama_index.embeddings.huggingface_api import HuggingFaceInferenceAPIEmbedding
from llama_index.core.node_parser import SentenceSplitter
from llama_index.core.ingestion import IngestionPipeline, IngestionCache

# create the pipeline with transformations
pipeline = IngestionPipeline(
    transformations=[
        SentenceSplitter(chunk_size=25, chunk_overlap=0),
        HuggingFaceInferenceAPIEmbedding(model_name="BAAI/bge-small-en-v1.5"),
    ]
)

# run the pipeline
nodes = pipeline.run(documents=[Document.example()])

To save time and compute, **LlamaIndex caches the results of the ingestion pipeline** so you don't need to load and embed the same documents twice. Learn more about caching in the LlamaIndex documentation.
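
As a rough sketch of how that cache can be carried across runs (the ./pipeline_storage directory name is just an example), the pipeline can be persisted to disk and loaded back later:

# persist the pipeline's cache to disk so a later run can reuse it
pipeline.persist("./pipeline_storage")

# later: restore the cache so unchanged documents are not re-chunked or re-embedded
pipeline.load("./pipeline_storage")
nodes = pipeline.run(documents=[Document.example()])  # cached results are reused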

Storing and indexing documents

After creating our Node objects, we need to index them to make them searchable. But before we can do that, we need a place to store our data.

Within LlamaIndex, we can use a StorageContext to handle several different storage types, such as document stores, index stores, vector stores, graph stores, and chat stores. For each of these storage types, there are different integrations with storage backends that can be used.

An overview of the different storage types and their integrations can be found in the LlamaIndex documentation.

LlamaIndex makes it easy to set up storage - we can either configure a StorageContext manually or let LlamaIndex handle it automatically when creating a search index. When we save the StorageContext, it neatly organizes all our data into files that we can easily access later.
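
As an illustration, here is a minimal sketch of configuring a StorageContext manually on top of a Chroma vector store. It assumes the llama-index-vector-stores-chroma integration is installed, and the ./alfred_chroma_db path and alfred collection name are placeholders:

import chromadb
from llama_index.core import StorageContext
from llama_index.vector_stores.chroma import ChromaVectorStore

# create (or open) a persistent Chroma collection to back the vector store
db = chromadb.PersistentClient(path="./alfred_chroma_db")
chroma_collection = db.get_or_create_collection("alfred")
vector_store = ChromaVectorStore(chroma_collection=chroma_collection)

# hand the vector store to a StorageContext so indexes can use it
storage_context = StorageContext.from_defaults(vector_store=vector_store)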

For searching through our nodes effectively, we need a way to compare queries against them. This is where vector embeddings come in - by embedding both the query and nodes in the same vector space, we can find relevant matches. The VectorStoreIndex handles this for us, using the same embedding model we used during ingestion to ensure consistency.

Let’s see how to create this index and save it to your computer:

from llama_index.core import VectorStoreIndex
from llama_index.embeddings.huggingface_api import HuggingFaceInferenceAPIEmbedding

embed_model = HuggingFaceInferenceAPIEmbedding(model_name="BAAI/bge-small-en-v1.5")
index = VectorStoreIndex(nodes, embed_model=embed_model)
index.storage_context.persist("path/to/vector/store")

We can load our index again using files that were created when saving the StorageContext.

from llama_index.core import StorageContext, load_index_from_storage
from llama_index.embeddings.huggingface_api import HuggingFaceInferenceAPIEmbedding

embed_model = HuggingFaceInferenceAPIEmbedding(model_name="BAAI/bge-small-en-v1.5")
storage_context = StorageContext.from_defaults(persist_dir="path/to/vector/store")
index = load_index_from_storage(storage_context, embed_model=embed_model)

Great! Now that we can save and load our index easily, let’s explore how to query it in different ways.

Querying a VectorStoreIndex with prompts and LLMs

Before we can query our index, we need to convert it to a query interface. The most common conversion options are:

  1. as_retriever: for basic document retrieval, returning a list of NodeWithScore objects with similarity scores.
  2. as_query_engine: for single question-answer interactions, returning a written response.
  3. as_chat_engine: for conversational interactions that maintain memory across multiple messages.

We’ll focus on the query engine since it is more common for agent-like interactions. We also pass an LLM to the query engine to use for the response.

from llama_index.llms.huggingface_api import HuggingFaceInferenceAPI

llm = HuggingFaceInferenceAPI(model_name="meta-llama/Meta-Llama-3-8B-Instruct")
query_engine = index.as_query_engine(llm=llm)
query_engine.query("What is the meaning of life?")
# the meaning of life is 42
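
For comparison, here is a rough sketch of the other two interfaces built from the same index; the similarity_top_k value is just an example:

# retriever: returns the raw matching nodes with similarity scores
retriever = index.as_retriever(similarity_top_k=3)
nodes_with_scores = retriever.retrieve("What is the meaning of life?")

# chat engine: keeps conversational memory across turns
chat_engine = index.as_chat_engine(llm=llm)
chat_response = chat_engine.chat("What is the meaning of life?")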

Response Processing

Under the hood, the query engine doesn’t only use the LLM to answer the question, but also uses a ResponseSynthesizer as a strategy to process the response. Once again, this is fully customisable, but there are three main strategies that work well out of the box:

  1. refine: create and refine an answer by sequentially going through each retrieved text chunk, making a separate LLM call per chunk.
  2. compact (the default): similar to refine, but concatenating the chunks beforehand, resulting in fewer LLM calls.
  3. tree_summarize: create a detailed answer by going through each retrieved text chunk and building a tree structure of the answer.
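
As a small sketch of how one of these strategies can be selected, assuming the same index and llm as above, the response mode is passed when building the query engine:

# choose the response synthesis strategy for the query engine
query_engine = index.as_query_engine(
    llm=llm,
    response_mode="tree_summarize",
)
query_engine.query("What is the meaning of life?")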

Take fine-grained control of your query workflows with the [low-level composition API](https://docs.llamaindex.ai/en/stable/module_guides/deploying/query_engine/usage_pattern/#low-level-composition-api). This API lets you customize and fine-tune every step of the query process to match your exact needs.

Language models won’t always perform in predictable ways, so we can’t be sure that the answer we get is always correct. We can deal with this by evaluating the quality of the answer.

Evaluation and observability

LlamaIndex provides built-in evaluation tools to assess response quality. These evaluators leverage LLMs to analyze responses across different dimensions. Let’s look at the three main evaluators available:

  1. FaithfulnessEvaluator: evaluates the faithfulness of the answer by checking whether it is supported by the retrieved context.
  2. AnswerRelevancyEvaluator: evaluates the relevance of the answer by checking whether it actually addresses the question.
  3. CorrectnessEvaluator: evaluates the correctness of the answer, i.e. whether the response correctly answers the query.

from llama_index.core.evaluation import FaithfulnessEvaluator

# query_engine and llm are reused from the previous section

# query index
evaluator = FaithfulnessEvaluator(llm=llm)
response = query_engine.query(
    "What battles took place in New York City in the American Revolution?"
)
eval_result = evaluator.evaluate_response(response=response)
print(str(eval_result.passing))
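
As a sketch of a second check on the same response, assuming the llm, query_engine, and response from above, the AnswerRelevancyEvaluator can verify that the answer addresses the question:

from llama_index.core.evaluation import AnswerRelevancyEvaluator

# check whether the answer is relevant to the original question
relevancy_evaluator = AnswerRelevancyEvaluator(llm=llm)
relevancy_result = relevancy_evaluator.evaluate_response(
    query="What battles took place in New York City in the American Revolution?",
    response=response,
)
print(relevancy_result.passing, relevancy_result.score)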

Even without direct evaluation, we can gain insights into how our system is performing through observability. This is especially useful when we are building more complex workflows and want to understand how each component is performing.

We can install the LlamaTrace callback from Arize Phoenix with the following command:
pip install -U llama-index-callbacks-arize-phoenix

Additionally, we need to set the PHOENIX_API_KEY environment variable to our LlamaTrace API key, which we can get by creating an account on LlamaTrace and generating an API key. We can then use that key in the code below to enable tracing:

import llama_index.core
import os

PHOENIX_API_KEY = "<PHOENIX_API_KEY>"
os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = f"api_key={PHOENIX_API_KEY}"
llama_index.core.set_global_handler(
    "arize_phoenix", endpoint="https://llamatrace.com/v1/traces"
)

Want to learn more about components and how to use them? Continue your journey with the Components Guides or the Guide on RAG.

We have seen how to use components to create a QueryEngine. Now, let’s see how we can use the QueryEngine as a tool for an agent!
