Creating agentic workflows in LlamaIndex

A workflow in LlamaIndex provides a structured way to organize your code into sequential and manageable steps.

Such a workflow is created by defining Steps which are triggered by Events, and themselves emit Events to trigger further steps. Let’s take a look at Alfred showing a LlamaIndex workflow for a RAG task.

Workflow Schematic

Workflows offer several key benefits:

- Clear organization of code into discrete steps
- Event-driven architecture for flexible control flow
- Type-safe communication between steps
- Built-in state management

As you might have guessed, workflows strike a great balance between the autonomy of agents and control over the overall workflow.

So, let’s learn how to create a workflow ourselves!

Creating Workflows

Basic Workflow Creation

As introduced in the [section on components](what-are-components-in-llama-index.mdx), we can install the Workflow package with the following command:

```bash
pip install llama-index-utils-workflow
```

We can create a single-step workflow by defining a class that inherits from Workflow and decorating our functions with @step. We also need StartEvent and StopEvent, special events that indicate the start and end of the workflow.

```python
from llama_index.core.workflow import StartEvent, StopEvent, Workflow, step

class MyWorkflow(Workflow):
    @step
    async def my_step(self, ev: StartEvent) -> StopEvent:
        # do something here
        return StopEvent(result="Hello, world!")


w = MyWorkflow(timeout=10, verbose=False)
result = await w.run()
```

As you can see, we can now run the workflow by calling w.run().

Connecting Multiple Steps

To connect multiple steps, we define a custom Event that is passed between the steps, carrying the output of the first step to the second step.

```python
from llama_index.core.workflow import Event

class ProcessingEvent(Event):
    intermediate_result: str

class MultiStepWorkflow(Workflow):
    @step
    async def step_one(self, ev: StartEvent) -> ProcessingEvent:
        # Process initial data
        return ProcessingEvent(intermediate_result="Step 1 complete")

    @step
    async def step_two(self, ev: ProcessingEvent) -> StopEvent:
        # Use the intermediate result
        final_result = f"Finished processing: {ev.intermediate_result}"
        return StopEvent(result=final_result)
```

The type hints are important here: the workflow uses them to route each event to the step that accepts it. Let’s complicate things a bit more!

Loops and Branches

The type hinting is the most powerful part of workflows because it allows us to create branches, loops, and joins to facilitate more complex workflows.

Let’s show an example of creating a loop by using the union operator |. In the example below, we see that the LoopEvent is taken as input for the step and can also be returned as output.

```python
@step
async def step_one(self, ev: StartEvent | LoopEvent) -> FirstEvent | LoopEvent:
    if random.randint(0, 1) == 0:
        print("Bad thing happened")
        return LoopEvent(loop_output="Back to step one.")
    else:
        print("Good thing happened")
        return FirstEvent(first_output="First step complete.")
```

There is one last cool trick that we will cover in the course, which is the ability to add state to the workflow.

State Management

State management is useful when you want to keep track of the state of the workflow, so that every step has access to the same state. We can do this by adding a parameter with the Context type hint to the step function.

```python
from llama_index.core.workflow import Context, StartEvent, StopEvent, step


@step
async def query(self, ctx: Context, ev: StartEvent) -> StopEvent:
    # retrieve from context
    query = await ctx.get("query")

    # do something with the context and event
    val = ...

    # store in context
    await ctx.set("key", val)

    return StopEvent(result=val)
```

Great! Now you know how to create basic workflows in LlamaIndex!

There are some more complex nuances to workflows, which you can learn about in the LlamaIndex documentation.

However, there is another way to create workflows, which relies on the AgentWorkflow class. Let’s take a look at how we can use this to create a multi-agent workflow.

Automating workflows with Multi-Agent Workflows

Instead of manual workflow creation, we can use the AgentWorkflow class to create a multi-agent workflow. AgentWorkflow uses workflow agents to create a system of one or more agents that collaborate and hand off tasks to each other based on their specialized capabilities. This enables building complex agent systems where different agents handle different aspects of a task.

Instead of importing classes from llama_index.core.agent, we import the agent classes from llama_index.core.agent.workflow. One agent must be designated as the root agent in the AgentWorkflow constructor. When a user message comes in, it is first routed to the root agent. Each agent can then:

- handle the request directly using its tools
- hand off to another agent better suited for the task
- return a response to the user

Let’s see how to create a multi-agent workflow.

```python
from llama_index.core.agent.workflow import AgentWorkflow, FunctionAgent, ReActAgent

# query_engine_agent_tool as defined in the previous section

def multiply(a: int, b: int) -> int:
    """Multiply two integers and return the result."""
    return a * b

# Define the agents; each agent needs a name and description so the
# workflow can route and hand off tasks between them
multiply_agent = FunctionAgent(
    name="multiply_agent",
    description="Multiplies two integers",
    tools=[multiply],
    llm=llm,
)
retriever_agent = ReActAgent(
    name="retriever_agent",
    description="Answers questions over the indexed documents",
    tools=[query_engine_agent_tool],
    llm=llm,
)

# Create the workflow; root_agent must match one of the agent names
workflow = AgentWorkflow(
    agents=[multiply_agent, retriever_agent], root_agent="multiply_agent"
)

# Run the system
response = await workflow.run(user_msg="Can you multiply 5 and 3?")
```

Before starting the workflow, we can provide an initial state dict that will be available to all agents. The state is stored in the state key of the workflow context and is injected into the state_prompt, which augments each new user message.

```python
workflow = AgentWorkflow(
    agents=[...],
    root_agent="root_agent",
    initial_state={"counter": 0},
    state_prompt="Current state: {state}. User message: {msg}",
)
```

Congratulations! You have now mastered the basics of Agents in LlamaIndex! 🎉

Let’s continue with tackling LangGraph! 🚀
