Large Language Models (LLMs) have revolutionized how we interact with and produce text. They transform static knowledge into dynamic text generation through a process known as inference.
In this chapter, we will explore the fundamental concepts and techniques behind LLM inference, providing a comprehensive understanding of how these models generate text.
Let’s start with the basics. Inference is the process of using a trained LLM to generate human-like text from a given input prompt. At its core, the model leverages learned probabilities from billions of parameters to predict and generate the next token in a sequence, one token at a time, conditioning each prediction on all the tokens that came before it.
A key aspect of the Transformer architecture is Attention. When predicting the next word, not every word in a sentence is equally important; words like “France” and “capital” in the sentence “The capital of France is …” carry the most meaning.
This process of identifying the most relevant words to predict the next token has proven to be incredibly effective.
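To make this concrete, here is a minimal sketch (assuming the `transformers` library, with the small `gpt2` model standing in for any causal LLM) that inspects the model’s probability distribution over the next token for the prompt “The capital of France is”:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Small model used purely for illustration; any causal LM works the same way.
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("The capital of France is", return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence_length, vocab_size)

# The distribution over the *next* token comes from the last position in the sequence.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {prob:.3f}")
```

If all goes well, a token like “ Paris” (with a leading space, because of how GPT-2 tokenizes words) should appear near the top of the list.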
Although the basic principle of LLMs—predicting the next token—has remained consistent since GPT-2, there have been significant advancements in scaling neural networks and making the attention mechanism work for longer and longer sequences.
If you’ve interacted with LLMs, you’re probably familiar with the term context length, which refers to the maximum number of tokens the LLM can process at once, i.e. its maximum attention span.
Since the only job of an LLM is to predict the next token by looking at every input token and deciding which of them matter most, the exact wording of your input sequence matters a great deal.
The input sequence you provide an LLM is called a prompt. Careful design of the prompt makes it easier to guide the generation of the LLM toward the desired output.
The inference process can be broadly divided into two main phases: prefill and decode. LLMs operate in an autoregressive manner: each output token is appended to the sequence and used as part of the input for predicting subsequent tokens.
The prefill phase is where the input prompt is processed: the prompt is broken into tokens, the tokens are converted into numerical representations (embeddings) that capture meaning and context, and these embeddings are passed through the model’s attention and feed-forward layers in a single forward pass to build a rich internal representation. Prefill is computationally intensive and often compute-bound, meaning its speed is limited by the processing power of the GPU.
The decode phase is the autoregressive part, where the model generates text token by token. Each new token requires reading all of the model’s weights from memory again, which makes this phase memory-bound rather than compute-bound; this repeated memory traffic is a major bottleneck. Every decode step follows the same pattern: the model computes probability scores for each possible next token, applies a decoding strategy to choose one, appends it to the sequence, and repeats until an end-of-sequence (EOS) token is generated.
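The sketch below (again assuming `transformers` and `gpt2` as a stand-in model) makes the two phases tangible: the first forward pass over the whole prompt is the prefill, and each subsequent single-token step is the decode.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in model, used purely for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

input_ids = tokenizer("The capital of France is", return_tensors="pt").input_ids

for _ in range(20):  # decode: generate at most 20 new tokens, one at a time
    with torch.no_grad():
        logits = model(input_ids).logits  # the first iteration is the prefill over the prompt
    next_token = logits[0, -1].argmax()   # greedy choice: the most likely next token
    if next_token.item() == tokenizer.eos_token_id:
        break  # stop once the end-of-sequence token is produced
    # Append the new token and feed the extended sequence back into the model.
    input_ids = torch.cat([input_ids, next_token.view(1, 1)], dim=-1)

print(tokenizer.decode(input_ids[0]))
```

Note that this naive loop reprocesses the whole sequence at every step; real inference engines keep a cache of past attention keys and values so that each decode step only computes the new token.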
These decoding strategies determine how the next token is chosen from the computed probabilities. The simplest approach is greedy decoding, which always selects the token with the highest probability. A more sophisticated method is beam search, which keeps several candidate sequences in parallel and picks the one with the highest overall probability. Additionally, techniques such as temperature scaling, top-k sampling, and nucleus sampling (top-p) provide different ways to balance deterministic outputs against creative generation.
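These strategies are typically exposed as generation parameters. Here is a hedged sketch using the `generate` API from `transformers` with `gpt2` as a placeholder model; the parameter values are arbitrary examples, not recommendations.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder model for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
inputs = tokenizer("The capital of France is", return_tensors="pt")

# Greedy decoding: always take the single most likely token.
greedy = model.generate(**inputs, max_new_tokens=20, do_sample=False)

# Beam search: keep several candidate sequences and return the most probable one.
beams = model.generate(**inputs, max_new_tokens=20, num_beams=4, do_sample=False)

# Sampling with temperature, top-k and nucleus (top-p) filtering.
sampled = model.generate(
    **inputs,
    max_new_tokens=20,
    do_sample=True,
    temperature=0.8,  # < 1.0 sharpens the distribution, > 1.0 flattens it
    top_k=50,         # keep only the 50 most likely tokens...
    top_p=0.95,       # ...then keep the smallest set covering 95% of the probability mass
)

for name, output in [("greedy", greedy), ("beam", beams), ("sampling", sampled)]:
    print(name, "->", tokenizer.decode(output[0], skip_special_tokens=True))
```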
Running inference efficiently is challenging: every new token requires a pass over billions of parameters, generation is inherently sequential, and the memory and compute demands grow with both model size and context length.
When we talk about performance, we consider several key metrics:

- Time to First Token (TTFT): how long the user waits before the first token appears, largely determined by the prefill phase
- Time Per Output Token (TPOT): how quickly subsequent tokens are generated, largely determined by the decode phase
- Throughput: how many tokens (or requests) the system can serve per second across all users
- Memory usage: how much GPU memory (VRAM) is needed for the model weights, activations, and intermediate state

We should consider these metrics when we design our inference pipeline and prioritize them based on our use case: an interactive chat application cares most about TTFT and TPOT, while a batch processing job cares most about throughput.
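As a rough illustration of how such numbers can be measured, here is a sketch that uses `transformers` with `gpt2` as a stand-in model and treats single-token generation as a crude proxy for time to first token; real measurements depend heavily on the model, hardware, and serving stack.

```python
import time
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in model; absolute numbers only make sense on your own setup
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
inputs = tokenizer("Explain LLM inference in one sentence.", return_tensors="pt")

# Crude proxy for Time to First Token: prefill plus a single decode step.
start = time.perf_counter()
model.generate(**inputs, max_new_tokens=1, pad_token_id=tokenizer.eos_token_id)
ttft = time.perf_counter() - start

# Crude proxy for throughput: total new tokens divided by total generation time.
n_new = 64
start = time.perf_counter()
model.generate(
    **inputs,
    min_new_tokens=n_new,
    max_new_tokens=n_new,
    pad_token_id=tokenizer.eos_token_id,
)
elapsed = time.perf_counter() - start

print(f"~TTFT: {ttft * 1000:.0f} ms")
print(f"~Throughput: {n_new / elapsed:.1f} tokens/s")
```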
The context length is the maximum number of tokens the LLM can process at once. It affects both:

- How much information the model can take into account at a time (long documents, multi-turn conversations)
- The memory and compute required, since the cost of attention grows quadratically with sequence length
Recent advances have pushed context lengths from thousands to millions of tokens, but this comes with computational trade-offs. Models like Claude 2 (100K tokens) and Claude 2.1 (200K tokens) demonstrate the ongoing evolution in this space.
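If you want to check a given model’s context length programmatically, the model configuration usually exposes it, although the exact field name varies between architectures. A small sketch with `transformers`:

```python
from transformers import AutoConfig

# The field name differs between architectures: GPT-2 calls it `n_positions`,
# while many newer models expose it as `max_position_embeddings`.
config = AutoConfig.from_pretrained("gpt2")  # example model
context_length = getattr(config, "max_position_embeddings", None) or getattr(config, "n_positions", None)
print(f"Context length: {context_length} tokens")  # 1024 for GPT-2
```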
Chat templates are standardized formats that structure conversations with LLMs. They’re crucial because each model family is trained with its own special tokens and message format: the template keeps system instructions, user messages, and assistant responses clearly separated, in exactly the form the model saw during training, and deviating from it degrades the quality of the output.
Here’s a simple example of a chat template:
<|system|>You are a helpful AI assistant.<|endoftext|>
<|user|>What is the capital of France?<|endoftext|>
<|assistant|>The capital of France is Paris.<|endoftext|>
We won’t go into the details of how to create a chat template, but it’s good to know that they exist and that they can be very different. If you’re interested in learning more, you can check out this Chat Templates guide.
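As a brief, hedged sketch of how a tokenizer’s built-in chat template is applied in practice (using `transformers`; the instruction-tuned model below is just one example of a model that ships a chat template):

```python
from transformers import AutoTokenizer

# Example instruction-tuned model that ships with a chat template.
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-beta")

messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "What is the capital of France?"},
]

# Render the conversation into the exact string format the model was trained on,
# including the marker that tells the model it is now the assistant's turn.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```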
Let’s break down the hardware requirements for LLM inference into compute and memory requirements.
| Resource Type | Component | Description |
|---|---|---|
| Memory | Model Weights | Model size (e.g., a 7-billion-parameter model in 16-bit precision ≈ 14 GB) |
| Memory | Working Memory | Computations and system overhead (~2-3 GB) |
| Compute | Prefill Phase | Heavy parallel computation over the entire input prompt |
| Compute | Decode Phase | Sequential generation of new tokens, one forward pass per token |
| Compute | Attention | Compute and memory grow quadratically with sequence length |
The memory usage for LLM inference is dominated by the model weights plus the working memory needed for computations and system overhead, while the compute requirements are dominated by the prefill phase.
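A quick back-of-the-envelope calculation for the weights alone (a sketch only; real deployments also need working memory and cached attention state on top of this):

```python
def weight_memory_gb(num_parameters: float, bytes_per_parameter: float) -> float:
    """Approximate memory needed just to hold the model weights."""
    return num_parameters * bytes_per_parameter / 1e9

# A 7-billion-parameter model in 16-bit precision (2 bytes per parameter) ≈ 14 GB.
print(f"{weight_memory_gb(7e9, 2):.0f} GB")    # 14 GB
# The same model quantized to 4 bits (0.5 bytes per parameter) ≈ 3.5 GB.
print(f"{weight_memory_gb(7e9, 0.5):.1f} GB")  # 3.5 GB
```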
We can optimize memory usage with quantization, which reduces the precision of the model weights, and with attention optimizations, which reduce the memory footprint of the attention mechanism. Some common optimizations are:

- Quantization: storing weights in 8-bit or 4-bit precision instead of 16-bit (see the sketch below)
- KV caching: storing the attention keys and values of previous tokens so they are not recomputed at every decode step
- Optimized attention kernels (e.g., FlashAttention): computing attention without materializing the full attention matrix
- Batching: serving several requests together to make better use of the hardware
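As an example of the first point, here is a minimal sketch of loading a model with 4-bit quantization through `bitsandbytes` (this assumes a CUDA GPU and the `bitsandbytes` package are available; the model name is just an illustrative choice):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Requires a CUDA GPU and the `bitsandbytes` package to be installed.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # store 4-bit weights, compute in bfloat16
)

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.2",  # example 7B model: ~14 GB in fp16, roughly 4 GB in 4-bit
    quantization_config=quant_config,
    device_map="auto",
)
```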
We will cover inference optimizations in more detail in the next chapter.
Understanding LLM inference is essential for harnessing the full potential of these models. By mastering the processes of tokenization, contextualization, and autoregressive decoding—as well as key concepts like attention and decoding strategies—we can ensure that generated text is both coherent and context-aware.
A well-designed prompt further guides the model toward producing the desired output, making prompt engineering a vital part of working with LLMs.