Inference is the process of using a trained model to generate outputs such as text, images, or predictions.
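As a quick sketch of what this looks like in practice (assuming the Hugging Face `transformers` library, with `gpt2` standing in as a placeholder model), a single inference call can be as short as:

```python
from transformers import pipeline

# Load a small text-generation model and run one inference call.
generator = pipeline("text-generation", model="gpt2")
result = generator("The quick brown fox", max_new_tokens=20)

# The pipeline returns a list of generated candidates.
print(result[0]["generated_text"])
```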
While inference might seem straightforward, deploying and using models efficiently requires careful consideration of factors like performance, cost, and reliability. Large Language Models (LLMs) present unique challenges due to their size and computational requirements.
In this chapter, we’ll explore the challenges of inference from multiple perspectives, moving from simple pipelines for vibe testing to production-ready solutions for large-scale deployments. We’ll look at inference both via APIs and locally on your own hardware, and we’ll cover the frameworks and libraries that can help you deploy your models.