TeaCache
TeaCache (Timestep Embedding Aware Cache) is a training-free caching approach that estimates the fluctuating differences among model outputs across timesteps and reuses cached computation when those differences are small, thereby accelerating inference.
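The sketch below illustrates the core skip-or-compute decision under the assumptions above: track how much the timestep-embedding-modulated input changes between steps, accumulate that change, and reuse a cached residual while the accumulated change stays under a threshold. The class name, method names, and the default threshold are hypothetical and the model-specific polynomial rescaling used by the official implementation is omitted; this is a minimal sketch, not the code in the example scripts.

```python
import torch


class TeaCacheState:
    """Minimal sketch of a TeaCache-style skip-or-compute decision (hypothetical helper)."""

    def __init__(self, rel_l1_threshold: float = 0.25):
        self.rel_l1_threshold = rel_l1_threshold  # higher threshold -> more steps skipped
        self.accumulated_distance = 0.0
        self.prev_modulated_input = None          # timestep-modulated input from the previous step
        self.cached_residual = None               # (output - input) from the last full forward

    def should_compute(self, modulated_input: torch.Tensor) -> bool:
        """Decide whether the full transformer must run at this timestep."""
        if self.prev_modulated_input is None or self.cached_residual is None:
            return True  # nothing cached yet: must compute
        # Relative L1 change of the timestep-embedding-modulated input between steps.
        rel_l1 = ((modulated_input - self.prev_modulated_input).abs().mean()
                  / self.prev_modulated_input.abs().mean()).item()
        # The official implementation rescales rel_l1 with a fitted, model-specific
        # polynomial before accumulating; that rescaling is omitted in this sketch.
        self.accumulated_distance += rel_l1
        return self.accumulated_distance >= self.rel_l1_threshold

    def update(self, modulated_input, hidden_in, hidden_out=None):
        """Record state after a step; pass hidden_out only after a full forward."""
        self.prev_modulated_input = modulated_input.detach()
        if hidden_out is not None:
            self.cached_residual = (hidden_out - hidden_in).detach()
            self.accumulated_distance = 0.0  # reset after refreshing the cache

    def apply_cache(self, hidden_in):
        """Skip the transformer blocks by adding back the cached residual."""
        return hidden_in + self.cached_residual
```

In a denoising loop, `should_compute` gates each transformer call: full forwards refresh the cached residual and reset the accumulator, while skipped steps simply add the cached residual to the current hidden states.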
Examples
FLUX
Script: ./flux_teacache.py
Model: FLUX.1-dev
Steps: 50
GPU: A100
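A minimal sketch of how the FLUX example above could be driven with the diffusers `FluxPipeline`; the actual ./flux_teacache.py may load the model and integrate TeaCache differently, and the `apply_teacache` patching step and prompt below are hypothetical.

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.to("cuda")

# Hypothetical: wrap the transformer's forward with a TeaCache-aware version
# that reuses cached residuals when the accumulated change is small.
# apply_teacache(pipe.transformer, rel_l1_threshold=0.25)

image = pipe(
    "An astronaut riding a horse on the moon",
    num_inference_steps=50,  # matches the 50-step setting above
    guidance_scale=3.5,
).images[0]
image.save("flux_teacache.png")
```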
Hunyuan Video
Script: ./hunyuanvideo_teacache.py
Model: Hunyuan Video
Steps: 30
GPU: A100
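A similar sketch for the Hunyuan Video example using the diffusers `HunyuanVideoPipeline`; the actual ./hunyuanvideo_teacache.py may differ in how it loads the model, and the `apply_teacache` step, prompt, and frame settings below are assumptions for illustration.

```python
import torch
from diffusers import HunyuanVideoPipeline
from diffusers.utils import export_to_video

pipe = HunyuanVideoPipeline.from_pretrained(
    "hunyuanvideo-community/HunyuanVideo", torch_dtype=torch.bfloat16
)
pipe.vae.enable_tiling()  # reduce VAE memory use when decoding video latents
pipe.to("cuda")

# Hypothetical: patch the video transformer with a TeaCache-aware forward,
# analogous to the FLUX sketch above.
# apply_teacache(pipe.transformer, rel_l1_threshold=0.15)

frames = pipe(
    prompt="A cat walks on the grass, realistic style",
    num_frames=61,
    num_inference_steps=30,  # matches the 30-step setting above
).frames[0]
export_to_video(frames, "hunyuanvideo_teacache.mp4", fps=15)
```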
The following video was generated with TeaCache enabled. It is nearly identical to the video generated without TeaCache, but was produced at double the speed.
https://github.com/user-attachments/assets/cd9801c5-88ce-4efc-b055-2c7737166f34