---
datasets:
- d3LLM/trajectory_data_dream_32
pipeline_tag: text-generation
library_name: transformers
license: apache-2.0
base_model: Dream-org/Dream-v0-Instruct-7B
tags:
- diffusion
- text-generation
- fast-inference
- d3llm
---

# d3LLM: Ultra-Fast Diffusion LLM using Pseudo-Trajectory Distillation 🚀

This repository contains the **d3LLM-Dream** model, an ultra-fast diffusion language model introduced in the paper [d3LLM: Ultra-Fast Diffusion LLM using Pseudo-Trajectory Distillation](https://huggingface.co/papers/2601.07568).

- 📄 **Paper**: [arXiv:2601.07568](https://huggingface.co/papers/2601.07568)
- 👉 **Code repo**: [https://github.com/hao-ai-lab/d3LLM](https://github.com/hao-ai-lab/d3LLM)
- 🌐 **Blog**: [https://hao-ai-lab.github.io/blogs/text-diffusion/](https://hao-ai-lab.github.io/blogs/text-diffusion/)
- 🕹️ **Demo**: [https://d3llm-team.github.io/](https://d3llm-team.github.io/)

## Model Description

**d3LLM-Dream** is an ultra-fast diffusion language model that achieves high generation speed while maintaining competitive accuracy. It balances accuracy and parallelism by using **pseudo-trajectory distillation** during training and **entropy-based multi-block decoding** during inference.

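The exact decoding schedule is defined in the paper and the official repository. As a rough, self-contained illustration of the entropy-based idea only, the toy sketch below commits the most confident (lowest-entropy) masked positions in parallel at each step; the function name, the threshold value, and the greedy argmax fill are illustrative assumptions, not the repository's API.

```python
# Toy sketch of entropy-based parallel unmasking (illustration only, NOT the
# model's actual decoding code). Positions whose predictive entropy is below a
# threshold are committed together; at least one position is always committed
# so decoding makes progress.
import torch

def entropy_unmask_step(logits, masked, threshold=1.0):
    """logits: [seq_len, vocab]; masked: bool [seq_len], True = still masked.
    Returns (positions to commit, token ids for those positions)."""
    probs = torch.softmax(logits, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(-1)   # [seq_len]
    entropy = entropy.masked_fill(~masked, float("inf"))        # skip decided slots
    commit = masked & (entropy < threshold)                     # confident positions
    if not commit.any():                                        # always commit at least one
        commit[entropy.argmin()] = True
    token_ids = probs.argmax(-1)                                # greedy fill (illustrative)
    return commit.nonzero(as_tuple=True)[0], token_ids[commit]

# Random logits stand in for a model forward pass over a masked block:
torch.manual_seed(0)
logits = torch.randn(8, 32) * 3.0
masked = torch.ones(8, dtype=torch.bool)
positions, tokens = entropy_unmask_step(logits, masked)
print(positions.tolist(), tokens.tolist())
```

In the actual model, the logits come from the diffusion model's forward pass over a partially masked sequence, and the multi-block schedule described in the paper decides which blocks are active at each step.
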
## Key Features

- 🚀 **High throughput**: 4.5× faster than the autoregressive Qwen-2.5-7B on an H100 GPU and 2.5× faster on an A100 GPU, reaching **235.34 tokens/s** on GSM8K-CoT (H100).
- 📊 **High AUP**: achieves high Accuracy Under Parallelism (AUP) across benchmarks.
- 🔧 **Specialized**: optimized for coding and math reasoning tasks.

## Usage

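The snippet below is a minimal sketch of chat-style generation, assuming this checkpoint keeps the same custom-code interface as its base model Dream-v0-Instruct-7B (loaded with `trust_remote_code=True` and sampled through its `diffusion_generate` method). The Hub ID and the decoding arguments shown are assumptions carried over from the base model's example; check the official repository for the options this distilled model actually supports.

```python
# Minimal sketch, assuming this checkpoint follows the Dream-v0-Instruct-7B
# custom-code interface (trust_remote_code=True + diffusion_generate).
# The Hub ID below is a placeholder; replace it with this repository's ID.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "d3LLM/d3LLM-Dream"  # placeholder: use this repo's actual Hub ID
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, trust_remote_code=True
).to("cuda").eval()

messages = [{"role": "user", "content": "Natalia sold clips to 48 friends in April, and half as many in May. How many clips did she sell altogether?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt", return_dict=True
).to("cuda")

# diffusion_generate is defined by the model's remote code; the argument names
# and values below follow the base model's published example and may differ
# for this distilled checkpoint.
out = model.diffusion_generate(
    inputs.input_ids,
    attention_mask=inputs.attention_mask,
    max_new_tokens=256,
    steps=256,
    temperature=0.2,
    top_p=0.95,
    alg="entropy",          # entropy-based unmasking order
    alg_temp=0.0,
    return_dict_in_generate=True,
)
answer = tokenizer.decode(
    out.sequences[0, inputs.input_ids.shape[1]:], skip_special_tokens=True
)
print(answer)
```

Loading with `trust_remote_code=True` executes the modelling code shipped in the model repository, which is what provides the diffusion sampling method used above.
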
For more chat examples and evaluation scripts, visit the [official repository](https://github.com/hao-ai-lab/d3LLM).

## Citation

```bibtex
@article{arxiv'26:d3llm,
  title   = {d3LLM: Ultra-Fast Diffusion LLM using Pseudo-Trajectory Distillation},
  author  = {Yu-Yang Qian and Junda Su and Lanxiang Hu and Peiyuan Zhang and Zhijie Deng and Peng Zhao and Hao Zhang},
  journal = {ArXiv preprint},
  volume  = {arXiv:2601.07568},
  year    = {2026}
}
```