---
license: apache-2.0
license_name: qwen
license_link: https://huggingface.co/Qwen/Qwen2.5-3B/blob/main/LICENSE
pipeline_tag: text-generation
library_name: transformers
base_model:
- Qwen/Qwen2.5-3B
base_model_relation: finetune
language:
- en
tags:
- sdlm
- diffusion language model
- custom_code
datasets:
- dyyyyyyyy/ScaleQuest-Math
- OpenCoder-LLM/opc-sft-stage2
- allenai/tulu-3-sft-mixture
- HuggingFaceTB/smoltalk2
- LipengCS/Table-GPT
- allenai/SciRIFF
---
# SDLM-3B-D4
[\[📂 GitHub\]](https://github.com/OpenGVLab/SDLM) [\[📜 Tech Report\]](https://huggingface.co/papers/xxx) [\[🤗 HuggingFace\]](https://huggingface.co/collections/OpenGVLab/sdlm-68ac82709d7c343ad36aa552)
## Introduction
We propose the <b>S</b>equential <b>D</b>iffusion <b>L</b>anguage <b>M</b>odel (<b>SDLM</b>), which cheaply elicits the parallel prediction capabilities of diffusion models. Specifically, SDLM reduces distribution shift by limiting the prediction range to a fixed block length, and enforces decoding order through a longest-prefix decoding method, significantly improving prediction efficiency while preserving generation quality. Our method can be viewed as a generalization of the autoregressive (AR) paradigm, so pre-trained AR weights can be migrated to the diffusion framework with only minimal instruction fine-tuning.
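
As a rough, self-contained illustration of longest-prefix decoding (a sketch for exposition only, not the repository's implementation; `sdlm_step` and its signature are invented here), a single decoding step could look like this:

```python
import torch

def sdlm_step(block_logits: torch.Tensor, threshold: float = 0.5):
    """Commit the longest confident prefix of a predicted block.

    block_logits: (B, vocab_size) logits for the B positions of the
    current block, produced in one forward pass (B = block length).
    """
    probs = torch.softmax(block_logits.float(), dim=-1)
    conf, tokens = probs.max(dim=-1)   # per-position confidence and argmax
    n_accept = 1                        # always commit >= 1 token per step
    for i in range(1, conf.size(0)):
        if conf[i] < threshold:
            break                       # stop at the first low-confidence slot
        n_accept = i + 1
    return tokens[:n_accept]
```

When every later position falls below the threshold, the step degenerates to ordinary autoregressive decoding (one token per forward pass), which is why SDLM can be seen as a generalization of the AR paradigm.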

## SDLM Family
In the following table, we provide an overview of the SDLM series.
| Model Name  | Base Model 🤗 | HF Link 🤗 |
| ----------- | ------------------------------------------------------ | --------------------------------------------------------------------- |
| SDLM-3B-D4  | [Qwen2.5-3B](https://huggingface.co/Qwen/Qwen2.5-3B)   | [OpenGVLab/SDLM-3B-D4](https://huggingface.co/OpenGVLab/SDLM-3B-D4)   |
| SDLM-3B-D8  | [Qwen2.5-3B](https://huggingface.co/Qwen/Qwen2.5-3B)   | [OpenGVLab/SDLM-3B-D8](https://huggingface.co/OpenGVLab/SDLM-3B-D8)   |
| SDLM-32B-D4 | [Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) | [OpenGVLab/SDLM-32B-D4](https://huggingface.co/OpenGVLab/SDLM-32B-D4) |
## Model Architecture
We propose a sequential blockwise masked prediction method that reduces error accumulation in diffusion-based generation. Our method leverages the observation that predictions for tokens at lower positional indices typically benefit from more reliable contextual information, resulting in lower deviation and improved accuracy.
* **(a) Training pipeline.** The reordered input enables a structured attention mask with a causal prefix (top-left), a visible cross-block prefix (bottom-left), and intra-block bidirectional attention (bottom-right); a minimal mask sketch follows this list.
* **(b) Sampling pipeline.** Confidence-based dynamic block decoding with KV-cache reuse. At each step, a block of B tokens is predicted with B-1 padding masks; the longest high-confidence prefix is committed as the dynamic output, and cached KV states keep decoding efficient.
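
Below is a minimal sketch of such a structured mask, assuming equal-sized generation blocks appended to a causal prompt prefix; the function name and layout are illustrative, not the repository's actual training code, which also handles the reordered inputs and loss masking.

```python
import torch

def block_attention_mask(seq_len: int, block: int, prefix_len: int = 0):
    """Illustrative SDLM-style attention mask (True = may attend).

    - prompt (prefix) tokens attend causally, as in a plain AR model;
    - generated tokens see the full prefix and every earlier block;
    - attention is bidirectional *within* each block of `block` tokens.
    """
    idx = torch.arange(seq_len)
    # Map each position to a "block id": prefix tokens form singleton
    # blocks (pure causal), generated tokens are grouped in chunks.
    blk = torch.where(
        idx < prefix_len,
        idx,                                       # singleton -> causal
        prefix_len + (idx - prefix_len) // block   # grouped blocks
    )
    # Position i may attend to j iff j's block is not later than i's.
    return blk[None, :] <= blk[:, None]
```

For example, `block_attention_mask(8, 4, prefix_len=4)` gives causal attention over a 4-token prompt followed by one 4-token block that attends bidirectionally within itself and to the whole prompt.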

## Performance
### Long-Form Benchmarks
SDLM delivers strong performance with significantly faster decoding. On the MATH-500 benchmark, it runs roughly 2x faster than comparable autoregressive models while matching their accuracy, and achieves up to a 5x speedup over other diffusion language models.

### General Multiple-Choice Benchmarks

### Block Size & Self-Speculative Decoding

## Trade-off Between Performance and Speed
Adjusting the confidence threshold τ yields a controllable trade-off between speed and performance for SDLM-3B (B=4) and SDLM-3B (B=8). SpeedUp denotes the average number of tokens emitted per forward pass; for example, generating 1,024 tokens in 512 forward passes corresponds to a SpeedUp of 2.

## Inference
1. Install the dependencies. Key package versions:
```
transformers==4.37.2
torch>=2.5.0
```
2. Download the model generation script [sdlm_inference.py](https://github.com/OpenGVLab/SDLM/blob/main/sdlm_inference.py) to your working directory.
3. We provide example code for running `SDLM-3B-D4` with `transformers`:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

from sdlm_inference import SDLM_generate

if __name__ == "__main__":
    ckpt_hf = 'OpenGVLab/SDLM-3B-D4'

    # Load the model (custom code) and tokenizer.
    model = AutoModelForCausalLM.from_pretrained(
        ckpt_hf,
        attn_implementation="eager",
        trust_remote_code=True
    ).to(dtype=torch.float16)
    tokenizer = AutoTokenizer.from_pretrained(ckpt_hf)

    # Build a chat-formatted prompt.
    prompt = 'Write a Fibonacci function in Python.'
    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": prompt}
    ]
    text = tokenizer.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True
    )
    model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

    # Blockwise diffusion decoding: up to n_future_tokens per forward pass,
    # committing the longest prefix whose confidence exceeds `threshold`.
    response, history = SDLM_generate(
        model,
        tokenizer,
        model_inputs,
        max_gen_len=1024,
        temperature=0,
        threshold=0.5,
        n_future_tokens=4,
        alg='prob_conf',  # prob_conf | entropy_conf | self_speculative
        save_history=True,
        use_cache=True
    )

    print('response: ', response[0])
    print('======= history')
    for item in history:
        print('cur total tokens ', item[1])
        print(item[0][0])
        print('--------')
```
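
Per the trade-off section above, the confidence threshold controls how many tokens are committed per forward pass. A hedged sketch for exploring this, reusing `model`, `tokenizer`, and `model_inputs` from the example, and *assuming* each `history` entry corresponds to one forward pass with the cumulative token count in `item[1]`:

```python
# Hypothetical threshold sweep; the history-layout assumptions above may
# need adjusting against sdlm_inference.py.
for tau in (0.3, 0.5, 0.7, 0.9):
    response, history = SDLM_generate(
        model, tokenizer, model_inputs,
        max_gen_len=1024, temperature=0,
        threshold=tau, n_future_tokens=4,
        alg='prob_conf', save_history=True, use_cache=True
    )
    total_tokens = history[-1][1]           # tokens emitted in total
    speedup = total_tokens / len(history)   # avg tokens per forward pass
    print(f"tau={tau}: ~{speedup:.2f} tokens/forward")
```

Lower thresholds commit longer prefixes per pass (faster but riskier); higher thresholds approach one token per pass (AR-like, safest).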
## Citation
If you find this project useful in your research, please consider citing:
```BibTeX
@article{SDLM,
title={Sequential Diffusion Language Models},
author={},
journal={arXiv preprint arXiv:2025.xxxxx},
year={2025}
}
```