nielsr (HF Staff) committed on
Commit ca7adcb · verified · 1 Parent(s): a5c5e0d

Improve model card: Add paper, code, project links, abstract, and comprehensive usage


This PR significantly enhances the model card by:
- Adding prominent links to the associated paper ([GLM-4.5: Agentic, Reasoning, and Coding (ARC) Foundation Models](https://huggingface.co/papers/2508.06471)), the main GitHub repository (`https://github.com/zai-org/GLM-4.5`), and the project's technical blog (`https://z.ai/blog/glm-4.5`).
- Including the paper's abstract for quick overview.
- Updating the outdated note that the technical report was forthcoming, referencing the now-available paper.
- Providing a comprehensive Python code snippet for using the model with the `transformers` library, explicitly demonstrating both "thinking" and "non-thinking" inference modes, which is a key feature of this model.
- Integrating "Model Downloads" and "System Requirements" sections directly from the GitHub README to make the model card a more complete resource.

These changes improve discoverability, clarity, and utility for users interacting with the model on the Hub.

Files changed (1)
README.md +224 -45
README.md CHANGED
@@ -1,45 +1,224 @@
- ---
- license: mit
- language:
- - en
- - zh
- pipeline_tag: text-generation
- library_name: transformers
- ---
-
- # GLM-4.5-FP8
-
- <div align="center">
- <img src=https://raw.githubusercontent.com/zai-org/GLM-4.5/refs/heads/main/resources/logo.svg width="15%"/>
- </div>
- <p align="center">
- 👋 Join our <a href="https://discord.gg/QR7SARHRxK" target="_blank">Discord</a> community.
- <br>
- 📖 Check out the GLM-4.5 <a href="https://z.ai/blog/glm-4.5" target="_blank">technical blog</a>.
- <br>
- 📍 Use GLM-4.5 API services on <a href="https://docs.z.ai/guides/llm/glm-4.5">Z.ai API Platform (Global)</a> or <br> <a href="https://docs.bigmodel.cn/cn/guide/models/text/glm-4.5">Zhipu AI Open Platform (Mainland China)</a>.
- <br>
- 👉 One click to <a href="https://chat.z.ai">GLM-4.5</a>.
- </p>
-
- ## Model Introduction
-
- The **GLM-4.5** series models are foundation models designed for intelligent agents. GLM-4.5 has **355** billion total parameters with **32** billion active parameters, while GLM-4.5-Air adopts a more compact design with **106** billion total parameters and **12** billion active parameters. GLM-4.5 models unify reasoning, coding, and intelligent agent capabilities to meet the complex demands of intelligent agent applications.
-
- Both GLM-4.5 and GLM-4.5-Air are hybrid reasoning models that provide two modes: thinking mode for complex reasoning and tool usage, and non-thinking mode for immediate responses.
-
- We have open-sourced the base models, hybrid reasoning models, and FP8 versions of the hybrid reasoning models for both GLM-4.5 and GLM-4.5-Air. They are released under the MIT open-source license and can be used commercially and for secondary development.
-
- As demonstrated in our comprehensive evaluation across 12 industry-standard benchmarks, GLM-4.5 achieves exceptional performance with a score of **63.2**, in the **3rd** place among all the proprietary and open-source models. Notably, GLM-4.5-Air delivers competitive results at **59.8** while maintaining superior efficiency.
-
- ![bench](https://raw.githubusercontent.com/zai-org/GLM-4.5/refs/heads/main/resources/bench.png)
-
- For more eval results, show cases, and technical details, please visit
- our [technical blog](https://z.ai/blog/glm-4.5). The technical report will be released soon.
-
-
- The model code, tool parser and reasoning parser can be found in the implementation of [transformers](https://github.com/huggingface/transformers/tree/main/src/transformers/models/glm4_moe), [vLLM](https://github.com/vllm-project/vllm/blob/main/vllm/model_executor/models/glm4_moe_mtp.py) and [SGLang](https://github.com/sgl-project/sglang/blob/main/python/sglang/srt/models/glm4_moe.py).
-
- ## Quick Start
-
- Please refer our [github page](https://github.com/zai-org/GLM-4.5) for more detail.
+ ---
+ language:
+ - en
+ - zh
+ library_name: transformers
+ license: mit
+ pipeline_tag: text-generation
+ ---
+
+ # GLM-4.5-FP8
+
+ [📚 Paper](https://huggingface.co/papers/2508.06471) | [💻 Code](https://github.com/zai-org/GLM-4.5) | [🌐 Project Page](https://z.ai/blog/glm-4.5)
+
+ <div align="center">
+ <img src="https://raw.githubusercontent.com/zai-org/GLM-4.5/refs/heads/main/resources/logo.svg" width="15%"/>
+ </div>
+ <p align="center">
+ 👋 Join our <a href="https://discord.gg/QR7SARHRxK" target="_blank">Discord</a> community.
+ <br>
+ 📖 Check out the GLM-4.5 <a href="https://z.ai/blog/glm-4.5" target="_blank">technical blog</a>.
+ <br>
+ 📍 Use GLM-4.5 API services on <a href="https://docs.z.ai/guides/llm/glm-4.5">Z.ai API Platform (Global)</a> or <br> <a href="https://docs.bigmodel.cn/cn/guide/models/text/glm-4.5">Zhipu AI Open Platform (Mainland China)</a>.
+ <br>
+ 👉 One click to try <a href="https://chat.z.ai">GLM-4.5</a>.
+ </p>
+
+ ## Paper Abstract
+
+ We present GLM-4.5, an open-source Mixture-of-Experts (MoE) large language model with 355B total parameters and 32B activated parameters, featuring a hybrid reasoning method that supports both thinking and direct response modes. Through multi-stage training on 23T tokens and comprehensive post-training with expert model iteration and reinforcement learning, GLM-4.5 achieves strong performance across agentic, reasoning, and coding (ARC) tasks, scoring 70.1% on TAU-Bench, 91.0% on AIME 24, and 64.2% on SWE-bench Verified. With much fewer parameters than several competitors, GLM-4.5 ranks 3rd overall among all evaluated models and 2nd on agentic benchmarks. We release both GLM-4.5 (355B parameters) and a compact version, GLM-4.5-Air (106B parameters), to advance research in reasoning and agentic AI systems. Code, models, and more information are available at https://github.com/zai-org/GLM-4.5.
+
+ ## Model Introduction
+
+ The **GLM-4.5** series models are foundation models designed for intelligent agents. GLM-4.5 has **355** billion total parameters with **32** billion active parameters, while GLM-4.5-Air adopts a more compact design with **106** billion total parameters and **12** billion active parameters. GLM-4.5 models unify reasoning, coding, and intelligent agent capabilities to meet the complex demands of intelligent agent applications.
+
+ Both GLM-4.5 and GLM-4.5-Air are hybrid reasoning models that provide two modes: a thinking mode for complex reasoning and tool usage, and a non-thinking mode for immediate responses.
+
+ We have open-sourced the base models, hybrid reasoning models, and FP8 versions of the hybrid reasoning models for both GLM-4.5 and GLM-4.5-Air. They are released under the MIT open-source license and can be used commercially and for secondary development.
+
+ As demonstrated in our comprehensive evaluation across 12 industry-standard benchmarks, GLM-4.5 achieves exceptional performance with a score of **63.2**, placing **3rd** among all proprietary and open-source models. Notably, GLM-4.5-Air delivers competitive results at **59.8** while maintaining superior efficiency.
+
+ ![bench](https://raw.githubusercontent.com/zai-org/GLM-4.5/refs/heads/main/resources/bench.png)
+
+ For more evaluation results, showcases, and technical details, please visit our [technical blog](https://z.ai/blog/glm-4.5) or refer to the [technical report (paper)](https://huggingface.co/papers/2508.06471).
+
+ The model code, tool parser, and reasoning parser can be found in the implementations of [transformers](https://github.com/huggingface/transformers/tree/main/src/transformers/models/glm4_moe), [vLLM](https://github.com/vllm-project/vllm/blob/main/vllm/model_executor/models/glm4_moe_mtp.py) and [SGLang](https://github.com/sgl-project/sglang/blob/main/python/sglang/srt/models/glm4_moe.py).
+
+ ## Model Downloads
+
+ You can try the model directly on [Hugging Face](https://huggingface.co/spaces/zai-org/GLM-4.5-Space) or [ModelScope](https://modelscope.cn/studios/ZhipuAI/GLM-4.5-Demo), or download the weights from the links below.
+
+ | Model | Download Links | Model Size | Precision |
+ |------------------|-----------------------------------------------------------------------------------------------------------------------------------------------|------------|-----------|
+ | GLM-4.5 | [🤗 Hugging Face](https://huggingface.co/zai-org/GLM-4.5)<br> [🤖 ModelScope](https://modelscope.cn/models/ZhipuAI/GLM-4.5) | 355B-A32B | BF16 |
+ | GLM-4.5-Air | [🤗 Hugging Face](https://huggingface.co/zai-org/GLM-4.5-Air)<br> [🤖 ModelScope](https://modelscope.cn/models/ZhipuAI/GLM-4.5-Air) | 106B-A12B | BF16 |
+ | GLM-4.5-FP8 | [🤗 Hugging Face](https://huggingface.co/zai-org/GLM-4.5-FP8)<br> [🤖 ModelScope](https://modelscope.cn/models/ZhipuAI/GLM-4.5-FP8) | 355B-A32B | FP8 |
+ | GLM-4.5-Air-FP8 | [🤗 Hugging Face](https://huggingface.co/zai-org/GLM-4.5-Air-FP8)<br> [🤖 ModelScope](https://modelscope.cn/models/ZhipuAI/GLM-4.5-Air-FP8) | 106B-A12B | FP8 |
+ | GLM-4.5-Base | [🤗 Hugging Face](https://huggingface.co/zai-org/GLM-4.5-Base)<br> [🤖 ModelScope](https://modelscope.cn/models/ZhipuAI/GLM-4.5-Base) | 355B-A32B | BF16 |
+ | GLM-4.5-Air-Base | [🤗 Hugging Face](https://huggingface.co/zai-org/GLM-4.5-Air-Base)<br> [🤖 ModelScope](https://modelscope.cn/models/ZhipuAI/GLM-4.5-Air-Base) | 106B-A12B | BF16 |
+
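+ For local deployment, a minimal download sketch using `huggingface_hub` is shown below; the `local_dir` value is an example path, and note that the full FP8 checkpoint occupies several hundred GB of disk:
+
+ ```python
+ from huggingface_hub import snapshot_download
+
+ # Fetch all files of the FP8 checkpoint into a local directory.
+ local_path = snapshot_download(
+     repo_id="zai-org/GLM-4.5-FP8",
+     local_dir="./GLM-4.5-FP8",  # example target path
+ )
+ print("Model downloaded to:", local_path)
+ ```
+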
+ ## System Requirements
+
+ ### Inference
+
+ We provide minimum and recommended configurations for "full-featured" model inference. The data in the tables below is based on the following conditions:
+
+ 1. All models use MTP layers and specify `--speculative-num-steps 3 --speculative-eagle-topk 1 --speculative-num-draft-tokens 4` to ensure competitive inference speed.
+ 2. The `cpu-offload` parameter is not used.
+ 3. Inference batch size does not exceed `8`.
+ 4. All are executed on devices that natively support FP8 inference, ensuring both weights and cache are in FP8 format.
+ 5. Server memory must exceed `1TB` to ensure normal model loading and operation.
+
+ The models can run under the configurations in the table below:
+
+ | Model | Precision | GPU Type and Count | Test Framework |
+ |-------------|-----------|----------------------|----------------|
+ | GLM-4.5 | BF16 | H100 x 16 / H200 x 8 | sglang |
+ | GLM-4.5 | FP8 | H100 x 8 / H200 x 4 | sglang |
+ | GLM-4.5-Air | BF16 | H100 x 4 / H200 x 2 | sglang |
+ | GLM-4.5-Air | FP8 | H100 x 2 / H200 x 1 | sglang |
+
+ Under the configurations in the table below, the models can utilize their full 128K context length:
+
+ | Model | Precision | GPU Type and Count | Test Framework |
+ |-------------|-----------|-----------------------|----------------|
+ | GLM-4.5 | BF16 | H100 x 32 / H200 x 16 | sglang |
+ | GLM-4.5 | FP8 | H100 x 16 / H200 x 8 | sglang |
+ | GLM-4.5-Air | BF16 | H100 x 8 / H200 x 4 | sglang |
+ | GLM-4.5-Air | FP8 | H100 x 4 / H200 x 2 | sglang |
+
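+ As a rough sanity check on these GPU counts, the back-of-envelope sketch below estimates per-GPU weight memory under tensor parallelism. It deliberately ignores KV cache, activations, and framework overhead; real deployments also need substantial KV-cache memory, especially at the full 128K context, which is roughly why the long-context table doubles the GPU counts:
+
+ ```python
+ # Approximate weight memory per GPU for a tensor-parallel deployment.
+ # Ignores KV cache, activations, MTP layers, and framework overhead.
+ BYTES_PER_PARAM = {"BF16": 2.0, "FP8": 1.0}
+
+ def weight_gb_per_gpu(total_params_b: float, precision: str, num_gpus: int) -> float:
+     """Gigabytes of weights each GPU holds when sharding evenly."""
+     return total_params_b * BYTES_PER_PARAM[precision] / num_gpus
+
+ # GLM-4.5 (355B) in FP8 on 8x H100 (80 GB each):
+ print(f"{weight_gb_per_gpu(355, 'FP8', 8):.0f} GB per GPU")   # ~44 GB, leaving headroom for cache
+ # GLM-4.5-Air (106B) in FP8 on 1x H200 (141 GB):
+ print(f"{weight_gb_per_gpu(106, 'FP8', 1):.0f} GB per GPU")   # ~106 GB
+ ```
+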
+ ### Fine-tuning
+
+ The code can run under the configurations in the table below using [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory):
+
+ | Model | GPU Type and Count | Strategy | Batch Size (per GPU) |
+ |-------------|--------------------|----------|----------------------|
+ | GLM-4.5 | H100 x 16 | LoRA | 1 |
+ | GLM-4.5-Air | H100 x 4 | LoRA | 1 |
+
+ The code can run under the configurations in the table below using [Swift](https://github.com/modelscope/ms-swift):
+
+ | Model | GPU Type and Count | Strategy | Batch Size (per GPU) |
+ |-------------|--------------------|----------|----------------------|
+ | GLM-4.5 | H20 (96GiB) x 16 | LoRA | 1 |
+ | GLM-4.5-Air | H20 (96GiB) x 4 | LoRA | 1 |
+ | GLM-4.5 | H20 (96GiB) x 128 | SFT | 1 |
+ | GLM-4.5-Air | H20 (96GiB) x 32 | SFT | 1 |
+ | GLM-4.5 | H20 (96GiB) x 128 | RL | 1 |
+ | GLM-4.5-Air | H20 (96GiB) x 32 | RL | 1 |
+
+ ## Quick Start
+
+ For more comprehensive details and setup instructions, please refer to our [GitHub page](https://github.com/zai-org/GLM-4.5).
+
+ ### Transformers Inference
+
+ Here is a basic example of running inference with the `transformers` library, demonstrating both thinking and non-thinking modes. The `enable_thinking` flag passed to the chat template is the same switch that vLLM and SGLang expose via `chat_template_kwargs` (see "Request Parameter Instructions" below):
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ # Load model and tokenizer. The glm4_moe architecture is supported natively
+ # by recent transformers versions, so trust_remote_code is not required.
+ model_id = "zai-org/GLM-4.5-FP8"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id,
+     torch_dtype="auto",      # resolves to the checkpoint's configured precision
+     low_cpu_mem_usage=True,
+     device_map="auto",
+ )
+ model.eval()
+
+ messages = [
+     {"role": "user", "content": "Hello, how are you?"},
+ ]
+
+ # Non-thinking mode (direct response): pass `enable_thinking=False` to the chat
+ # template. Suitable for straightforward questions that do not require complex
+ # reasoning or tool usage.
+ input_ids_nothink = tokenizer.apply_chat_template(
+     messages, add_generation_prompt=True, tokenize=True,
+     return_tensors="pt", enable_thinking=False,
+ ).to(model.device)
+ outputs_nothink = model.generate(input_ids_nothink, max_new_tokens=100)
+ print("Non-thinking mode response:",
+       tokenizer.decode(outputs_nothink[0][input_ids_nothink.shape[1]:], skip_special_tokens=True))
+
+ # Thinking mode (default): omit `enable_thinking` or set it to True. The model
+ # reasons step by step before answering, which helps on complex tasks and tool use.
+ input_ids_think = tokenizer.apply_chat_template(
+     messages, add_generation_prompt=True, tokenize=True,
+     return_tensors="pt",
+ ).to(model.device)
+ outputs_think = model.generate(input_ids_think, max_new_tokens=100)
+ print("Thinking mode response:",
+       tokenizer.decode(outputs_think[0][input_ids_think.shape[1]:], skip_special_tokens=True))
+ ```
+
+ ### vLLM
+
+ + Both the BF16 and FP8 versions can be served with the following command:
+
+ ```shell
+ vllm serve zai-org/GLM-4.5-Air \
+     --tensor-parallel-size 8 \
+     --tool-call-parser glm45 \
+     --reasoning-parser glm45 \
+     --enable-auto-tool-choice \
+     --served-model-name glm-4.5-air
+ ```
+
+ If you're using 8x H100 GPUs and encounter insufficient memory when running the GLM-4.5 model, you'll need `--cpu-offload-gb 16` (only applicable to vLLM).
+
+ If you encounter FlashInfer issues, use `VLLM_ATTENTION_BACKEND=XFORMERS` as a temporary fallback. You can also specify `TORCH_CUDA_ARCH_LIST='9.0+PTX'` to use FlashInfer (different GPUs require different `TORCH_CUDA_ARCH_LIST` values; please check accordingly).
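+
+ As a quick check that the server is up, here is a minimal sketch using the OpenAI Python client. It assumes the default vLLM OpenAI-compatible endpoint at `http://localhost:8000/v1`, the `glm-4.5-air` served model name from the command above, and `pip install openai`:
+
+ ```python
+ from openai import OpenAI
+
+ # vLLM exposes an OpenAI-compatible API; the api_key value is a placeholder.
+ client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
+
+ response = client.chat.completions.create(
+     model="glm-4.5-air",  # matches --served-model-name above
+     messages=[{"role": "user", "content": "Summarize GLM-4.5 in one sentence."}],
+     max_tokens=256,
+ )
+ print(response.choices[0].message.content)
+ ```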
+
+ ### SGLang
+
+ + BF16
+
+ ```shell
+ python3 -m sglang.launch_server \
+     --model-path zai-org/GLM-4.5-Air \
+     --tp-size 8 \
+     --tool-call-parser glm45 \
+     --reasoning-parser glm45 \
+     --speculative-algorithm EAGLE \
+     --speculative-num-steps 3 \
+     --speculative-eagle-topk 1 \
+     --speculative-num-draft-tokens 4 \
+     --mem-fraction-static 0.7 \
+     --served-model-name glm-4.5-air \
+     --host 0.0.0.0 \
+     --port 8000
+ ```
+
+ + FP8
+
+ ```shell
+ python3 -m sglang.launch_server \
+     --model-path zai-org/GLM-4.5-Air-FP8 \
+     --tp-size 4 \
+     --tool-call-parser glm45 \
+     --reasoning-parser glm45 \
+     --speculative-algorithm EAGLE \
+     --speculative-num-steps 3 \
+     --speculative-eagle-topk 1 \
+     --speculative-num-draft-tokens 4 \
+     --mem-fraction-static 0.7 \
+     --disable-shared-experts-fusion \
+     --served-model-name glm-4.5-air-fp8 \
+     --host 0.0.0.0 \
+     --port 8000
+ ```
+
+ ### Request Parameter Instructions
+
+ + When using `vLLM` and `SGLang`, thinking mode is enabled by default when sending requests. To disable it, add the `extra_body={"chat_template_kwargs": {"enable_thinking": False}}` parameter, as shown in the sketch below.
+ + Both frameworks support tool calling. Please use the OpenAI-style tool description format for calls.
+ + For specific code, please refer to `api_request.py` in the `inference` folder.
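+
+ As an illustration of both switches at once, here is a hedged sketch that disables thinking and passes an OpenAI-style tool through the OpenAI-compatible API served above. The endpoint, API key placeholder, and `get_weather` tool are assumptions for this example:
+
+ ```python
+ from openai import OpenAI
+
+ client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
+
+ # OpenAI-style tool description, as required for tool calling.
+ tools = [{
+     "type": "function",
+     "function": {
+         "name": "get_weather",  # hypothetical tool, for illustration only
+         "description": "Get the current weather for a city.",
+         "parameters": {
+             "type": "object",
+             "properties": {"city": {"type": "string"}},
+             "required": ["city"],
+         },
+     },
+ }]
+
+ response = client.chat.completions.create(
+     model="glm-4.5-air",
+     messages=[{"role": "user", "content": "What's the weather in Beijing?"}],
+     tools=tools,
+     # Thinking mode is on by default; this disables it for a direct response.
+     extra_body={"chat_template_kwargs": {"enable_thinking": False}},
+ )
+ message = response.choices[0].message
+ print(message.tool_calls or message.content)
+ ```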