---
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
tags:
- vllm
language:
- en
- zh
base_model:
- ByteDance-Seed/Seed-OSS-36B-Base
---

<div align="center">
👋 Hi, everyone!
<br>
We are <b>ByteDance Seed Team.</b>
</div>

<p align="center">
You can get to know us better through the following channels👇
<br>
<a href="https://seed.bytedance.com/">
<img src="https://img.shields.io/badge/Website-%231e37ff?style=for-the-badge&logo=bytedance&logoColor=white"></a>
</p>

![seed logo](https://github.com/user-attachments/assets/c42e675e-497c-4508-8bb9-093ad4d1f216)


# Seed-OSS Open-Source Models
<p align="center">
<a href="https://github.com/ByteDance-Seed/seed-oss">
<img src="https://img.shields.io/badge/Seed-Project Page-yellow"></a>
<a href="https://github.com/ByteDance-Seed/seed-oss">
<img src="https://img.shields.io/badge/Seed-Tech Report Coming Soon-red"></a>
<a href="https://huggingface.co/ByteDance-Seed">
<img src="https://img.shields.io/badge/Seed-Hugging Face-orange"></a>
<br>
<a href="./LICENSE">
<img src="https://img.shields.io/badge/License-Apache2.0-blue"></a>
</p>

> [!NOTE]
> This model card is dedicated to the `Seed-OSS-36B-Instruct` model.

## News
- [2025/08/20] 🔥 We release `Seed-OSS-36B-Base` (versions both with and without synthetic data) and `Seed-OSS-36B-Instruct`.

## Introduction
Seed-OSS is a series of open-source large language models developed by ByteDance's Seed Team, designed for powerful long-context, reasoning, and agentic capabilities, strong general performance, and versatile developer-friendly features. Although trained with only 12T tokens, Seed-OSS achieves excellent performance on several popular open benchmarks.

We release this series of models to the open-source community under the Apache-2.0 license.

> [!NOTE]
> Seed-OSS is primarily optimized for international (i18n) use cases.

### Key Features
- **Flexible Control of Thinking Budget**: Users can flexibly adjust the reasoning length as needed. This dynamic control of reasoning length improves inference efficiency in practical application scenarios.
- **Enhanced Reasoning Capability**: Specifically optimized for reasoning tasks while maintaining balanced and excellent general capabilities.
- **Agentic Intelligence**: Performs exceptionally well in agentic tasks such as tool use and issue resolution.
- **Research-Friendly**: Because the inclusion of synthetic instruction data in pre-training may affect post-training research, we release pre-trained models both with and without instruction data, providing the research community with more diverse options.
- **Native Long Context**: Trained natively with a context length of up to 512K tokens.

### Model Summary

Seed-OSS adopts the popular causal language model architecture with RoPE, GQA attention, RMSNorm, and SwiGLU activation.

<div align="center">

| | |
|:---:|:---:|
| | **Seed-OSS-36B** |
| **Parameters** | 36B |
| **Attention** | GQA |
| **Activation Function** | SwiGLU |
| **Number of Layers** | 64 |
| **Number of QKV Heads** | 80 / 8 / 8 |
| **Head Size** | 128 |
| **Hidden Size** | 5120 |
| **Vocabulary Size** | 155K |
| **Context Length** | 512K |
| **RoPE Base Frequency** | 1e7 |

</div>

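The GQA layout above directly determines the KV-cache footprint at inference time. As a rough sanity check (our own back-of-the-envelope arithmetic from the table, not an official figure):

```python
# Rough KV-cache size estimate for Seed-OSS-36B, using the Model Summary
# table above. This is a standard back-of-the-envelope calculation, not an
# official figure; it ignores any cache-specific optimizations.
NUM_LAYERS = 64
NUM_KV_HEADS = 8   # GQA: 8 KV heads vs. 80 query heads
HEAD_SIZE = 128
BYTES_PER_VALUE = 2  # bf16

def kv_cache_bytes_per_token():
    # Each layer stores K and V, each of size num_kv_heads * head_size.
    return NUM_LAYERS * NUM_KV_HEADS * HEAD_SIZE * 2 * BYTES_PER_VALUE

per_token = kv_cache_bytes_per_token()
print(per_token)                            # 262144 bytes = 256 KiB per token
print(per_token * 512 * 1024 / 2**30)       # 128.0 GiB for the full 512K context
```

The 10x reduction from 80 query heads to 8 KV heads is what makes the 512K native context practical at all; with full multi-head attention the cache would be ten times larger.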
## Evaluation Results

### Seed-OSS-36B-Base

Incorporating synthetic instruction data into pretraining leads to improved performance on most benchmarks. We adopt the version augmented with synthetic instruction data (i.e., *w/ syn.*) as `Seed-OSS-36B-Base`. We also release `Seed-OSS-36B-Base-woSyn`, trained without such data (i.e., *w/o syn.*), offering the community a high-performance foundation model unaffected by synthetic instruction data.

<div align="center">
<table>
<thead>
<tr>
<th align="center">Benchmark</th>
<th align="center"><sup><a href="https://seed.bytedance.com/en/seed1_6">Seed1.6-Base</a></sup></th>
<th align="center"><sup>Qwen3-30B-A3B-Base-2507*</sup></th>
<th align="center"><sup>Qwen2.5-32B-Base*</sup></th>
<th align="center"><sup>Seed-OSS-36B-Base<br>(<i>w/ syn.</i>)</sup></th>
<th align="center"><sup>Seed-OSS-36B-Base-woSyn<br>(<i>w/o syn.</i>)</sup></th>
</tr>
</thead>
<tbody>
<tr>
<td align="center" colspan=6><strong>Knowledge</strong></td>
</tr>
<tr>
<td align="center">MMLU-Pro</td>
<td align="center">70</td>
<td align="center">59.8</td>
<td align="center">58.5 (55.1)</td>
<td align="center"><b>65.1</b></td>
<td align="center">60.4</td>
</tr>
<tr>
<td align="center">MMLU</td>
<td align="center">88.8</td>
<td align="center">82.7</td>
<td align="center">84 (83.3)</td>
<td align="center"><b>84.9</b></td>
<td align="center">84.8</td>
</tr>
<tr>
<td align="center">TriviaQA</td>
<td align="center">91</td>
<td align="center">76.2</td>
<td align="center">76</td>
<td align="center"><b>82.1</b></td>
<td align="center">81.9</td>
</tr>
<tr>
<td align="center">GPQA-D</td>
<td align="center">43.4</td>
<td align="center"><b>37</b></td>
<td align="center">29.3</td>
<td align="center">31.7</td>
<td align="center">35.2</td>
</tr>
<tr>
<td align="center">SimpleQA</td>
<td align="center">17.1</td>
<td align="center">7.2</td>
<td align="center">6.1</td>
<td align="center">5.8</td>
<td align="center"><b>7.4</b></td>
</tr>

<tr>
<td align="center" colspan=6><strong>Reasoning</strong></td>
</tr>
<tr>
<td align="center">BBH</td>
<td align="center">92.1</td>
<td align="center">81.4</td>
<td align="center">79.1 (84.5)</td>
<td align="center"><b>87.7</b></td>
<td align="center">87.2</td>
</tr>
<tr>
<td align="center">AGIEval-en</td>
<td align="center">78</td>
<td align="center">66.4</td>
<td align="center">65.6</td>
<td align="center"><b>70.7</b></td>
<td align="center">70.1</td>
</tr>

<tr>
<td align="center" colspan=6><strong>Math</strong></td>
</tr>
<tr>
<td align="center">GSM8K</td>
<td align="center">93.1</td>
<td align="center">87</td>
<td align="center">87.5 (92.9)</td>
<td align="center"><b>90.8</b></td>
<td align="center">90.3</td>
</tr>
<tr>
<td align="center">MATH</td>
<td align="center">72.9</td>
<td align="center">61.1</td>
<td align="center">63.5 (57.7)</td>
<td align="center"><b>81.7</b></td>
<td align="center">61.3</td>
</tr>

<tr>
<td align="center" colspan=6><strong>Coding</strong></td>
</tr>
<tr>
<td align="center">MBPP</td>
<td align="center">83.6</td>
<td align="center">78.8</td>
<td align="center">77.8 (84.5)</td>
<td align="center"><b>80.6</b></td>
<td align="center">74.6</td>
</tr>
<tr>
<td align="center">HumanEval</td>
<td align="center">78</td>
<td align="center">70.7</td>
<td align="center">47.6 (58.5)</td>
<td align="center"><b>76.8</b></td>
<td align="center">75.6</td>
</tr>
</tbody>
</table>
</div>

<sup>
- <b>Bold</b> denotes open-source SOTA.
</sup><br/><sup>
- "*" indicates that results in this column are presented in the format "reproduced_results (reported_results_if_any)".
</sup>

### Seed-OSS-36B-Instruct

<div align="center">
<table>
<thead>
<tr>
<th align="center">Benchmark</th>
<th align="center"><sup><a href="https://console.volcengine.com/ark/region:ark+cn-beijing/model/detail?Id=doubao-seed-1-6-thinking">Seed1.6-Thinking-0715</a></sup></th>
<th align="center"><sup>OAI-OSS-20B*</sup></th>
<th align="center"><sup>Qwen3-30B-A3B-Thinking-2507*</sup></th>
<th align="center"><sup>Qwen3-32B*</sup></th>
<th align="center"><sup>Gemma3-27B</sup></th>
<th align="center"><sup>Seed-OSS-36B-Instruct</sup></th>
</tr>
</thead>
<tbody>
<tr>
<td align="center" colspan=7><strong>Knowledge</strong></td>
</tr>
<tr>
<td align="center">MMLU-Pro</td>
<td align="center">86.6</td>
<td align="center">76.2</td>
<td align="center"><ins>81.9</ins> (80.9)</td>
<td align="center">81.8</td>
<td align="center">67.5</td>
<td align="center"><b>82.7</b></td>
</tr>
<tr>
<td align="center">MMLU</td>
<td align="center">90.6</td>
<td align="center">81.7 (85.3)</td>
<td align="center"><ins>86.9</ins></td>
<td align="center">86.2</td>
<td align="center">76.9</td>
<td align="center"><b>87.4</b></td>
</tr>
<tr>
<td align="center">GPQA-D</td>
<td align="center">80.7</td>
<td align="center"><b>72.2</b> (71.5)</td>
<td align="center"><ins>71.4</ins> (73.4)</td>
<td align="center">66.7 (68.4)</td>
<td align="center">42.4</td>
<td align="center"><ins>71.4</ins></td>
</tr>
<tr>
<td align="center">SuperGPQA</td>
<td align="center">63.4</td>
<td align="center">50.1</td>
<td align="center"><b>57.3</b> (56.8)</td>
<td align="center">49.3</td>
<td align="center">-</td>
<td align="center"><ins>55.7</ins></td>
</tr>
<tr>
<td align="center">SimpleQA</td>
<td align="center">23.7</td>
<td align="center">6.7</td>
<td align="center"><b>23.6</b></td>
<td align="center">8.6</td>
<td align="center"><ins>10</ins></td>
<td align="center">9.7</td>
</tr>

<tr>
<td align="center" colspan=7><strong>Math</strong></td>
</tr>
<tr>
<td align="center">AIME24</td>
<td align="center">90.3</td>
<td align="center"><b>92.7</b> (92.1)</td>
<td align="center">87.7</td>
<td align="center">82.7 (81.4)</td>
<td align="center">-</td>
<td align="center"><ins>91.7</ins></td>
</tr>
<tr>
<td align="center">AIME25</td>
<td align="center">86</td>
<td align="center"><b>90.3</b> (91.7)</td>
<td align="center">81.3 (85)</td>
<td align="center">73.3 (72.9)</td>
<td align="center">-</td>
<td align="center"><ins>84.7</ins></td>
</tr>
<tr>
<td align="center">BeyondAIME</td>
<td align="center">60</td>
<td align="center"><b>69</b></td>
<td align="center">56</td>
<td align="center">29</td>
<td align="center">-</td>
<td align="center"><ins>65</ins></td>
</tr>

<tr>
<td align="center" colspan=7><strong>Reasoning</strong></td>
</tr>
<tr>
<td align="center">ArcAGI V2</td>
<td align="center">50.3</td>
<td align="center"><b>41.7</b></td>
<td align="center">37.8</td>
<td align="center">14.4</td>
<td align="center">-</td>
<td align="center"><ins>40.6</ins></td>
</tr>
<tr>
<td align="center">KORBench</td>
<td align="center">74.8</td>
<td align="center"><b>72.3</b></td>
<td align="center">70.2</td>
<td align="center">65.4</td>
<td align="center">-</td>
<td align="center"><ins>70.6</ins></td>
</tr>

<tr>
<td align="center" colspan=7><strong>Coding</strong></td>
</tr>
<tr>
<td align="center">LiveCodeBench v6<br/><sup>(02/2025-05/2025)</sup></td>
<td align="center">66.8</td>
<td align="center"><ins>63.8</ins></td>
<td align="center">60.3 (66)</td>
<td align="center">53.4</td>
<td align="center">-</td>
<td align="center"><b>67.4</b></td>
</tr>
<tr>
<td align="center">HLE</td>
<td align="center">13.9</td>
<td align="center"><b>12.7</b> (10.9)</td>
<td align="center">8.7</td>
<td align="center">6.9</td>
<td align="center">-</td>
<td align="center"><ins>10.1</ins></td>
</tr>

<tr>
<td align="center" colspan=7><strong>Instruction Following</strong></td>
</tr>
<tr>
<td align="center">IFEval</td>
<td align="center">86.3</td>
<td align="center"><b>92.8</b></td>
<td align="center">88 (88.9)</td>
<td align="center">88.4 (85)</td>
<td align="center"><ins>90.4</ins></td>
<td align="center">85.8</td>
</tr>

<tr>
<td align="center" colspan=7><strong>Agent</strong></td>
</tr>
<tr>
<td align="center">TAU1-Retail</td>
<td align="center">63</td>
<td align="center">(54.8)</td>
<td align="center"><ins>58.7</ins> (67.8)</td>
<td align="center">40.9</td>
<td align="center">-</td>
<td align="center"><b>70.4</b></td>
</tr>
<tr>
<td align="center">TAU1-Airline</td>
<td align="center">49</td>
<td align="center">(38)</td>
<td align="center"><b>47</b> (48)</td>
<td align="center">38</td>
<td align="center">-</td>
<td align="center"><ins>46</ins></td>
</tr>
<tr>
<td align="center">SWE-Bench Verified<br/><sup>(OpenHands)</sup></td>
<td align="center">41.8</td>
<td align="center"><b>(60.7)</b></td>
<td align="center">31</td>
<td align="center">23.4</td>
<td align="center">-</td>
<td align="center"><ins>56</ins></td>
</tr>
<tr>
<td align="center">SWE-Bench Verified<br/><sup>(AgentLess 4*10)</sup></td>
<td align="center">48.4</td>
<td align="center">-</td>
<td align="center">33.5</td>
<td align="center"><ins>39.7</ins></td>
<td align="center">-</td>
<td align="center"><b>47</b></td>
</tr>
<tr>
<td align="center">Multi-SWE-Bench</td>
<td align="center">17.7</td>
<td align="center">-</td>
<td align="center"><ins>9.5</ins></td>
<td align="center">7.7</td>
<td align="center">-</td>
<td align="center"><b>17</b></td>
</tr>

<tr>
<td align="center" colspan=7><strong>Multilingualism</strong></td>
</tr>
<tr>
<td align="center">MMMLU</td>
<td align="center">84.3</td>
<td align="center">77.4 (75.7)</td>
<td align="center"><b>79</b></td>
<td align="center"><b>79</b> (80.6)</td>
<td align="center">-</td>
<td align="center"><ins>78.4</ins></td>
</tr>

<tr>
<td align="center" colspan=7><strong>Long Context</strong></td>
</tr>
<tr>
<td align="center">RULER<br/><sup>(128K)</sup></td>
<td align="center">94.5</td>
<td align="center">78.7</td>
<td align="center"><ins>94.5</ins></td>
<td align="center">77.5</td>
<td align="center">-</td>
<td align="center"><b>94.6</b></td>
</tr>

<tr>
<td align="center" colspan=7><strong>Safety</strong></td>
</tr>
<tr>
<td align="center">AIR-Bench</td>
<td align="center">-</td>
<td align="center">-</td>
<td align="center">-</td>
<td align="center">-</td>
<td align="center">-</td>
<td align="center">75.6</td>
</tr>
</tbody>
</table>
</div>

<sup>
- <b>Bold</b> denotes open-source SOTA. <ins>Underlined</ins> indicates second place among open-source models.
</sup><br/><sup>
- "*" indicates that results in this column are presented in the format "reproduced_results (reported_results_if_any)". Some results are omitted because the evaluation run failed.
</sup><br/><sup>
- The results of Gemma3-27B are sourced directly from its technical report.
</sup><br/><sup>
- Generation configs for Seed-OSS-36B-Instruct: temperature=1.1, top_p=0.95. For TAU-bench specifically, temperature=1, top_p=0.7.
</sup>

> [!NOTE]
> We recommend sampling with `temperature=1.1` and `top_p=0.95`.

### Thinking Budget

Users can flexibly specify the model's thinking budget. The figure below shows the performance curves across different tasks as the thinking budget varies. For simpler tasks (such as IFEval), the model's chain of thought (CoT) is shorter, and the score fluctuates as the thinking budget increases. For more challenging tasks (such as AIME and LiveCodeBench), the model's CoT is longer, and the score improves as the thinking budget increases.

![thinking_budget](./figures/thinking_budget.png)

Here is an example with the thinking budget set to 512: during the reasoning process, the model periodically triggers self-reflection to estimate the consumed and remaining budget, and delivers the final response once the budget is exhausted or the reasoning concludes.
```
<seed:think>
Got it, let's try to solve this problem step by step. The problem says ... ...
<seed:cot_budget_reflect>I have used 129 tokens, and there are 383 tokens remaining for use.</seed:cot_budget_reflect>
Using the power rule, ... ...
<seed:cot_budget_reflect>I have used 258 tokens, and there are 254 tokens remaining for use.</seed:cot_budget_reflect>
Alternatively, remember that ... ...
<seed:cot_budget_reflect>I have used 393 tokens, and there are 119 tokens remaining for use.</seed:cot_budget_reflect>
Because if ... ...
<seed:cot_budget_reflect>I have exhausted my token budget, and now I will start answering the question.</seed:cot_budget_reflect>
</seed:think>
To solve the problem, we start by using the properties of logarithms to simplify the given equations: (full answer omitted).
```

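Because the reflection markers are plain text, downstream code can recover the budget trace with a regex. A minimal sketch (the tag names and phrasing follow the transcript above; the helper name is ours, and the final "exhausted" reflection is intentionally not matched since it carries no counts):

```python
import re

def parse_budget_reflections(text):
    """Extract (used, remaining) token counts from <seed:cot_budget_reflect> tags."""
    pattern = re.compile(
        r"<seed:cot_budget_reflect>I have used (\d+) tokens?, "
        r"and there are (\d+) tokens? remaining for use\.</seed:cot_budget_reflect>"
    )
    return [(int(used), int(remaining)) for used, remaining in pattern.findall(text)]

sample = (
    "<seed:cot_budget_reflect>I have used 129 tokens, and there are 383 "
    "tokens remaining for use.</seed:cot_budget_reflect> ... "
    "<seed:cot_budget_reflect>I have used 258 tokens, and there are 254 "
    "tokens remaining for use.</seed:cot_budget_reflect>"
)
print(parse_budget_reflections(sample))  # [(129, 383), (258, 254)]
```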
If no thinking budget is set (the default mode), Seed-OSS will initiate thinking with unlimited length. If a thinking budget is specified, we advise prioritizing values that are integer multiples of 512 (e.g., 512, 1K, 2K, 4K, 8K, or 16K), since the model has been extensively trained on these intervals. The model is instructed to output a direct response when the thinking budget is 0, and we recommend setting any budget below 512 to this value.

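The guidance above can be encoded in a small client-side helper that snaps a requested budget to the recommended values (this function is our illustration, not part of the released API; rounding down to the nearest multiple of 512 is one reasonable choice):

```python
def normalize_thinking_budget(budget):
    """Snap a requested thinking budget to the recommended values.

    None or a negative value means unlimited thinking (-1); values below
    512 become 0 (direct response); everything else is rounded down to a
    multiple of 512, the intervals the model was extensively trained on.
    """
    if budget is None or budget < 0:
        return -1  # unlimited thinking (default mode)
    if budget < 512:
        return 0   # direct response, per the recommendation above
    return (budget // 512) * 512

print(normalize_thinking_budget(300))   # 0
print(normalize_thinking_budget(1000))  # 512
print(normalize_thinking_budget(4096))  # 4096
```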
## Quick Start
```shell
pip3 install -r requirements.txt
pip install git+ssh://[email protected]/Fazziekey/transformers.git@seed-oss
```

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name_or_path = "ByteDance-Seed/Seed-OSS-36B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto")  # You may want to use bfloat16 and/or move to GPU here
messages = [
    {"role": "user", "content": "How to make pasta?"},
]
tokenized_chat = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
    thinking_budget=512,  # control the thinking budget
)

outputs = model.generate(tokenized_chat.to(model.device), max_new_tokens=2048)

output_text = tokenizer.decode(outputs[0])
```

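Since the decoded text contains the full `<seed:think>…</seed:think>` trace, applications often want only the final answer. A minimal post-processing sketch (the tag name matches the transcript shown earlier; the splitting logic is ours):

```python
def extract_final_answer(output_text):
    """Return the text after the closing </seed:think> tag, if present."""
    marker = "</seed:think>"
    if marker in output_text:
        # Everything after the thinking trace is the final response.
        return output_text.split(marker, 1)[1].strip()
    return output_text.strip()  # no thinking trace: return the text as-is

demo = "<seed:think>Got it, let's work through this ...</seed:think>\nBoil salted water first."
print(extract_final_answer(demo))  # Boil salted water first.
```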
## Inference

### Download Model

Download the Seed-OSS checkpoint to `./Seed-OSS-36B-Instruct`.

### Transformers
The `generate.py` script provides a simple interface for model inference with configurable options.

#### Basic Usage
```shell
cd inference
python3 generate.py --model_path /path/to/model
```

#### Key Parameters
| Parameter | Description |
|-----------|-------------|
| `--model_path` | Path to the pretrained model directory (required) |
| `--prompts` | Input prompts (default: sample cooking/code questions) |
| `--max_new_tokens` | Maximum tokens to generate (default: 4096) |
| `--attn_implementation` | Attention mechanism: `flash_attention_2` (default) or `eager` |
| `--load_in_4bit/8bit` | Enable 4-bit/8-bit quantization (reduces memory usage) |
| `--thinking_budget` | Thinking budget in tokens (default: -1 for an unlimited budget) |

#### Quantization Examples
```shell
# 8-bit quantization
python3 generate.py --model_path /path/to/model --load_in_8bit True

# 4-bit quantization
python3 generate.py --model_path /path/to/model --load_in_4bit True
```

#### Custom Prompts
```shell
python3 generate.py --model_path /path/to/model --prompts "['What is machine learning?', 'Explain quantum computing']"
```

### vLLM
Use vLLM 0.10.0 or later for inference.

- First, install the vLLM build with Seed-OSS support:
```shell
VLLM_USE_PRECOMPILED=1 VLLM_TEST_USE_PRECOMPILED_NIGHTLY_WHEEL=1 pip install git+ssh://[email protected]/FoolPlayer/vllm.git@seed-oss
```

- Start the vLLM API server:
```shell
python3 -m vllm.entrypoints.openai.api_server \
    --host localhost \
    --port 4321 \
    --enable-auto-tool-choice \
    --tool-call-parser seed_oss \
    --trust-remote-code \
    --model ./Seed-OSS-36B-Instruct \
    --chat-template ./Seed-OSS-36B-Instruct/chat_template.jinja \
    --tensor-parallel-size 8 \
    --dtype bfloat16 \
    --served-model-name seed_oss
```

- Test with the OpenAI client:

Chat

```shell
python3 inference/vllm_chat.py
```

Tool Call
```shell
python3 inference/vllm_tool_call.py
```

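With the server running, any OpenAI-compatible client can talk to it. A minimal sketch of the request (the port 4321 and model name `seed_oss` mirror the server flags above; the sampling settings follow the earlier recommendation; adjust all of them to your deployment):

```python
import json

# Request body for the OpenAI-compatible endpoint started above.
# POST it to http://localhost:4321/v1/chat/completions, e.g.:
#   curl http://localhost:4321/v1/chat/completions \
#        -H "Content-Type: application/json" -d @payload.json
payload = {
    "model": "seed_oss",  # must match --served-model-name
    "messages": [{"role": "user", "content": "How to make pasta?"}],
    "temperature": 1.1,   # recommended sampling settings
    "top_p": 0.95,
    "max_tokens": 2048,
}

print(json.dumps(payload, indent=2))
```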
## Model Card
See [MODEL_CARD](./MODEL_CARD.md).

## License
This project is licensed under Apache-2.0. See the [LICENSE](./LICENSE) file for details.

## Citation

```bibtex
@misc{seed2025seed-oss,
  author={ByteDance Seed Team},
  title={Seed-OSS Open-Source Models},
  year={2025},
  howpublished={\url{https://github.com/ByteDance-Seed/seed-oss}}
}
```

## About [ByteDance Seed Team](https://seed.bytedance.com/)

Founded in 2023, the ByteDance Seed Team is dedicated to crafting the industry's most advanced AI foundation models. The team aspires to become a world-class research team and to make significant contributions to the advancement of science and society.