redmoe-ai-v1 committed
Commit e1cb4af · verified · 1 Parent(s): 7638595

Upload folder using huggingface_hub

README.md CHANGED
@@ -1,3 +1,1225 @@
1
- ---
2
- license: mit
3
- ---
1
+ ---
2
+ license: mit
3
+ library_name: dots_ocr
4
+ tags:
5
+ - ocr
6
+ language:
7
+ - en
8
+ - zh
9
+ - multilingual
10
+ ---
11
+
12
+ <div align="center">
13
+
14
+ <p align="center">
15
+ <img src="https://raw.githubusercontent.com/rednote-hilab/dots_ocr/main/assets/logo.png" width="300"/>
16
+ </p>
17
+
18
+ <h1 align="center">
19
+ dots.ocr: Multilingual Document Layout Parsing in a Single Vision-Language Model
20
+ </h1>
21
+
22
+ [![arXiv](https://img.shields.io/badge/Arxiv-dots.ocr-b31b1b.svg?logo=arXiv)]()
23
+ [![HuggingFace](https://img.shields.io/badge/HuggingFace%20Weights-black.svg?logo=HuggingFace)](https://huggingface.co/rednote-hilab/dots.ocr)
24
+
25
+
26
+ <div align="center">
27
+ <a href="https://dotsocr.xiaohongshu.com" target="_blank" rel="noopener noreferrer"><strong>🖥️ Live Demo</strong></a> |
28
+ <a href="https://raw.githubusercontent.com/rednote-hilab/dots_ocr/main/assets/wechat.jpg" target="_blank" rel="noopener noreferrer"><strong>💬 WeChat</strong></a> |
29
+ <a href="https://www.xiaohongshu.com/user/profile/683ffe42000000001d021a4c" target="_blank" rel="noopener noreferrer"><strong>📕 rednote</strong></a>
30
+ </div>
31
+
32
+ </div>
33
+
34
+
35
+
36
+ ## Introduction
37
+
38
+ **dots.ocr** is a powerful, multilingual document parser that unifies layout detection and content recognition within a single vision-language model while maintaining good reading order. Despite its compact 1.7B-parameter LLM foundation, it achieves state-of-the-art (SOTA) performance.
39
+
40
+ 1. **Powerful Performance:** **dots.ocr** achieves SOTA performance for text, tables, and reading order on [OmniDocBench](https://github.com/opendatalab/OmniDocBench), while delivering formula recognition results comparable to much larger models such as Doubao-1.5 and Gemini2.5-Pro.
41
+ 2. **Multilingual Support:** **dots.ocr** demonstrates robust parsing capabilities for low-resource languages, achieving decisive advantages across both layout detection and content recognition on our in-house multilingual documents benchmark.
42
+ 3. **Unified and Simple Architecture:** By leveraging a single vision-language model, **dots.ocr** offers a significantly more streamlined architecture than conventional methods that rely on complex, multi-model pipelines. Switching between tasks is accomplished simply by altering the input prompt, proving that a VLM can achieve competitive detection results compared to traditional detection models like DocLayout-YOLO.
43
+ 4. **Efficient and Fast Performance:** Built upon a compact 1.7B LLM, **dots.ocr** provides faster inference speeds than many other high-performing models based on larger foundations.
44
+
45
+
46
+ ### Performance Comparison: dots.ocr vs. Competing Models
47
+ <img src="assets/chart.png" border="0" />
48
+
49
+ > **Notes:**
50
+ > - The EN and ZH metrics are the end-to-end evaluation results on [OmniDocBench](https://github.com/opendatalab/OmniDocBench), and the Multilingual metric is the end-to-end evaluation result on dots.ocr-bench.
51
+
52
+
53
+ ## News
54
+ * ```2025.07.30``` 🚀 We release [dots.ocr](https://github.com/rednote-hilab/dots_ocr), a multilingual document parsing model built on a 1.7B LLM with SOTA performance.
55
+
56
+
57
+
58
+ ## Benchmark Results
59
+
60
+ ### 1. OmniDocBench
61
+
62
+ #### The end-to-end evaluation results of different tasks.
63
+
64
+ <table>
65
+ <thead>
66
+ <tr>
67
+ <th rowspan="2"><strong>Model<br>Type</strong></th>
68
+ <th rowspan="2"><strong>Methods</strong></th>
69
+ <th colspan="2"><strong>Overall<sup>Edit</sup>↓</strong></th>
70
+ <th colspan="2"><strong>Text<sup>Edit</sup>↓</strong></th>
71
+ <th colspan="2"><strong>Formula<sup>Edit</sup>↓</strong></th>
72
+ <th colspan="2"><strong>Table<sup>TEDS</sup>↑</strong></th>
73
+ <th colspan="2"><strong>Table<sup>Edit</sup>↓</strong></th>
74
+ <th colspan="2"><strong>Read Order<sup>Edit</sup>↓</strong></th>
75
+ </tr>
76
+ <tr>
77
+ <th><em>EN</em></th>
78
+ <th><em>ZH</em></th>
79
+ <th><em>EN</em></th>
80
+ <th><em>ZH</em></th>
81
+ <th><em>EN</em></th>
82
+ <th><em>ZH</em></th>
83
+ <th><em>EN</em></th>
84
+ <th><em>ZH</em></th>
85
+ <th><em>EN</em></th>
86
+ <th><em>ZH</em></th>
87
+ <th><em>EN</em></th>
88
+ <th><em>ZH</em></th>
89
+ </tr>
90
+ </thead>
91
+ <tbody>
92
+ <tr>
93
+ <td rowspan="8"><strong>Pipeline<br>Tools</strong></td>
94
+ <td>MinerU</td>
95
+ <td>0.150</td>
96
+ <td>0.357</td>
97
+ <td>0.061</td>
98
+ <td>0.215</td>
99
+ <td>0.278</td>
100
+ <td>0.577</td>
101
+ <td>78.6</td>
102
+ <td>62.1</td>
103
+ <td>0.180</td>
104
+ <td>0.344</td>
105
+ <td>0.079</td>
106
+ <td>0.292</td>
107
+ </tr>
108
+ <tr>
109
+ <td>Marker</td>
110
+ <td>0.336</td>
111
+ <td>0.556</td>
112
+ <td>0.080</td>
113
+ <td>0.315</td>
114
+ <td>0.530</td>
115
+ <td>0.883</td>
116
+ <td>67.6</td>
117
+ <td>49.2</td>
118
+ <td>0.619</td>
119
+ <td>0.685</td>
120
+ <td>0.114</td>
121
+ <td>0.340</td>
122
+ </tr>
123
+ <tr>
124
+ <td>Mathpix</td>
125
+ <td>0.191</td>
126
+ <td>0.365</td>
127
+ <td>0.105</td>
128
+ <td>0.384</td>
129
+ <td>0.306</td>
130
+ <td>0.454</td>
131
+ <td>77.0</td>
132
+ <td>67.1</td>
133
+ <td>0.243</td>
134
+ <td>0.320</td>
135
+ <td>0.108</td>
136
+ <td>0.304</td>
137
+ </tr>
138
+ <tr>
139
+ <td>Docling</td>
140
+ <td>0.589</td>
141
+ <td>0.909</td>
142
+ <td>0.416</td>
143
+ <td>0.987</td>
144
+ <td>0.999</td>
145
+ <td>1</td>
146
+ <td>61.3</td>
147
+ <td>25.0</td>
148
+ <td>0.627</td>
149
+ <td>0.810</td>
150
+ <td>0.313</td>
151
+ <td>0.837</td>
152
+ </tr>
153
+ <tr>
154
+ <td>Pix2Text</td>
155
+ <td>0.320</td>
156
+ <td>0.528</td>
157
+ <td>0.138</td>
158
+ <td>0.356</td>
159
+ <td>0.276</td>
160
+ <td>0.611</td>
161
+ <td>73.6</td>
162
+ <td>66.2</td>
163
+ <td>0.584</td>
164
+ <td>0.645</td>
165
+ <td>0.281</td>
166
+ <td>0.499</td>
167
+ </tr>
168
+ <tr>
169
+ <td>Unstructured</td>
170
+ <td>0.586</td>
171
+ <td>0.716</td>
172
+ <td>0.198</td>
173
+ <td>0.481</td>
174
+ <td>0.999</td>
175
+ <td>1</td>
176
+ <td>0</td>
177
+ <td>0.06</td>
178
+ <td>1</td>
179
+ <td>0.998</td>
180
+ <td>0.145</td>
181
+ <td>0.387</td>
182
+ </tr>
183
+ <tr>
184
+ <td>OpenParse</td>
185
+ <td>0.646</td>
186
+ <td>0.814</td>
187
+ <td>0.681</td>
188
+ <td>0.974</td>
189
+ <td>0.996</td>
190
+ <td>1</td>
191
+ <td>64.8</td>
192
+ <td>27.5</td>
193
+ <td>0.284</td>
194
+ <td>0.639</td>
195
+ <td>0.595</td>
196
+ <td>0.641</td>
197
+ </tr>
198
+ <tr>
199
+ <td>PPStruct-V3</td>
200
+ <td>0.145</td>
201
+ <td>0.206</td>
202
+ <td>0.058</td>
203
+ <td>0.088</td>
204
+ <td>0.295</td>
205
+ <td>0.535</td>
206
+ <td>-</td>
207
+ <td>-</td>
208
+ <td>0.159</td>
209
+ <td>0.109</td>
210
+ <td>0.069</td>
211
+ <td>0.091</td>
212
+ </tr>
213
+ <tr>
214
+ <td rowspan="9"><strong>Expert<br>VLMs</strong></td>
215
+ <td>GOT-OCR</td>
216
+ <td>0.287</td>
217
+ <td>0.411</td>
218
+ <td>0.189</td>
219
+ <td>0.315</td>
220
+ <td>0.360</td>
221
+ <td>0.528</td>
222
+ <td>53.2</td>
223
+ <td>47.2</td>
224
+ <td>0.459</td>
225
+ <td>0.520</td>
226
+ <td>0.141</td>
227
+ <td>0.280</td>
228
+ </tr>
229
+ <tr>
230
+ <td>Nougat</td>
231
+ <td>0.452</td>
232
+ <td>0.973</td>
233
+ <td>0.365</td>
234
+ <td>0.998</td>
235
+ <td>0.488</td>
236
+ <td>0.941</td>
237
+ <td>39.9</td>
238
+ <td>0</td>
239
+ <td>0.572</td>
240
+ <td>1.000</td>
241
+ <td>0.382</td>
242
+ <td>0.954</td>
243
+ </tr>
244
+ <tr>
245
+ <td>Mistral OCR</td>
246
+ <td>0.268</td>
247
+ <td>0.439</td>
248
+ <td>0.072</td>
249
+ <td>0.325</td>
250
+ <td>0.318</td>
251
+ <td>0.495</td>
252
+ <td>75.8</td>
253
+ <td>63.6</td>
254
+ <td>0.600</td>
255
+ <td>0.650</td>
256
+ <td>0.083</td>
257
+ <td>0.284</td>
258
+ </tr>
259
+ <tr>
260
+ <td>OLMOCR-sglang</td>
261
+ <td>0.326</td>
262
+ <td>0.469</td>
263
+ <td>0.097</td>
264
+ <td>0.293</td>
265
+ <td>0.455</td>
266
+ <td>0.655</td>
267
+ <td>68.1</td>
268
+ <td>61.3</td>
269
+ <td>0.608</td>
270
+ <td>0.652</td>
271
+ <td>0.145</td>
272
+ <td>0.277</td>
273
+ </tr>
274
+ <tr>
275
+ <td>SmolDocling-256M</td>
276
+ <td>0.493</td>
277
+ <td>0.816</td>
278
+ <td>0.262</td>
279
+ <td>0.838</td>
280
+ <td>0.753</td>
281
+ <td>0.997</td>
282
+ <td>44.9</td>
283
+ <td>16.5</td>
284
+ <td>0.729</td>
285
+ <td>0.907</td>
286
+ <td>0.227</td>
287
+ <td>0.522</td>
288
+ </tr>
289
+ <tr>
290
+ <td>Dolphin</td>
291
+ <td>0.206</td>
292
+ <td>0.306</td>
293
+ <td>0.107</td>
294
+ <td>0.197</td>
295
+ <td>0.447</td>
296
+ <td>0.580</td>
297
+ <td>77.3</td>
298
+ <td>67.2</td>
299
+ <td>0.180</td>
300
+ <td>0.285</td>
301
+ <td>0.091</td>
302
+ <td>0.162</td>
303
+ </tr>
304
+ <tr>
305
+ <td>MinerU 2</td>
306
+ <td>0.139</td>
307
+ <td>0.240</td>
308
+ <td>0.047</td>
309
+ <td>0.109</td>
310
+ <td>0.297</td>
311
+ <td>0.536</td>
312
+ <td>82.5</td>
313
+ <td>79.0</td>
314
+ <td>0.141</td>
315
+ <td>0.195</td>
316
+ <td>0.069</td>
317
+ <td>0.118</td>
318
+ </tr>
319
+ <tr>
320
+ <td>OCRFlux</td>
321
+ <td>0.195</td>
322
+ <td>0.281</td>
323
+ <td>0.064</td>
324
+ <td>0.183</td>
325
+ <td>0.379</td>
326
+ <td>0.613</td>
327
+ <td>71.6</td>
328
+ <td>81.3</td>
329
+ <td>0.253</td>
330
+ <td>0.139</td>
331
+ <td>0.086</td>
332
+ <td>0.187</td>
333
+ </tr>
334
+ <tr>
335
+ <td>MonkeyOCR-pro-3B</td>
336
+ <td>0.138</td>
337
+ <td>0.206</td>
338
+ <td>0.067</td>
339
+ <td>0.107</td>
340
+ <td><strong>0.246</strong></td>
341
+ <td>0.421</td>
342
+ <td>81.5</td>
343
+ <td>87.5</td>
344
+ <td>0.139</td>
345
+ <td>0.111</td>
346
+ <td>0.100</td>
347
+ <td>0.185</td>
348
+ </tr>
349
+ <tr>
350
+
351
+ <td rowspan="5"><strong>General<br>VLMs</strong></td>
352
+ <td>GPT4o</td>
353
+ <td>0.233</td>
354
+ <td>0.399</td>
355
+ <td>0.144</td>
356
+ <td>0.409</td>
357
+ <td>0.425</td>
358
+ <td>0.606</td>
359
+ <td>72.0</td>
360
+ <td>62.9</td>
361
+ <td>0.234</td>
362
+ <td>0.329</td>
363
+ <td>0.128</td>
364
+ <td>0.251</td>
365
+ </tr>
366
+ <tr>
367
+ <td>Qwen2-VL-72B</td>
368
+ <td>0.252</td>
369
+ <td>0.327</td>
370
+ <td>0.096</td>
371
+ <td>0.218</td>
372
+ <td>0.404</td>
373
+ <td>0.487</td>
374
+ <td>76.8</td>
375
+ <td>76.4</td>
376
+ <td>0.387</td>
377
+ <td>0.408</td>
378
+ <td>0.119</td>
379
+ <td>0.193</td>
380
+ </tr>
381
+ <tr>
382
+ <td>Qwen2.5-VL-72B</td>
383
+ <td>0.214</td>
384
+ <td>0.261</td>
385
+ <td>0.092</td>
386
+ <td>0.18</td>
387
+ <td>0.315</td>
388
+ <td>0.434</td>
389
+ <td>82.9</td>
390
+ <td>83.9</td>
391
+ <td>0.341</td>
392
+ <td>0.262</td>
393
+ <td>0.106</td>
394
+ <td>0.168</td>
395
+ </tr>
396
+ <tr>
397
+ <td>Gemini2.5-Pro</td>
398
+ <td>0.148</td>
399
+ <td>0.212</td>
400
+ <td>0.055</td>
401
+ <td>0.168</td>
402
+ <td>0.356</td>
403
+ <td>0.439</td>
404
+ <td>85.8</td>
405
+ <td>86.4</td>
406
+ <td>0.13</td>
407
+ <td>0.119</td>
408
+ <td>0.049</td>
409
+ <td>0.121</td>
410
+ </tr>
411
+ <tr>
412
+ <td>doubao-1-5-thinking-vision-pro-250428</td>
413
+ <td>0.140</td>
414
+ <td>0.162</td>
415
+ <td>0.043</td>
416
+ <td>0.085</td>
417
+ <td>0.295</td>
418
+ <td><strong>0.384</strong></td>
419
+ <td>83.3</td>
420
+ <td><strong>89.3</strong></td>
421
+ <td>0.165</td>
422
+ <td><strong>0.085</strong></td>
423
+ <td>0.058</td>
424
+ <td>0.094</td>
425
+ </tr>
426
+ <tr>
427
+ <td rowspan="1"><strong>Expert VLMs</strong></td>
428
+ <td><strong>dots.ocr</strong></td>
429
+ <td><strong>0.125</strong></td>
430
+ <td><strong>0.160</strong></td>
431
+ <td><strong>0.032</strong></td>
432
+ <td><strong>0.066</strong></td>
433
+ <td>0.329</td>
434
+ <td>0.416</td>
435
+ <td><strong>88.6</strong></td>
436
+ <td>89.0</td>
437
+ <td><strong>0.099</strong></td>
438
+ <td>0.092</td>
439
+ <td><strong>0.040</strong></td>
440
+ <td><strong>0.067</strong></td>
441
+ </tr>
442
+ <tr>
443
+ </tbody>
444
+ </table>
445
+
446
+
447
+ #### The end-to-end text recognition performance across 9 PDF page types.
448
+
449
+ <table>
450
+ <thead>
451
+ <tr>
452
+ <th><strong>Model<br>Type</strong></th>
453
+ <th><strong>Models</strong></th>
454
+ <th><strong>Book</strong></th>
455
+ <th><strong>Slides</strong></th>
456
+ <th><strong>Financial<br>Report</strong></th>
457
+ <th><strong>Textbook</strong></th>
458
+ <th><strong>Exam<br>Paper</strong></th>
459
+ <th><strong>Magazine</strong></th>
460
+ <th><strong>Academic<br>Papers</strong></th>
461
+ <th><strong>Notes</strong></th>
462
+ <th><strong>Newspaper</strong></th>
463
+ <th><strong>Overall</strong></th>
464
+ </tr>
465
+ </thead>
466
+ <tbody>
467
+ <tr>
468
+ <td rowspan="3"><strong>Pipeline<br>Tools</strong></td>
469
+ <td>MinerU</td>
470
+ <td>0.055</td>
471
+ <td>0.124</td>
472
+ <td><u>0.033</u></td>
473
+ <td>0.102</td>
474
+ <td>0.159</td>
475
+ <td><strong>0.072</strong></td>
476
+ <td><u>0.025</u></td>
477
+ <td>0.984</td>
478
+ <td>0.171</td>
479
+ <td>0.206</td>
480
+ </tr>
481
+ <tr>
482
+ <td>Marker</td>
483
+ <td>0.074</td>
484
+ <td>0.340</td>
485
+ <td>0.089</td>
486
+ <td>0.319</td>
487
+ <td>0.452</td>
488
+ <td>0.153</td>
489
+ <td>0.059</td>
490
+ <td>0.651</td>
491
+ <td>0.192</td>
492
+ <td>0.274</td>
493
+ </tr>
494
+ <tr>
495
+ <td>Mathpix</td>
496
+ <td>0.131</td>
497
+ <td>0.220</td>
498
+ <td>0.202</td>
499
+ <td>0.216</td>
500
+ <td>0.278</td>
501
+ <td>0.147</td>
502
+ <td>0.091</td>
503
+ <td>0.634</td>
504
+ <td>0.690</td>
505
+ <td>0.300</td>
506
+ </tr>
507
+ <tr>
508
+ <td rowspan="5"><strong>Expert<br>VLMs</strong></td>
509
+ <td>GOT-OCR</td>
510
+ <td>0.111</td>
511
+ <td>0.222</td>
512
+ <td>0.067</td>
513
+ <td>0.132</td>
514
+ <td>0.204</td>
515
+ <td>0.198</td>
516
+ <td>0.179</td>
517
+ <td>0.388</td>
518
+ <td>0.771</td>
519
+ <td>0.267</td>
520
+ </tr>
521
+ <tr>
522
+ <td>Nougat</td>
523
+ <td>0.734</td>
524
+ <td>0.958</td>
525
+ <td>1.000</td>
526
+ <td>0.820</td>
527
+ <td>0.930</td>
528
+ <td>0.830</td>
529
+ <td>0.214</td>
530
+ <td>0.991</td>
531
+ <td>0.871</td>
532
+ <td>0.806</td>
533
+ </tr>
534
+ <tr>
535
+ <td>Dolphin</td>
536
+ <td>0.091</td>
537
+ <td>0.131</td>
538
+ <td>0.057</td>
539
+ <td>0.146</td>
540
+ <td>0.231</td>
541
+ <td>0.121</td>
542
+ <td>0.074</td>
543
+ <td>0.363</td>
544
+ <td>0.307</td>
545
+ <td>0.177</td>
546
+ </tr>
547
+ <tr>
548
+ <td>OCRFlux</td>
549
+ <td>0.068</td>
550
+ <td>0.125</td>
551
+ <td>0.092</td>
552
+ <td>0.102</td>
553
+ <td>0.119</td>
554
+ <td>0.083</td>
555
+ <td>0.047</td>
556
+ <td>0.223</td>
557
+ <td>0.536</td>
558
+ <td>0.149</td>
559
+ </tr>
560
+ <tr>
561
+ <td>MonkeyOCR-pro-3B</td>
562
+ <td>0.084</td>
563
+ <td>0.129</td>
564
+ <td>0.060</td>
565
+ <td>0.090</td>
566
+ <td>0.107</td>
567
+ <td>0.073</td>
568
+ <td>0.050</td>
569
+ <td>0.171</td>
570
+ <td>0.107</td>
571
+ <td>0.100</td>
572
+ </tr>
573
+ <tr>
574
+ <td rowspan="4"><strong>General<br>VLMs</strong></td>
575
+ <td>GPT4o</td>
576
+ <td>0.157</td>
577
+ <td>0.163</td>
578
+ <td>0.348</td>
579
+ <td>0.187</td>
580
+ <td>0.281</td>
581
+ <td>0.173</td>
582
+ <td>0.146</td>
583
+ <td>0.607</td>
584
+ <td>0.751</td>
585
+ <td>0.316</td>
586
+ </tr>
587
+ <tr>
588
+ <td>Qwen2.5-VL-7B</td>
589
+ <td>0.148</td>
590
+ <td>0.053</td>
591
+ <td>0.111</td>
592
+ <td>0.137</td>
593
+ <td>0.189</td>
594
+ <td>0.117</td>
595
+ <td>0.134</td>
596
+ <td>0.204</td>
597
+ <td>0.706</td>
598
+ <td>0.205</td>
599
+ </tr>
600
+ <tr>
601
+ <td>InternVL3-8B</td>
602
+ <td>0.163</td>
603
+ <td>0.056</td>
604
+ <td>0.107</td>
605
+ <td>0.109</td>
606
+ <td>0.129</td>
607
+ <td>0.100</td>
608
+ <td>0.159</td>
609
+ <td>0.150</td>
610
+ <td>0.681</td>
611
+ <td>0.188</td>
612
+ </tr>
613
+ <tr>
614
+ <td>doubao-1-5-thinking-vision-pro-250428</td>
615
+ <td>0.048</td>
616
+ <td>0.048</td>
617
+ <td>0.024</td>
618
+ <td><strong>0.062</strong></td>
619
+ <td>0.085</td>
620
+ <td>0.051</td>
621
+ <td>0.039</td>
622
+ <td><strong>0.096</strong></td>
623
+ <td>0.181</td>
624
+ <td>0.073</td>
625
+ </tr>
626
+ <tr>
627
+ <td rowspan="1"><strong>Expert VLMs</strong></td>
628
+ <td><strong>dots.ocr</strong></td>
629
+ <td><strong>0.031</strong></td>
630
+ <td><strong>0.047</strong></td>
631
+ <td><strong>0.011</strong></td>
632
+ <td>0.082</td>
633
+ <td><strong>0.079</strong></td>
634
+ <td><strong>0.028</strong></td>
635
+ <td><strong>0.029</strong></td>
636
+ <td>0.109</td>
637
+ <td><strong>0.056</strong></td>
638
+ <td><strong>0.055</strong></td>
639
+ </tr>
640
+
641
+ </tbody>
642
+ </table>
643
+
644
+ > **Notes:**
645
+ > - The metrics are from [MonkeyOCR](https://github.com/Yuliang-Liu/MonkeyOCR), [OmniDocBench](https://github.com/opendatalab/OmniDocBench), and our own internal evaluations.
646
+ > - We delete the Page-header and Page-footer cells in the result markdown.
647
+ > - We use the tikz_preprocess pipeline to upsample the images to 200 DPI.
648
+
649
+
650
+ ### 2. **dots.ocr-bench**
651
+
652
+ This is an in-house benchmark containing 1,493 PDF images covering 100 languages.
653
+
654
+ #### The end-to-end evaluation results of different tasks.
655
+
656
+ <table>
657
+ <thead>
658
+ <tr>
659
+ <th rowspan="1"><strong>Methods</strong></th>
660
+ <th colspan="1"><strong>Overall<sup>Edit</sup>↓</strong></th>
661
+ <th colspan="1"><strong>Text<sup>Edit</sup>↓</strong></th>
662
+ <th colspan="1"><strong>Formula<sup>Edit</sup>↓</strong></th>
663
+ <th colspan="1"><strong>Table<sup>TEDS</sup>↑</strong></th>
664
+ <th colspan="1"><strong>Table<sup>Edit</sup>↓</strong></th>
665
+ <th colspan="1"><strong>Read Order<sup>Edit</sup>↓</strong></th>
666
+ </tr>
667
+ </thead>
668
+ <tbody>
669
+ <td>MonkeyOCR-3B</td>
670
+ <td>0.483</td>
671
+ <td>0.445</td>
672
+ <td>0.627</td>
673
+ <td>50.93</td>
674
+ <td>0.452</td>
675
+ <td>0.409</td>
676
+ </tr>
677
+ <tr>
678
+ <td>doubao-1-5-thinking-vision-pro-250428</td>
679
+ <td>0.291</td>
680
+ <td>0.226</td>
681
+ <td>0.440</td>
682
+ <td>71.2</td>
683
+ <td>0.260</td>
684
+ <td>0.238</td>
685
+ </tr>
686
+ <tr>
687
+ <td>doubao-1-6</td>
688
+ <td>0.299</td>
689
+ <td>0.270</td>
690
+ <td>0.417</td>
691
+ <td>71.0</td>
692
+ <td>0.258</td>
693
+ <td>0.253</td>
694
+ </tr>
695
+ <tr>
696
+ <td>Gemini2.5-Pro</td>
697
+ <td>0.251</td>
698
+ <td>0.163</td>
699
+ <td>0.402</td>
700
+ <td>77.1</td>
701
+ <td>0.236</td>
702
+ <td>0.202</td>
703
+ </tr>
704
+ <tr>
705
+ <td><strong>dots.ocr</strong> </td>
706
+ <td><strong>0.177</strong></td>
707
+ <td><strong>0.075</strong></td>
708
+ <td><strong>0.297</strong></td>
709
+ <td><strong>79.2</strong></td>
710
+ <td><strong>0.186</strong></td>
711
+ <td><strong>0.152</strong></td>
712
+ </tr>
713
+
714
+ </tbody>
715
+ </table>
716
+
717
+ > **Notes:**
718
+ > - We use the same metric calculation pipeline as [OmniDocBench](https://github.com/opendatalab/OmniDocBench).
719
+ > - We delete the Page-header and Page-footer cells in the result markdown.
720
+
721
+ #### Layout Detection
722
+
723
+ <table>
724
+ <thead>
725
+ <tr>
726
+ <th rowspan="2"><strong>Method</strong></th>
727
+ <th colspan="5" style="text-align: center;"><strong>F1@IoU=.50:.05:.95↑</strong></th>
728
+ <th colspan="5" style="text-align: center;"><strong>F1@IoU=.50↑</strong></th>
729
+ </tr>
730
+ <tr>
731
+ <th>Overall</th>
732
+ <th>Text</th>
733
+ <th>Formula</th>
734
+ <th>Table</th>
735
+ <th>Picture</th>
736
+ <th>Overall</th>
737
+ <th>Text</th>
738
+ <th>Formula</th>
739
+ <th>Table</th>
740
+ <th>Picture</th>
741
+ </tr>
742
+ </thead>
743
+
744
+ <tbody>
745
+ <td>DocLayout-YOLO-DocStructBench</td>
746
+ <td>0.733</td>
747
+ <td>0.694</td>
748
+ <td>0.480</td>
749
+ <td>0.803</td>
750
+ <td>0.619</td>
751
+ <td>0.806</td>
752
+ <td>0.779</td>
753
+ <td>0.620</td>
754
+ <td>0.858</td>
755
+ <td>0.678</td>
756
+ </tr>
757
+
758
+ <tr>
759
+ <td>dots.ocr-parse all</td>
760
+ <td>0.831</td>
761
+ <td>0.801</td>
762
+ <td>0.654</td>
763
+ <td>0.838</td>
764
+ <td>0.748</td>
765
+ <td>0.922</td>
766
+ <td>0.909</td>
767
+ <td>0.770</td>
768
+ <td>0.888</td>
769
+ <td>0.831</td>
770
+ </tr>
771
+
772
+ <tr>
773
+ <td> <strong>dots.ocr-detection only</strong> </td>
774
+ <td><strong>0.845</strong></td>
775
+ <td><strong>0.816</strong></td>
776
+ <td><strong>0.716</strong></td>
777
+ <td><strong>0.875</strong></td>
778
+ <td><strong>0.765</strong></td>
779
+ <td><strong>0.930</strong></td>
780
+ <td><strong>0.917</strong></td>
781
+ <td><strong>0.832</strong></td>
782
+ <td><strong>0.918</strong></td>
783
+ <td><strong>0.843</strong></td>
784
+ </tr>
785
+
786
+ </tbody>
787
+ </table>
788
+
789
+ > **Notes:**
790
+ > - We use prompt_layout_all_en for **parse all** and prompt_layout_only_en for **detection only**; please refer to [prompts](https://github.com/rednote-hilab/dots_ocr/blob/main/dots_ocr/utils/prompts.py) for details.
791
+
792
+
793
+ ### 3. olmOCR-bench
794
+
795
+ <table>
796
+ <thead>
797
+ <tr>
798
+ <th>Model</th>
799
+ <th>ArXiv</th>
800
+ <th>Old Scans<br>Math</th>
801
+ <th>Tables</th>
802
+ <th>Old Scans</th>
803
+ <th>Headers and<br>Footers</th>
804
+ <th>Multi<br>column</th>
805
+ <th>Long Tiny<br>Text</th>
806
+ <th>Base</th>
807
+ <th>Overall</th>
808
+ </tr>
809
+ </thead>
810
+ <tbody>
811
+ <tr>
812
+ <td>GOT OCR</td>
813
+ <td>52.7</td>
814
+ <td>52.0</td>
815
+ <td>0.2</td>
816
+ <td>22.1</td>
817
+ <td>93.6</td>
818
+ <td>42.0</td>
819
+ <td>29.9</td>
820
+ <td>94.0</td>
821
+ <td>48.3 ± 1.1</td>
822
+ </tr>
823
+ <tr>
824
+ <td>Marker</td>
825
+ <td>76.0</td>
826
+ <td>57.9</td>
827
+ <td>57.6</td>
828
+ <td>27.8</td>
829
+ <td>84.9</td>
830
+ <td>72.9</td>
831
+ <td>84.6</td>
832
+ <td>99.1</td>
833
+ <td>70.1 ± 1.1</td>
834
+ </tr>
835
+ <tr>
836
+ <td>MinerU</td>
837
+ <td>75.4</td>
838
+ <td>47.4</td>
839
+ <td>60.9</td>
840
+ <td>17.3</td>
841
+ <td><strong>96.6</strong></td>
842
+ <td>59.0</td>
843
+ <td>39.1</td>
844
+ <td>96.6</td>
845
+ <td>61.5 ± 1.1</td>
846
+ </tr>
847
+ <tr>
848
+ <td>Mistral OCR</td>
849
+ <td>77.2</td>
850
+ <td>67.5</td>
851
+ <td>60.6</td>
852
+ <td>29.3</td>
853
+ <td>93.6</td>
854
+ <td>71.3</td>
855
+ <td>77.1</td>
856
+ <td>99.4</td>
857
+ <td>72.0 ± 1.1</td>
858
+ </tr>
859
+ <tr>
860
+ <td>Nanonets OCR</td>
861
+ <td>67.0</td>
862
+ <td>68.6</td>
863
+ <td><strong>77.7</strong></td>
864
+ <td>39.5</td>
865
+ <td>40.7</td>
866
+ <td>69.9</td>
867
+ <td>53.4</td>
868
+ <td>99.3</td>
869
+ <td>64.5 ± 1.1</td>
870
+ </tr>
871
+ <tr>
872
+ <td>GPT-4o<br>(No Anchor)</td>
873
+ <td>51.5</td>
874
+ <td><strong>75.5</strong></td>
875
+ <td>69.1</td>
876
+ <td>40.9</td>
877
+ <td>94.2</td>
878
+ <td>68.9</td>
879
+ <td>54.1</td>
880
+ <td>96.7</td>
881
+ <td>68.9 ± 1.1</td>
882
+ </tr>
883
+ <tr>
884
+ <td>GPT-4o<br>(Anchored)</td>
885
+ <td>53.5</td>
886
+ <td>74.5</td>
887
+ <td>70.0</td>
888
+ <td>40.7</td>
889
+ <td>93.8</td>
890
+ <td>69.3</td>
891
+ <td>60.6</td>
892
+ <td>96.8</td>
893
+ <td>69.9 ± 1.1</td>
894
+ </tr>
895
+ <tr>
896
+ <td>Gemini Flash 2<br>(No Anchor)</td>
897
+ <td>32.1</td>
898
+ <td>56.3</td>
899
+ <td>61.4</td>
900
+ <td>27.8</td>
901
+ <td>48.0</td>
902
+ <td>58.7</td>
903
+ <td><strong>84.4</strong></td>
904
+ <td>94.0</td>
905
+ <td>57.8 ± 1.1</td>
906
+ </tr>
907
+ <tr>
908
+ <td>Gemini Flash 2<br>(Anchored)</td>
909
+ <td>54.5</td>
910
+ <td>56.1</td>
911
+ <td>72.1</td>
912
+ <td>34.2</td>
913
+ <td>64.7</td>
914
+ <td>61.5</td>
915
+ <td>71.5</td>
916
+ <td>95.6</td>
917
+ <td>63.8 ± 1.2</td>
918
+ </tr>
919
+ <tr>
920
+ <td>Qwen 2 VL<br>(No Anchor)</td>
921
+ <td>19.7</td>
922
+ <td>31.7</td>
923
+ <td>24.2</td>
924
+ <td>17.1</td>
925
+ <td>88.9</td>
926
+ <td>8.3</td>
927
+ <td>6.8</td>
928
+ <td>55.5</td>
929
+ <td>31.5 ± 0.9</td>
930
+ </tr>
931
+ <tr>
932
+ <td>Qwen 2.5 VL<br>(No Anchor)</td>
933
+ <td>63.1</td>
934
+ <td>65.7</td>
935
+ <td>67.3</td>
936
+ <td>38.6</td>
937
+ <td>73.6</td>
938
+ <td>68.3</td>
939
+ <td>49.1</td>
940
+ <td>98.3</td>
941
+ <td>65.5 ± 1.2</td>
942
+ </tr>
943
+ <tr>
944
+ <td>olmOCR v0.1.75<br>(No Anchor)</td>
945
+ <td>71.5</td>
946
+ <td>71.4</td>
947
+ <td>71.4</td>
948
+ <td><strong>42.8</strong></td>
949
+ <td>94.1</td>
950
+ <td>77.7</td>
951
+ <td>71.0</td>
952
+ <td>97.8</td>
953
+ <td>74.7 ± 1.1</td>
954
+ </tr>
955
+ <tr>
956
+ <td>olmOCR v0.1.75<br>(Anchored)</td>
957
+ <td>74.9</td>
958
+ <td>71.2</td>
959
+ <td>71.0</td>
960
+ <td>42.2</td>
961
+ <td>94.5</td>
962
+ <td>78.3</td>
963
+ <td>73.3</td>
964
+ <td>98.3</td>
965
+ <td>75.5 ± 1.0</td>
966
+ </tr>
967
+ <tr>
968
+ <td>MonkeyOCR-pro-3B <a href="http://vlrlabmonkey.xyz:7685/">[Demo]</a></td>
969
+ <td><strong>83.8</strong></td>
970
+ <td>68.8</td>
971
+ <td>74.6</td>
972
+ <td>36.1</td>
973
+ <td>91.2</td>
974
+ <td>76.6</td>
975
+ <td>80.1</td>
976
+ <td>95.3</td>
977
+ <td>75.8 ± 1.0</td>
978
+ </tr>
979
+ <tr>
980
+ <td><strong>dots.ocr</strong></td>
981
+ <td>82.1</td>
982
+ <td>64.2</td>
983
+ <td><strong>88.3</strong></td>
984
+ <td>40.9</td>
985
+ <td>94.1</td>
986
+ <td><strong>82.4</strong></td>
987
+ <td>81.2</td>
988
+ <td><strong>99.5</strong></td>
989
+ <td><strong>79.1 ± 1.0</strong></td>
990
+ </tr>
991
+ </tbody>
992
+ </table>
993
+
994
+
995
+ > **Note:**
996
+ > - The metrics are from [MonkeyOCR](https://github.com/Yuliang-Liu/MonkeyOCR),
997
+ [olmocr](https://github.com/allenai/olmocr), and our own internal evaluations.
998
+ > - We delete the Page-header and Page-footer cells in the result markdown.
999
+
1000
+
1001
+
1002
+ # Quick Start
1003
+ ## 1. Installation
1004
+ ### Install dots.ocr
1005
+ ```shell
1006
+ conda create -n dots_ocr python=3.12
1007
+ conda activate dots_ocr
1008
+
1009
+ git clone https://github.com/rednote-hilab/dots.ocr.git
1010
+ cd dots.ocr
1011
+
1012
+ # Install pytorch, see https://pytorch.org/get-started/previous-versions/ for your cuda version
1013
+ pip install torch==2.7.0 torchvision==0.22.0 torchaudio==2.7.0 --index-url https://download.pytorch.org/whl/cu128
1014
+ pip install -e .
1015
+ ```
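+
+ After installation, a quick sanity check helps catch CUDA or attention-backend problems early. This is a minimal sketch; the `flash_attn` check is only relevant because the Hugging Face example below uses `attn_implementation="flash_attention_2"`:
+
+ ```python
+ import torch
+
+ print(torch.__version__)          # expect 2.7.0 with the command above
+ print(torch.cuda.is_available())  # should be True for GPU inference
+
+ try:
+     import flash_attn  # needed for attn_implementation="flash_attention_2"
+     print("flash-attn:", flash_attn.__version__)
+ except ImportError:
+     print("flash-attn not installed; use attn_implementation='sdpa' instead")
+ ```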
1016
+
1017
+ If you have trouble with the installation, try our [Docker Image](https://hub.docker.com/r/rednotehilab/dots.ocr) for an easier setup, and follow these steps:
1018
+ ```shell
1019
+ git clone https://github.com/rednote-hilab/dots.ocr.git
1020
+ cd dots.ocr
1021
+ pip install -e .
1022
+ ```
1023
+
1024
+
1025
+ ### Download Model Weights
1026
+ > 💡**Note:** Please use a directory name without periods (e.g., `DotsOCR` instead of `dots.ocr`) for the model save path. This is a temporary workaround pending our integration with Transformers.
1027
+ ```shell
1028
+ python tools/download_model.py
1029
+ ```
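+
+ Alternatively, the weights can be fetched directly with `huggingface_hub`. A minimal sketch; the `./weights/DotsOCR` target simply follows the no-periods note above:
+
+ ```python
+ from huggingface_hub import snapshot_download
+
+ # Save to a directory whose name contains no periods (see the note above).
+ snapshot_download(repo_id="rednote-hilab/dots.ocr", local_dir="./weights/DotsOCR")
+ ```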
1030
+
1031
+
1032
+ ## 2. Deployment
1033
+ ### vLLM inference
1034
+ We highly recommend using vLLM for deployment and inference. All of our evaluation results are based on vLLM version 0.9.1.
1035
+ The [Docker Image](https://hub.docker.com/r/rednotehilab/dots.ocr) is based on the official vllm image. You can also follow [Dockerfile](https://github.com/rednote-hilab/dots_ocr/blob/main/docker/Dockerfile) to build the deployment environment by yourself.
1036
+
1037
+ ```shell
1038
+ # You need to register the model with vLLM first
1039
+ hf_model_path=./weights/DotsOCR # Path to your downloaded model weights
1040
+ export PYTHONPATH=$(dirname "$hf_model_path"):$PYTHONPATH
1041
+ sed -i '/^from vllm\.entrypoints\.cli\.main import main$/a\
1042
+ from DotsOCR import modeling_dots_ocr_vllm' `which vllm`
1043
+
1044
+ # launch vllm server
1045
+ CUDA_VISIBLE_DEVICES=0 vllm serve ${hf_model_path} --tensor-parallel-size 1 --gpu-memory-utilization 0.95 --chat-template-content-format string --served-model-name model --trust-remote-code
1046
+
1047
+ # vllm api demo
1048
+ python3 ./demo/demo_vllm.py --prompt_mode prompt_layout_all_en
1049
+ ```
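+
+ Once the server is up, it can also be queried through vLLM's OpenAI-compatible API instead of `demo/demo_vllm.py`. A minimal sketch, assuming the default port 8000, the `openai` Python client, and a base64-encoded image payload:
+
+ ```python
+ import base64
+ from openai import OpenAI
+
+ client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
+
+ with open("demo/demo_image1.jpg", "rb") as f:
+     image_b64 = base64.b64encode(f.read()).decode()
+
+ response = client.chat.completions.create(
+     model="model",  # matches --served-model-name above
+     messages=[{
+         "role": "user",
+         "content": [
+             {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
+             {"type": "text", "text": "Extract the text content from this image."},
+         ],
+     }],
+ )
+ print(response.choices[0].message.content)
+ ```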
1050
+
1051
+ ### Hugging Face inference
1052
+ ```shell
1053
+ python3 demo/demo_hf.py
1054
+ ```
1055
+
1056
+ <details>
1057
+ <summary><b>Hugging Face inference details</b></summary>
1058
+
1059
+ ```python
1060
+ import torch
1061
+ from transformers import AutoModelForCausalLM, AutoProcessor, AutoTokenizer
1062
+ from qwen_vl_utils import process_vision_info
1063
+ from dots_ocr.utils import dict_promptmode_to_prompt
1064
+
1065
+ model_path = "./weights/DotsOCR"
1066
+ model = AutoModelForCausalLM.from_pretrained(
1067
+ model_path,
1068
+ attn_implementation="flash_attention_2",
1069
+ torch_dtype=torch.bfloat16,
1070
+ device_map="auto",
1071
+ trust_remote_code=True
1072
+ )
1073
+ processor = AutoProcessor.from_pretrained(model_path, trust_remote_code=True)
1074
+
1075
+ image_path = "demo/demo_image1.jpg"
1076
+ prompt = """Please output the layout information from the PDF image, including each layout element's bbox, its category, and the corresponding text content within the bbox.
1077
+
1078
+ 1. Bbox format: [x1, y1, x2, y2]
1079
+
1080
+ 2. Layout Categories: The possible categories are ['Caption', 'Footnote', 'Formula', 'List-item', 'Page-footer', 'Page-header', 'Picture', 'Section-header', 'Table', 'Text', 'Title'].
1081
+
1082
+ 3. Text Extraction & Formatting Rules:
1083
+ - Picture: For the 'Picture' category, the text field should be omitted.
1084
+ - Formula: Format its text as LaTeX.
1085
+ - Table: Format its text as HTML.
1086
+ - All Others (Text, Title, etc.): Format their text as Markdown.
1087
+
1088
+ 4. Constraints:
1089
+ - The output text must be the original text from the image, with no translation.
1090
+ - All layout elements must be sorted according to human reading order.
1091
+
1092
+ 5. Final Output: The entire output must be a single JSON object.
1093
+ """
1094
+
1095
+ messages = [
1096
+ {
1097
+ "role": "user",
1098
+ "content": [
1099
+ {
1100
+ "type": "image",
1101
+ "image": image_path
1102
+ },
1103
+ {"type": "text", "text": prompt}
1104
+ ]
1105
+ }
1106
+ ]
1107
+
1108
+ # Preparation for inference
1109
+ text = processor.apply_chat_template(
1110
+ messages,
1111
+ tokenize=False,
1112
+ add_generation_prompt=True
1113
+ )
1114
+ image_inputs, video_inputs = process_vision_info(messages)
1115
+ inputs = processor(
1116
+ text=[text],
1117
+ images=image_inputs,
1118
+ videos=video_inputs,
1119
+ padding=True,
1120
+ return_tensors="pt",
1121
+ )
1122
+
1123
+ inputs = inputs.to("cuda")
1124
+
1125
+ # Inference: Generation of the output
1126
+ generated_ids = model.generate(**inputs, max_new_tokens=24000)
1127
+ generated_ids_trimmed = [
1128
+ out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
1129
+ ]
1130
+ output_text = processor.batch_decode(
1131
+ generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
1132
+ )
1133
+ print(output_text)
1134
+
1135
+ ```
1136
+
1137
+ </details>
1138
+
1139
+ ## 3. Document Parse
1140
+ **Based on the vLLM server**, you can parse an image or a PDF file with the following commands:
1141
+ ```bash
1142
+
1143
+ # Parse all layout info, both detection and recognition
1144
+ # Parse a single image
1145
+ python3 dots_ocr/parser.py demo/demo_image1.jpg
1146
+ # Parse a single PDF
1147
+ python3 dots_ocr/parser.py demo/demo_pdf1.pdf --num_threads 64 # try bigger num_threads for pdf with a large number of pages
1148
+
1149
+ # Layout detection only
1150
+ python3 dots_ocr/parser.py demo/demo_image1.jpg --prompt prompt_layout_only_en
1151
+
1152
+ # Parse text only, except Page-header and Page-footer
1153
+ python3 dots_ocr/parser.py demo/demo_image1.jpg --prompt prompt_ocr
1154
+
1155
+ # Parse layout info by bbox
1156
+ python3 dots_ocr/parser.py demo/demo_image1.jpg --prompt prompt_grounding_ocr --bbox 163 241 1536 705
1157
+
1158
+ ```
1159
+
1160
+ <details>
1161
+ <summary><b>Output Results</b></summary>
1162
+
1163
+ 1. **Structured Layout Data** (`demo_image1.json`): A JSON file containing the detected layout elements, including their bounding boxes, categories, and extracted text (see the post-processing sketch below).
1164
+ 2. **Processed Markdown File** (`demo_image1.md`): A Markdown file generated from the concatenated text of all detected cells.
1165
+ * An additional version, `demo_image1_nohf.md`, is also provided, which excludes page headers and footers for compatibility with benchmarks like Omnidocbench and olmOCR-bench.
1166
+ 3. **Layout Visualization** (`demo_image1.jpg`): The original image with the detected layout bounding boxes drawn on it.
1167
+
1168
+ </details>
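+
+ The structured JSON is easy to post-process. A minimal sketch; the `bbox`, `category`, and `text` field names follow the prompt specification above, and the output path is an assumption:
+
+ ```python
+ import json
+
+ with open("demo_image1.json") as f:  # adjust to your actual output path
+     cells = json.load(f)
+
+ # Print body text in reading order, skipping headers/footers as the *_nohf.md output does.
+ for cell in cells:
+     if cell.get("category") in ("Page-header", "Page-footer"):
+         continue
+     x1, y1, x2, y2 = cell["bbox"]
+     print(f"[{cell['category']}] ({x1}, {y1}, {x2}, {y2}): {cell.get('text', '')[:80]}")
+ ```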
1169
+
1170
+ ## 4. Demo
1171
+ You can run the demo with the following command, or try it directly at the [live demo](https://dotsocr.xiaohongshu.com/):
1172
+ ```bash
1173
+ python demo/demo_gradio.py
1174
+ ```
1175
+
1176
+ We also provide a demo for grounding OCR:
1177
+ ```bash
1178
+ python demo/demo_gradio_annotion.py
1179
+ ```
1180
+
1181
+
1182
+ ### Example for formula document
1183
+ <img src="assets/showcase/formula1.png" alt="formula1.png" border="0" />
1184
+ <img src="assets/showcase/formula2.png" alt="formula2.png" border="0" />
1185
+ <img src="assets/showcase/formula3.png" alt="formula3.png" border="0" />
1186
+
1187
+ ### Example for table document
1188
+ <img src="assets/showcase/table1.png" alt="table1.png" border="0" />
1189
+ <img src="assets/showcase/table2.png" alt="table2.png" border="0" />
1190
+ <img src="assets/showcase/table3.png" alt="table3.png" border="0" />
1191
+
1192
+ ### Example for multilingual document
1193
+ <img src="assets/showcase/Tibetan.png" alt="Tibetan.png" border="0" />
1194
+ <img src="assets/showcase/tradition_zh.png" alt="tradition_zh.png" border="0" />
1195
+ <img src="assets/showcase/nl.png" alt="nl.png" border="0" />
1196
+ <img src="assets/showcase/kannada.png" alt="kannada.png" border="0" />
1197
+ <img src="assets/showcase/russian.png" alt="russian.png" border="0" />
1198
+
1199
+ ### Example for reading order
1200
+ <img src="assets/showcase/reading_order.png" alt="reading_order.png" border="0" />
1201
+
1202
+ ### Example for grounding ocr
1203
+ <img src="assets/showcase/grounding.png" alt="grounding.png" border="0" />
1204
+
1205
+
1206
+ ## Acknowledgments
1207
+ We would like to thank [Qwen2.5-VL](https://github.com/QwenLM/Qwen2.5-VL), [aimv2](https://github.com/apple/ml-aim), [MonkeyOCR](https://github.com/Yuliang-Liu/MonkeyOCR),
1208
+ [OmniDocBench](https://github.com/opendatalab/OmniDocBench), and [PyMuPDF](https://github.com/pymupdf/PyMuPDF) for providing code and models.
1209
+
1210
+ We also thank [DocLayNet](https://github.com/DS4SD/DocLayNet), [M6Doc](https://github.com/HCIILAB/M6Doc), [CDLA](https://github.com/buptlihang/CDLA), [D4LA](https://github.com/AlibabaResearch/AdvancedLiterateMachinery) for providing valuable datasets.
1211
+
1212
+ ## Limitation & Future Work
1213
+
1214
+ - **Complex Document Elements:**
1215
+ - **Table & Formula**: dots.ocr is not yet perfect at parsing high-complexity tables and formulas.
1216
+ - **Picture**: Pictures in documents are currently not parsed.
1217
+
1218
+ - **Parsing Failures:** The model may fail to parse under certain conditions:
1219
+ - When the character-to-pixel ratio is excessively high, try enlarging the image or increasing the PDF parsing DPI (a setting of 200 is recommended). Note, however, that the model performs optimally on images with a resolution under 11289600 pixels (see the resizing sketch after this list).
1220
+ - Continuous special characters, such as ellipses (`...`) and underscores (`_`), may cause the prediction output to repeat endlessly. In such scenarios, consider using alternative prompts like `prompt_layout_only_en`, `prompt_ocr`, or `prompt_grounding_ocr` ([details here](https://github.com/rednote-hilab/dots_ocr/blob/main/dots_ocr/utils/prompts.py)).
1221
+
1222
+ - **Performance Bottleneck:** Despite its 1.7B parameter LLM foundation, **dots.ocr** is not yet optimized for high-throughput processing of large PDF volumes.
1223
+
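+ As referenced above, here is a minimal resizing sketch for the high character-to-pixel-ratio case; the use of Pillow and the default scale factor are assumptions, while the 11289600-pixel budget comes from the note above:
+
+ ```python
+ from PIL import Image
+
+ MAX_PIXELS = 11_289_600  # the model performs optimally below this resolution
+
+ def upscale_for_parsing(path: str, scale: float = 2.0) -> Image.Image:
+     """Enlarge a low-DPI page image without exceeding the pixel budget."""
+     img = Image.open(path)
+     w, h = img.size
+     budget_scale = (MAX_PIXELS / (w * h)) ** 0.5
+     s = min(scale, budget_scale)
+     if s > 1.0:
+         img = img.resize((int(w * s), int(h * s)), Image.LANCZOS)
+     return img
+ ```
+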
1224
+ We are committed to achieving more accurate table and formula parsing, as well as enhancing the model's OCR capabilities for broader generalization, all while aiming for **a more powerful, more efficient model**. Furthermore, we are actively considering the development of **a more general-purpose perception model** based on Vision-Language Models (VLMs), which would integrate general detection, image captioning, and OCR tasks into a unified framework. **Parsing the content of the pictures in the documents** is also a key priority for our future work.
1225
+ We believe that collaboration is the key to tackling these exciting challenges. If you are passionate about advancing the frontiers of document intelligence and are interested in contributing to these future endeavors, we would love to hear from you. Please reach out to us via email at: [[email protected]].
chat_template.json ADDED
@@ -0,0 +1,3 @@
1
+ {
2
+ "chat_template": "{% set image_count = namespace(value=0) %}{% set video_count = namespace(value=0) %}{%- for m in messages %}{%- if m.role == 'system' %}{{- '<|system|>' + m.content + '<|endofsystem|>\n' }}{%- elif m.role == 'user' %}{% if m.content is string %}{{- '<|user|>' + m.content + '<|endofuser|>' }}{% else %} {% for content in m.content %}{% if content['type'] == 'image' or 'image' in content or 'image_url' in content %}{% set image_count.value = image_count.value + 1 %}{% if add_vision_id %}Picture {{ image_count.value }}: {% endif %}<|img|><|imgpad|><|endofimg|>{% elif content['type'] == 'video' or 'video' in content %}{% set video_count.value = video_count.value + 1 %}{% if add_vision_id %}Video {{ video_count.value }}: {% endif %}<|img|><|video_pad|><|endofimg|>{% elif 'text' in content %}{{ content['text'] }}{% endif %}{% endfor %}{%- endif %}{%- elif m.role == 'assistant' %}{{- '<|assistant|>' + m.content }}{%- if not loop.last %}{{- '<|endofassistant|>' }}{%- endif %}{%- endif %}{%- endfor %}{%- if messages[-1].role != 'assistant' %}{{- '<|assistant|>' }}{%- endif %}"
3
+ }
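+
+ A minimal sketch of how this template turns a message list into the model's prompt string (assuming `jinja2`; in practice the processor's `apply_chat_template` handles this):
+
+ ```python
+ import json
+ from jinja2 import Template
+
+ with open("./weights/DotsOCR/chat_template.json") as f:
+     template = Template(json.load(f)["chat_template"])
+
+ messages = [{"role": "user", "content": [
+     {"type": "image", "image": "demo/demo_image1.jpg"},
+     {"type": "text", "text": "Extract the text."},
+ ]}]
+
+ # Prints the prompt with the image placeholder tokens (<|img|><|imgpad|><|endofimg|>)
+ # and a trailing <|assistant|> generation marker.
+ print(template.render(messages=messages))
+ ```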
config.json ADDED
@@ -0,0 +1,51 @@
1
+ {
2
+ "architectures": [
3
+ "DotsOCRForCausalLM"
4
+ ],
5
+ "model_type": "dots_ocr",
6
+ "auto_map": {
7
+ "AutoConfig": "configuration_dots.DotsOCRConfig",
8
+ "AutoModelForCausalLM": "modeling_dots_ocr.DotsOCRForCausalLM"
9
+ },
10
+ "attention_bias": true,
11
+ "attention_dropout": 0.0,
12
+ "hidden_act": "silu",
13
+ "hidden_size": 1536,
14
+ "initializer_range": 0.02,
15
+ "intermediate_size": 8960,
16
+ "max_position_embeddings": 131072,
17
+ "max_window_layers": 28,
18
+ "num_attention_heads": 12,
19
+ "num_hidden_layers": 28,
20
+ "num_key_value_heads": 2,
21
+ "rms_norm_eps": 1e-06,
22
+ "rope_scaling": null,
23
+ "rope_theta": 1000000,
24
+ "sliding_window": 131072,
25
+ "tie_word_embeddings": false,
26
+ "torch_dtype": "bfloat16",
27
+ "transformers_version": "4.51.0",
28
+ "use_cache": true,
29
+ "use_sliding_window": false,
30
+ "vocab_size": 151936,
31
+ "image_token_id": 151665,
32
+ "video_token_id": 151656,
33
+ "vision_config": {
34
+ "embed_dim": 1536,
35
+ "hidden_size": 1536,
36
+ "intermediate_size": 4224,
37
+ "num_hidden_layers": 42,
38
+ "num_attention_heads": 12,
39
+ "num_channels": 3,
40
+ "patch_size": 14,
41
+ "post_norm": true,
42
+ "rms_norm_eps": 1e-05,
43
+ "spatial_merge_size": 2,
44
+ "temporal_patch_size": 1,
45
+ "use_bias": false,
46
+ "attn_implementation": "flash_attention_2",
47
+ "init_merger_std": 0.02,
48
+ "initializer_range": 0.02,
49
+ "is_causal": false
50
+ }
51
+ }
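+
+ Since the `auto_map` entries register the custom classes, the configuration above (including the nested vision tower) can be inspected with `AutoConfig`. A minimal sketch, assuming the weights live in `./weights/DotsOCR`:
+
+ ```python
+ from transformers import AutoConfig
+
+ config = AutoConfig.from_pretrained("./weights/DotsOCR", trust_remote_code=True)
+ print(config.model_type)                       # dots_ocr
+ print(config.num_hidden_layers)                # 28 decoder layers in the 1.7B LLM
+ print(config.vision_config.num_hidden_layers)  # 42 layers in the vision encoder
+ ```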
configuration_dots.py ADDED
@@ -0,0 +1,76 @@
1
+ from typing import Any, Optional
2
+ from transformers.configuration_utils import PretrainedConfig
3
+ from transformers.models.qwen2 import Qwen2Config
4
+ from transformers import Qwen2_5_VLProcessor, AutoProcessor
5
+ from transformers.models.auto.configuration_auto import CONFIG_MAPPING
6
+
7
+
8
+ class DotsVisionConfig(PretrainedConfig):
9
+ model_type: str = "dots_vit"
10
+
11
+ def __init__(
12
+ self,
13
+ embed_dim: int = 1536, # vision encoder embed size
14
+ hidden_size: int = 1536, # after merger hidden size
15
+ intermediate_size: int = 4224,
16
+ num_hidden_layers: int = 42,
17
+ num_attention_heads: int = 12,
18
+ num_channels: int = 3,
19
+ patch_size: int = 14,
20
+ spatial_merge_size: int = 2,
21
+ temporal_patch_size: int = 1,
22
+ rms_norm_eps: float = 1e-5,
23
+ use_bias: bool = False,
24
+ attn_implementation="flash_attention_2", # "eager","sdpa","flash_attention_2"
25
+ initializer_range=0.02,
26
+ init_merger_std=0.02,
27
+ is_causal=False, # ve causal forward
28
+ post_norm=True,
29
+ gradient_checkpointing=False,
30
+ **kwargs: Any,
31
+ ):
32
+ super().__init__(**kwargs)
33
+ self.embed_dim = embed_dim
34
+ self.hidden_size = hidden_size
35
+ self.intermediate_size = intermediate_size
36
+ self.num_hidden_layers = num_hidden_layers
37
+ self.num_attention_heads = num_attention_heads
38
+ self.num_channels = num_channels
39
+ self.patch_size = patch_size
40
+ self.spatial_merge_size = spatial_merge_size
41
+ self.temporal_patch_size = temporal_patch_size
42
+ self.rms_norm_eps = rms_norm_eps
43
+ self.use_bias = use_bias
44
+ self.attn_implementation = attn_implementation
45
+ self.initializer_range = initializer_range
46
+ self.init_merger_std = init_merger_std
47
+ self.is_causal = is_causal
48
+ self.post_norm = post_norm
49
+ self.gradient_checkpointing = gradient_checkpointing
50
+
51
+
52
+
53
+ class DotsOCRConfig(Qwen2Config):
54
+ model_type = "dots_ocr"
55
+ def __init__(self,
56
+ image_token_id = 151665,
57
+ video_token_id = 151656,
58
+ vision_config: Optional[dict] = None, *args, **kwargs):
59
+ super().__init__(*args, **kwargs)
60
+ self.image_token_id = image_token_id
61
+ self.video_token_id = video_token_id
62
+ self.vision_config = DotsVisionConfig(**(vision_config or {}))
63
+
64
+ def save_pretrained(self, save_directory, **kwargs):
65
+ self._auto_class = None
66
+ super().save_pretrained(save_directory, **kwargs)
67
+
68
+
69
+ class DotsVLProcessor(Qwen2_5_VLProcessor):
70
+ def __init__(self, image_processor=None, tokenizer=None, chat_template=None, **kwargs):
71
+ super().__init__(image_processor, tokenizer, chat_template=chat_template)
72
+ self.image_token = "<|imgpad|>" if not hasattr(tokenizer, "image_token") else tokenizer.image_token
73
+
74
+
75
+ AutoProcessor.register("dots_ocr", DotsVLProcessor)
76
+ CONFIG_MAPPING.register("dots_ocr", DotsOCRConfig)
generation_config.json ADDED
@@ -0,0 +1,7 @@
1
+ {
2
+ "max_length": 32768,
3
+ "eos_token_id": [
4
+ 151643,
5
+ 151673
6
+ ]
7
+ }
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
model-00001-of-00002.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ea1d532184f3adf5cbcfcc00b2cf5b2abfa6fe182768a3ae63d441a9b5fc99ac
3
+ size 4292758192
model-00002-of-00002.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:26ab1ec6c8b4e4116befbd59af42159f1dbcb0ad0c045a15e890bb2f6e8b0dae
3
+ size 1785673544
model.safetensors.index.json ADDED
@@ -0,0 +1,650 @@
1
+ {
2
+ "metadata": {
3
+ "total_size": 6078358528
4
+ },
5
+ "weight_map": {
6
+ "lm_head.weight": "model-00001-of-00002.safetensors",
7
+ "model.embed_tokens.weight": "model-00001-of-00002.safetensors",
8
+ "model.layers.0.input_layernorm.weight": "model-00001-of-00002.safetensors",
9
+ "model.layers.0.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
10
+ "model.layers.0.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
11
+ "model.layers.0.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
12
+ "model.layers.0.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
13
+ "model.layers.0.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
14
+ "model.layers.0.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
15
+ "model.layers.0.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
16
+ "model.layers.0.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
17
+ "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
18
+ "model.layers.0.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
19
+ "model.layers.0.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
20
+ "model.layers.1.input_layernorm.weight": "model-00001-of-00002.safetensors",
21
+ "model.layers.1.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
22
+ "model.layers.1.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
23
+ "model.layers.1.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
24
+ "model.layers.1.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
25
+ "model.layers.1.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
26
+ "model.layers.1.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
27
+ "model.layers.1.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
28
+ "model.layers.1.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
29
+ "model.layers.1.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
30
+ "model.layers.1.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
31
+ "model.layers.1.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
32
+ "model.layers.10.input_layernorm.weight": "model-00001-of-00002.safetensors",
33
+ "model.layers.10.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
34
+ "model.layers.10.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
35
+ "model.layers.10.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
36
+ "model.layers.10.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
37
+ "model.layers.10.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
38
+ "model.layers.10.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
39
+ "model.layers.10.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
40
+ "model.layers.10.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
41
+ "model.layers.10.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
42
+ "model.layers.10.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
43
+ "model.layers.10.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
44
+ "model.layers.11.input_layernorm.weight": "model-00001-of-00002.safetensors",
45
+ "model.layers.11.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
46
+ "model.layers.11.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
47
+ "model.layers.11.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
48
+ "model.layers.11.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
49
+ "model.layers.11.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
50
+ "model.layers.11.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
51
+ "model.layers.11.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
52
+ "model.layers.11.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
53
+ "model.layers.11.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
54
+ "model.layers.11.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
55
+ "model.layers.11.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
56
+ "model.layers.12.input_layernorm.weight": "model-00001-of-00002.safetensors",
57
+ "model.layers.12.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
58
+ "model.layers.12.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
59
+ "model.layers.12.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
60
+ "model.layers.12.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
61
+ "model.layers.12.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
62
+ "model.layers.12.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
63
+ "model.layers.12.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
64
+ "model.layers.12.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
65
+ "model.layers.12.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
66
+ "model.layers.12.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
67
+ "model.layers.12.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
68
+ "model.layers.13.input_layernorm.weight": "model-00001-of-00002.safetensors",
69
+ "model.layers.13.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
70
+ "model.layers.13.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
71
+ "model.layers.13.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
72
+ "model.layers.13.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
73
+ "model.layers.13.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
74
+ "model.layers.13.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
75
+ "model.layers.13.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
76
+ "model.layers.13.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
77
+ "model.layers.13.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
78
+ "model.layers.13.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
79
+ "model.layers.13.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
80
+ "model.layers.14.input_layernorm.weight": "model-00001-of-00002.safetensors",
81
+ "model.layers.14.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
82
+ "model.layers.14.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
83
+ "model.layers.14.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
84
+ "model.layers.14.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
85
+ "model.layers.14.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
86
+ "model.layers.14.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
87
+ "model.layers.14.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
88
+ "model.layers.14.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
89
+ "model.layers.14.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
90
+ "model.layers.14.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
91
+ "model.layers.14.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
92
+ "model.layers.15.input_layernorm.weight": "model-00001-of-00002.safetensors",
93
+ "model.layers.15.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
94
+ "model.layers.15.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
95
+ "model.layers.15.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
96
+ "model.layers.15.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
97
+ "model.layers.15.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
98
+ "model.layers.15.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
99
+ "model.layers.15.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
100
+ "model.layers.15.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
101
+ "model.layers.15.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
102
+ "model.layers.15.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
103
+ "model.layers.15.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
104
+ "model.layers.16.input_layernorm.weight": "model-00001-of-00002.safetensors",
105
+ "model.layers.16.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
106
+ "model.layers.16.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
107
+ "model.layers.16.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
108
+ "model.layers.16.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
109
+ "model.layers.16.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
110
+ "model.layers.16.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
111
+ "model.layers.16.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
112
+ "model.layers.16.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
113
+ "model.layers.16.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
114
+ "model.layers.16.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
115
+ "model.layers.16.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
116
+ "model.layers.17.input_layernorm.weight": "model-00001-of-00002.safetensors",
117
+ "model.layers.17.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
118
+ "model.layers.17.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
119
+ "model.layers.17.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
120
+ "model.layers.17.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
121
+ "model.layers.17.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
122
+ "model.layers.17.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
123
+ "model.layers.17.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
124
+ "model.layers.17.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
125
+ "model.layers.17.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
126
+ "model.layers.17.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
127
+ "model.layers.17.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
128
+ "model.layers.18.input_layernorm.weight": "model-00001-of-00002.safetensors",
129
+ "model.layers.18.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
130
+ "model.layers.18.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
131
+ "model.layers.18.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
132
+ "model.layers.18.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
133
+ "model.layers.18.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
134
+ "model.layers.18.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
135
+ "model.layers.18.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
136
+ "model.layers.18.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
137
+ "model.layers.18.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
138
+ "model.layers.18.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
139
+ "model.layers.18.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
140
+ "model.layers.19.input_layernorm.weight": "model-00001-of-00002.safetensors",
141
+ "model.layers.19.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
142
+ "model.layers.19.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
143
+ "model.layers.19.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
144
+ "model.layers.19.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
145
+ "model.layers.19.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
146
+ "model.layers.19.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
147
+ "model.layers.19.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
148
+ "model.layers.19.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
149
+ "model.layers.19.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
150
+ "model.layers.19.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
151
+ "model.layers.19.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
152
+ "model.layers.2.input_layernorm.weight": "model-00001-of-00002.safetensors",
153
+ "model.layers.2.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
154
+ "model.layers.2.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
155
+ "model.layers.2.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
156
+ "model.layers.2.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
157
+ "model.layers.2.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
158
+ "model.layers.2.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
159
+ "model.layers.2.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
160
+ "model.layers.2.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
161
+ "model.layers.2.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
162
+ "model.layers.2.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
163
+ "model.layers.2.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
164
+ "model.layers.20.input_layernorm.weight": "model-00001-of-00002.safetensors",
165
+ "model.layers.20.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
166
+ "model.layers.20.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
167
+ "model.layers.20.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
168
+ "model.layers.20.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
169
+ "model.layers.20.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
170
+ "model.layers.20.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
171
+ "model.layers.20.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
172
+ "model.layers.20.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
173
+ "model.layers.20.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
174
+ "model.layers.20.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
175
+ "model.layers.20.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
176
+ "model.layers.21.input_layernorm.weight": "model-00001-of-00002.safetensors",
177
+ "model.layers.21.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
178
+ "model.layers.21.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
179
+ "model.layers.21.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
180
+ "model.layers.21.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
181
+ "model.layers.21.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
182
+ "model.layers.21.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
183
+ "model.layers.21.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
184
+ "model.layers.21.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
185
+ "model.layers.21.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
186
+ "model.layers.21.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
187
+ "model.layers.21.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
188
+ "model.layers.22.input_layernorm.weight": "model-00001-of-00002.safetensors",
189
+ "model.layers.22.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
190
+ "model.layers.22.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
191
+ "model.layers.22.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
192
+ "model.layers.22.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
193
+ "model.layers.22.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
194
+ "model.layers.22.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
195
+ "model.layers.22.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
196
+ "model.layers.22.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
197
+ "model.layers.22.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
198
+ "model.layers.22.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
199
+ "model.layers.22.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
200
+ "model.layers.23.input_layernorm.weight": "model-00001-of-00002.safetensors",
201
+ "model.layers.23.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
202
+ "model.layers.23.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
203
+ "model.layers.23.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
204
+ "model.layers.23.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
205
+ "model.layers.23.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
206
+ "model.layers.23.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
207
+ "model.layers.23.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
208
+ "model.layers.23.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
209
+ "model.layers.23.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
210
+ "model.layers.23.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
211
+ "model.layers.23.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
212
+ "model.layers.24.input_layernorm.weight": "model-00001-of-00002.safetensors",
213
+ "model.layers.24.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
214
+ "model.layers.24.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
215
+ "model.layers.24.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
216
+ "model.layers.24.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
217
+ "model.layers.24.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
218
+ "model.layers.24.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
219
+ "model.layers.24.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
220
+ "model.layers.24.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
221
+ "model.layers.24.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
222
+ "model.layers.24.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
223
+ "model.layers.24.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
224
+ "model.layers.25.input_layernorm.weight": "model-00001-of-00002.safetensors",
225
+ "model.layers.25.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
226
+ "model.layers.25.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
227
+ "model.layers.25.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
228
+ "model.layers.25.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
229
+ "model.layers.25.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
230
+ "model.layers.25.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
231
+ "model.layers.25.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
232
+ "model.layers.25.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
233
+ "model.layers.25.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
234
+ "model.layers.25.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
235
+ "model.layers.25.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
236
+ "model.layers.26.input_layernorm.weight": "model-00001-of-00002.safetensors",
237
+ "model.layers.26.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
238
+ "model.layers.26.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
239
+ "model.layers.26.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
240
+ "model.layers.26.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
241
+ "model.layers.26.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
242
+ "model.layers.26.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
243
+ "model.layers.26.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
244
+ "model.layers.26.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
245
+ "model.layers.26.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
246
+ "model.layers.26.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
247
+ "model.layers.26.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
248
+ "model.layers.27.input_layernorm.weight": "model-00001-of-00002.safetensors",
249
+ "model.layers.27.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
250
+ "model.layers.27.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
251
+ "model.layers.27.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
252
+ "model.layers.27.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
253
+ "model.layers.27.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
254
+ "model.layers.27.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
255
+ "model.layers.27.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
256
+ "model.layers.27.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
257
+ "model.layers.27.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
258
+ "model.layers.27.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
259
+ "model.layers.27.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
260
+ "model.layers.3.input_layernorm.weight": "model-00001-of-00002.safetensors",
261
+ "model.layers.3.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
262
+ "model.layers.3.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
263
+ "model.layers.3.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
264
+ "model.layers.3.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
265
+ "model.layers.3.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
266
+ "model.layers.3.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
267
+ "model.layers.3.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
268
+ "model.layers.3.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
269
+ "model.layers.3.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
270
+ "model.layers.3.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
271
+ "model.layers.3.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
272
+ "model.layers.4.input_layernorm.weight": "model-00001-of-00002.safetensors",
273
+ "model.layers.4.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
274
+ "model.layers.4.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
275
+ "model.layers.4.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
276
+ "model.layers.4.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
277
+ "model.layers.4.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
278
+ "model.layers.4.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
279
+ "model.layers.4.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
280
+ "model.layers.4.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
281
+ "model.layers.4.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
282
+ "model.layers.4.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
283
+ "model.layers.4.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
284
+ "model.layers.5.input_layernorm.weight": "model-00001-of-00002.safetensors",
285
+ "model.layers.5.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
286
+ "model.layers.5.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
287
+ "model.layers.5.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
288
+ "model.layers.5.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
289
+ "model.layers.5.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
290
+ "model.layers.5.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
291
+ "model.layers.5.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
292
+ "model.layers.5.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
293
+ "model.layers.5.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
294
+ "model.layers.5.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
295
+ "model.layers.5.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
296
+ "model.layers.6.input_layernorm.weight": "model-00001-of-00002.safetensors",
297
+ "model.layers.6.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
298
+ "model.layers.6.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
299
+ "model.layers.6.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
300
+ "model.layers.6.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
301
+ "model.layers.6.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
302
+ "model.layers.6.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
303
+ "model.layers.6.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
304
+ "model.layers.6.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
305
+ "model.layers.6.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
306
+ "model.layers.6.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
307
+ "model.layers.6.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
308
+ "model.layers.7.input_layernorm.weight": "model-00001-of-00002.safetensors",
309
+ "model.layers.7.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
310
+ "model.layers.7.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
311
+ "model.layers.7.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
312
+ "model.layers.7.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
313
+ "model.layers.7.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
314
+ "model.layers.7.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
315
+ "model.layers.7.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
316
+ "model.layers.7.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
317
+ "model.layers.7.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
318
+ "model.layers.7.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
319
+ "model.layers.7.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
320
+ "model.layers.8.input_layernorm.weight": "model-00001-of-00002.safetensors",
321
+ "model.layers.8.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
322
+ "model.layers.8.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
323
+ "model.layers.8.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
324
+ "model.layers.8.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
325
+ "model.layers.8.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
326
+ "model.layers.8.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
327
+ "model.layers.8.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
328
+ "model.layers.8.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
329
+ "model.layers.8.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
330
+ "model.layers.8.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
331
+ "model.layers.8.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
332
+ "model.layers.9.input_layernorm.weight": "model-00001-of-00002.safetensors",
333
+ "model.layers.9.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
334
+ "model.layers.9.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
335
+ "model.layers.9.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
336
+ "model.layers.9.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
337
+ "model.layers.9.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
338
+ "model.layers.9.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
339
+ "model.layers.9.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
340
+ "model.layers.9.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
341
+ "model.layers.9.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
342
+ "model.layers.9.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
343
+ "model.layers.9.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
344
+ "model.norm.weight": "model-00001-of-00002.safetensors",
345
+ "vision_tower.blocks.0.attn.proj.weight": "model-00001-of-00002.safetensors",
346
+ "vision_tower.blocks.0.attn.qkv.weight": "model-00001-of-00002.safetensors",
347
+ "vision_tower.blocks.0.mlp.fc1.weight": "model-00001-of-00002.safetensors",
348
+ "vision_tower.blocks.0.mlp.fc2.weight": "model-00001-of-00002.safetensors",
349
+ "vision_tower.blocks.0.mlp.fc3.weight": "model-00001-of-00002.safetensors",
350
+ "vision_tower.blocks.0.norm1.weight": "model-00001-of-00002.safetensors",
351
+ "vision_tower.blocks.0.norm2.weight": "model-00001-of-00002.safetensors",
352
+ "vision_tower.blocks.1.attn.proj.weight": "model-00001-of-00002.safetensors",
353
+ "vision_tower.blocks.1.attn.qkv.weight": "model-00001-of-00002.safetensors",
354
+ "vision_tower.blocks.1.mlp.fc1.weight": "model-00001-of-00002.safetensors",
355
+ "vision_tower.blocks.1.mlp.fc2.weight": "model-00001-of-00002.safetensors",
356
+ "vision_tower.blocks.1.mlp.fc3.weight": "model-00001-of-00002.safetensors",
357
+ "vision_tower.blocks.1.norm1.weight": "model-00001-of-00002.safetensors",
358
+ "vision_tower.blocks.1.norm2.weight": "model-00001-of-00002.safetensors",
359
+ "vision_tower.blocks.10.attn.proj.weight": "model-00001-of-00002.safetensors",
360
+ "vision_tower.blocks.10.attn.qkv.weight": "model-00001-of-00002.safetensors",
361
+ "vision_tower.blocks.10.mlp.fc1.weight": "model-00001-of-00002.safetensors",
362
+ "vision_tower.blocks.10.mlp.fc2.weight": "model-00001-of-00002.safetensors",
363
+ "vision_tower.blocks.10.mlp.fc3.weight": "model-00001-of-00002.safetensors",
364
+ "vision_tower.blocks.10.norm1.weight": "model-00001-of-00002.safetensors",
365
+ "vision_tower.blocks.10.norm2.weight": "model-00001-of-00002.safetensors",
366
+ "vision_tower.blocks.11.attn.proj.weight": "model-00001-of-00002.safetensors",
367
+ "vision_tower.blocks.11.attn.qkv.weight": "model-00001-of-00002.safetensors",
368
+ "vision_tower.blocks.11.mlp.fc1.weight": "model-00001-of-00002.safetensors",
369
+ "vision_tower.blocks.11.mlp.fc2.weight": "model-00001-of-00002.safetensors",
370
+ "vision_tower.blocks.11.mlp.fc3.weight": "model-00001-of-00002.safetensors",
371
+ "vision_tower.blocks.11.norm1.weight": "model-00001-of-00002.safetensors",
372
+ "vision_tower.blocks.11.norm2.weight": "model-00001-of-00002.safetensors",
373
+ "vision_tower.blocks.12.attn.proj.weight": "model-00001-of-00002.safetensors",
374
+ "vision_tower.blocks.12.attn.qkv.weight": "model-00001-of-00002.safetensors",
375
+ "vision_tower.blocks.12.mlp.fc1.weight": "model-00001-of-00002.safetensors",
376
+ "vision_tower.blocks.12.mlp.fc2.weight": "model-00001-of-00002.safetensors",
377
+ "vision_tower.blocks.12.mlp.fc3.weight": "model-00001-of-00002.safetensors",
378
+ "vision_tower.blocks.12.norm1.weight": "model-00001-of-00002.safetensors",
379
+ "vision_tower.blocks.12.norm2.weight": "model-00001-of-00002.safetensors",
380
+ "vision_tower.blocks.13.attn.proj.weight": "model-00001-of-00002.safetensors",
381
+ "vision_tower.blocks.13.attn.qkv.weight": "model-00001-of-00002.safetensors",
382
+ "vision_tower.blocks.13.mlp.fc1.weight": "model-00001-of-00002.safetensors",
383
+ "vision_tower.blocks.13.mlp.fc2.weight": "model-00001-of-00002.safetensors",
384
+ "vision_tower.blocks.13.mlp.fc3.weight": "model-00001-of-00002.safetensors",
385
+ "vision_tower.blocks.13.norm1.weight": "model-00001-of-00002.safetensors",
386
+ "vision_tower.blocks.13.norm2.weight": "model-00001-of-00002.safetensors",
387
+ "vision_tower.blocks.14.attn.proj.weight": "model-00001-of-00002.safetensors",
388
+ "vision_tower.blocks.14.attn.qkv.weight": "model-00001-of-00002.safetensors",
389
+ "vision_tower.blocks.14.mlp.fc1.weight": "model-00001-of-00002.safetensors",
390
+ "vision_tower.blocks.14.mlp.fc2.weight": "model-00001-of-00002.safetensors",
391
+ "vision_tower.blocks.14.mlp.fc3.weight": "model-00001-of-00002.safetensors",
392
+ "vision_tower.blocks.14.norm1.weight": "model-00001-of-00002.safetensors",
393
+ "vision_tower.blocks.14.norm2.weight": "model-00001-of-00002.safetensors",
394
+ "vision_tower.blocks.15.attn.proj.weight": "model-00001-of-00002.safetensors",
395
+ "vision_tower.blocks.15.attn.qkv.weight": "model-00001-of-00002.safetensors",
396
+ "vision_tower.blocks.15.mlp.fc1.weight": "model-00001-of-00002.safetensors",
397
+ "vision_tower.blocks.15.mlp.fc2.weight": "model-00001-of-00002.safetensors",
398
+ "vision_tower.blocks.15.mlp.fc3.weight": "model-00001-of-00002.safetensors",
399
+ "vision_tower.blocks.15.norm1.weight": "model-00001-of-00002.safetensors",
400
+ "vision_tower.blocks.15.norm2.weight": "model-00001-of-00002.safetensors",
401
+ "vision_tower.blocks.16.attn.proj.weight": "model-00001-of-00002.safetensors",
402
+ "vision_tower.blocks.16.attn.qkv.weight": "model-00001-of-00002.safetensors",
403
+ "vision_tower.blocks.16.mlp.fc1.weight": "model-00001-of-00002.safetensors",
404
+ "vision_tower.blocks.16.mlp.fc2.weight": "model-00001-of-00002.safetensors",
405
+ "vision_tower.blocks.16.mlp.fc3.weight": "model-00001-of-00002.safetensors",
406
+ "vision_tower.blocks.16.norm1.weight": "model-00001-of-00002.safetensors",
407
+ "vision_tower.blocks.16.norm2.weight": "model-00001-of-00002.safetensors",
408
+ "vision_tower.blocks.17.attn.proj.weight": "model-00001-of-00002.safetensors",
409
+ "vision_tower.blocks.17.attn.qkv.weight": "model-00001-of-00002.safetensors",
410
+ "vision_tower.blocks.17.mlp.fc1.weight": "model-00001-of-00002.safetensors",
411
+ "vision_tower.blocks.17.mlp.fc2.weight": "model-00001-of-00002.safetensors",
412
+ "vision_tower.blocks.17.mlp.fc3.weight": "model-00001-of-00002.safetensors",
413
+ "vision_tower.blocks.17.norm1.weight": "model-00001-of-00002.safetensors",
414
+ "vision_tower.blocks.17.norm2.weight": "model-00001-of-00002.safetensors",
415
+ "vision_tower.blocks.18.attn.proj.weight": "model-00001-of-00002.safetensors",
416
+ "vision_tower.blocks.18.attn.qkv.weight": "model-00001-of-00002.safetensors",
417
+ "vision_tower.blocks.18.mlp.fc1.weight": "model-00001-of-00002.safetensors",
418
+ "vision_tower.blocks.18.mlp.fc2.weight": "model-00001-of-00002.safetensors",
419
+ "vision_tower.blocks.18.mlp.fc3.weight": "model-00001-of-00002.safetensors",
420
+ "vision_tower.blocks.18.norm1.weight": "model-00001-of-00002.safetensors",
421
+ "vision_tower.blocks.18.norm2.weight": "model-00001-of-00002.safetensors",
422
+ "vision_tower.blocks.19.attn.proj.weight": "model-00001-of-00002.safetensors",
423
+ "vision_tower.blocks.19.attn.qkv.weight": "model-00001-of-00002.safetensors",
424
+ "vision_tower.blocks.19.mlp.fc1.weight": "model-00001-of-00002.safetensors",
425
+ "vision_tower.blocks.19.mlp.fc2.weight": "model-00001-of-00002.safetensors",
426
+ "vision_tower.blocks.19.mlp.fc3.weight": "model-00001-of-00002.safetensors",
427
+ "vision_tower.blocks.19.norm1.weight": "model-00001-of-00002.safetensors",
428
+ "vision_tower.blocks.19.norm2.weight": "model-00001-of-00002.safetensors",
429
+ "vision_tower.blocks.2.attn.proj.weight": "model-00001-of-00002.safetensors",
430
+ "vision_tower.blocks.2.attn.qkv.weight": "model-00001-of-00002.safetensors",
431
+ "vision_tower.blocks.2.mlp.fc1.weight": "model-00001-of-00002.safetensors",
432
+ "vision_tower.blocks.2.mlp.fc2.weight": "model-00001-of-00002.safetensors",
433
+ "vision_tower.blocks.2.mlp.fc3.weight": "model-00002-of-00002.safetensors",
434
+ "vision_tower.blocks.2.norm1.weight": "model-00002-of-00002.safetensors",
435
+ "vision_tower.blocks.2.norm2.weight": "model-00002-of-00002.safetensors",
436
+ "vision_tower.blocks.20.attn.proj.weight": "model-00002-of-00002.safetensors",
437
+ "vision_tower.blocks.20.attn.qkv.weight": "model-00002-of-00002.safetensors",
438
+ "vision_tower.blocks.20.mlp.fc1.weight": "model-00002-of-00002.safetensors",
439
+ "vision_tower.blocks.20.mlp.fc2.weight": "model-00002-of-00002.safetensors",
440
+ "vision_tower.blocks.20.mlp.fc3.weight": "model-00002-of-00002.safetensors",
441
+ "vision_tower.blocks.20.norm1.weight": "model-00002-of-00002.safetensors",
442
+ "vision_tower.blocks.20.norm2.weight": "model-00002-of-00002.safetensors",
443
+ "vision_tower.blocks.21.attn.proj.weight": "model-00002-of-00002.safetensors",
444
+ "vision_tower.blocks.21.attn.qkv.weight": "model-00002-of-00002.safetensors",
445
+ "vision_tower.blocks.21.mlp.fc1.weight": "model-00002-of-00002.safetensors",
446
+ "vision_tower.blocks.21.mlp.fc2.weight": "model-00002-of-00002.safetensors",
447
+ "vision_tower.blocks.21.mlp.fc3.weight": "model-00002-of-00002.safetensors",
448
+ "vision_tower.blocks.21.norm1.weight": "model-00002-of-00002.safetensors",
449
+ "vision_tower.blocks.21.norm2.weight": "model-00002-of-00002.safetensors",
450
+ "vision_tower.blocks.22.attn.proj.weight": "model-00002-of-00002.safetensors",
451
+ "vision_tower.blocks.22.attn.qkv.weight": "model-00002-of-00002.safetensors",
452
+ "vision_tower.blocks.22.mlp.fc1.weight": "model-00002-of-00002.safetensors",
453
+ "vision_tower.blocks.22.mlp.fc2.weight": "model-00002-of-00002.safetensors",
454
+ "vision_tower.blocks.22.mlp.fc3.weight": "model-00002-of-00002.safetensors",
455
+ "vision_tower.blocks.22.norm1.weight": "model-00002-of-00002.safetensors",
456
+ "vision_tower.blocks.22.norm2.weight": "model-00002-of-00002.safetensors",
457
+ "vision_tower.blocks.23.attn.proj.weight": "model-00002-of-00002.safetensors",
458
+ "vision_tower.blocks.23.attn.qkv.weight": "model-00002-of-00002.safetensors",
459
+ "vision_tower.blocks.23.mlp.fc1.weight": "model-00002-of-00002.safetensors",
460
+ "vision_tower.blocks.23.mlp.fc2.weight": "model-00002-of-00002.safetensors",
461
+ "vision_tower.blocks.23.mlp.fc3.weight": "model-00002-of-00002.safetensors",
462
+ "vision_tower.blocks.23.norm1.weight": "model-00002-of-00002.safetensors",
463
+ "vision_tower.blocks.23.norm2.weight": "model-00002-of-00002.safetensors",
464
+ "vision_tower.blocks.24.attn.proj.weight": "model-00002-of-00002.safetensors",
465
+ "vision_tower.blocks.24.attn.qkv.weight": "model-00002-of-00002.safetensors",
466
+ "vision_tower.blocks.24.mlp.fc1.weight": "model-00002-of-00002.safetensors",
467
+ "vision_tower.blocks.24.mlp.fc2.weight": "model-00002-of-00002.safetensors",
468
+ "vision_tower.blocks.24.mlp.fc3.weight": "model-00002-of-00002.safetensors",
469
+ "vision_tower.blocks.24.norm1.weight": "model-00002-of-00002.safetensors",
470
+ "vision_tower.blocks.24.norm2.weight": "model-00002-of-00002.safetensors",
471
+ "vision_tower.blocks.25.attn.proj.weight": "model-00002-of-00002.safetensors",
472
+ "vision_tower.blocks.25.attn.qkv.weight": "model-00002-of-00002.safetensors",
473
+ "vision_tower.blocks.25.mlp.fc1.weight": "model-00002-of-00002.safetensors",
474
+ "vision_tower.blocks.25.mlp.fc2.weight": "model-00002-of-00002.safetensors",
475
+ "vision_tower.blocks.25.mlp.fc3.weight": "model-00002-of-00002.safetensors",
476
+ "vision_tower.blocks.25.norm1.weight": "model-00002-of-00002.safetensors",
477
+ "vision_tower.blocks.25.norm2.weight": "model-00002-of-00002.safetensors",
478
+ "vision_tower.blocks.26.attn.proj.weight": "model-00002-of-00002.safetensors",
479
+ "vision_tower.blocks.26.attn.qkv.weight": "model-00002-of-00002.safetensors",
480
+ "vision_tower.blocks.26.mlp.fc1.weight": "model-00002-of-00002.safetensors",
481
+ "vision_tower.blocks.26.mlp.fc2.weight": "model-00002-of-00002.safetensors",
482
+ "vision_tower.blocks.26.mlp.fc3.weight": "model-00002-of-00002.safetensors",
483
+ "vision_tower.blocks.26.norm1.weight": "model-00002-of-00002.safetensors",
484
+ "vision_tower.blocks.26.norm2.weight": "model-00002-of-00002.safetensors",
485
+ "vision_tower.blocks.27.attn.proj.weight": "model-00002-of-00002.safetensors",
486
+ "vision_tower.blocks.27.attn.qkv.weight": "model-00002-of-00002.safetensors",
487
+ "vision_tower.blocks.27.mlp.fc1.weight": "model-00002-of-00002.safetensors",
488
+ "vision_tower.blocks.27.mlp.fc2.weight": "model-00002-of-00002.safetensors",
489
+ "vision_tower.blocks.27.mlp.fc3.weight": "model-00002-of-00002.safetensors",
490
+ "vision_tower.blocks.27.norm1.weight": "model-00002-of-00002.safetensors",
491
+ "vision_tower.blocks.27.norm2.weight": "model-00002-of-00002.safetensors",
492
+ "vision_tower.blocks.28.attn.proj.weight": "model-00002-of-00002.safetensors",
493
+ "vision_tower.blocks.28.attn.qkv.weight": "model-00002-of-00002.safetensors",
494
+ "vision_tower.blocks.28.mlp.fc1.weight": "model-00002-of-00002.safetensors",
495
+ "vision_tower.blocks.28.mlp.fc2.weight": "model-00002-of-00002.safetensors",
496
+ "vision_tower.blocks.28.mlp.fc3.weight": "model-00002-of-00002.safetensors",
497
+ "vision_tower.blocks.28.norm1.weight": "model-00002-of-00002.safetensors",
498
+ "vision_tower.blocks.28.norm2.weight": "model-00002-of-00002.safetensors",
499
+ "vision_tower.blocks.29.attn.proj.weight": "model-00002-of-00002.safetensors",
500
+ "vision_tower.blocks.29.attn.qkv.weight": "model-00002-of-00002.safetensors",
501
+ "vision_tower.blocks.29.mlp.fc1.weight": "model-00002-of-00002.safetensors",
502
+ "vision_tower.blocks.29.mlp.fc2.weight": "model-00002-of-00002.safetensors",
503
+ "vision_tower.blocks.29.mlp.fc3.weight": "model-00002-of-00002.safetensors",
504
+ "vision_tower.blocks.29.norm1.weight": "model-00002-of-00002.safetensors",
505
+ "vision_tower.blocks.29.norm2.weight": "model-00002-of-00002.safetensors",
506
+ "vision_tower.blocks.3.attn.proj.weight": "model-00002-of-00002.safetensors",
507
+ "vision_tower.blocks.3.attn.qkv.weight": "model-00002-of-00002.safetensors",
508
+ "vision_tower.blocks.3.mlp.fc1.weight": "model-00002-of-00002.safetensors",
509
+ "vision_tower.blocks.3.mlp.fc2.weight": "model-00002-of-00002.safetensors",
510
+ "vision_tower.blocks.3.mlp.fc3.weight": "model-00002-of-00002.safetensors",
511
+ "vision_tower.blocks.3.norm1.weight": "model-00002-of-00002.safetensors",
512
+ "vision_tower.blocks.3.norm2.weight": "model-00002-of-00002.safetensors",
513
+ "vision_tower.blocks.30.attn.proj.weight": "model-00002-of-00002.safetensors",
514
+ "vision_tower.blocks.30.attn.qkv.weight": "model-00002-of-00002.safetensors",
515
+ "vision_tower.blocks.30.mlp.fc1.weight": "model-00002-of-00002.safetensors",
516
+ "vision_tower.blocks.30.mlp.fc2.weight": "model-00002-of-00002.safetensors",
517
+ "vision_tower.blocks.30.mlp.fc3.weight": "model-00002-of-00002.safetensors",
518
+ "vision_tower.blocks.30.norm1.weight": "model-00002-of-00002.safetensors",
519
+ "vision_tower.blocks.30.norm2.weight": "model-00002-of-00002.safetensors",
520
+ "vision_tower.blocks.31.attn.proj.weight": "model-00002-of-00002.safetensors",
521
+ "vision_tower.blocks.31.attn.qkv.weight": "model-00002-of-00002.safetensors",
522
+ "vision_tower.blocks.31.mlp.fc1.weight": "model-00002-of-00002.safetensors",
523
+ "vision_tower.blocks.31.mlp.fc2.weight": "model-00002-of-00002.safetensors",
524
+ "vision_tower.blocks.31.mlp.fc3.weight": "model-00002-of-00002.safetensors",
525
+ "vision_tower.blocks.31.norm1.weight": "model-00002-of-00002.safetensors",
526
+ "vision_tower.blocks.31.norm2.weight": "model-00002-of-00002.safetensors",
527
+ "vision_tower.blocks.32.attn.proj.weight": "model-00002-of-00002.safetensors",
528
+ "vision_tower.blocks.32.attn.qkv.weight": "model-00002-of-00002.safetensors",
529
+ "vision_tower.blocks.32.mlp.fc1.weight": "model-00002-of-00002.safetensors",
530
+ "vision_tower.blocks.32.mlp.fc2.weight": "model-00002-of-00002.safetensors",
531
+ "vision_tower.blocks.32.mlp.fc3.weight": "model-00002-of-00002.safetensors",
532
+ "vision_tower.blocks.32.norm1.weight": "model-00002-of-00002.safetensors",
533
+ "vision_tower.blocks.32.norm2.weight": "model-00002-of-00002.safetensors",
534
+ "vision_tower.blocks.33.attn.proj.weight": "model-00002-of-00002.safetensors",
535
+ "vision_tower.blocks.33.attn.qkv.weight": "model-00002-of-00002.safetensors",
536
+ "vision_tower.blocks.33.mlp.fc1.weight": "model-00002-of-00002.safetensors",
537
+ "vision_tower.blocks.33.mlp.fc2.weight": "model-00002-of-00002.safetensors",
538
+ "vision_tower.blocks.33.mlp.fc3.weight": "model-00002-of-00002.safetensors",
539
+ "vision_tower.blocks.33.norm1.weight": "model-00002-of-00002.safetensors",
540
+ "vision_tower.blocks.33.norm2.weight": "model-00002-of-00002.safetensors",
541
+ "vision_tower.blocks.34.attn.proj.weight": "model-00002-of-00002.safetensors",
542
+ "vision_tower.blocks.34.attn.qkv.weight": "model-00002-of-00002.safetensors",
543
+ "vision_tower.blocks.34.mlp.fc1.weight": "model-00002-of-00002.safetensors",
544
+ "vision_tower.blocks.34.mlp.fc2.weight": "model-00002-of-00002.safetensors",
545
+ "vision_tower.blocks.34.mlp.fc3.weight": "model-00002-of-00002.safetensors",
546
+ "vision_tower.blocks.34.norm1.weight": "model-00002-of-00002.safetensors",
547
+ "vision_tower.blocks.34.norm2.weight": "model-00002-of-00002.safetensors",
548
+ "vision_tower.blocks.35.attn.proj.weight": "model-00002-of-00002.safetensors",
549
+ "vision_tower.blocks.35.attn.qkv.weight": "model-00002-of-00002.safetensors",
550
+ "vision_tower.blocks.35.mlp.fc1.weight": "model-00002-of-00002.safetensors",
551
+ "vision_tower.blocks.35.mlp.fc2.weight": "model-00002-of-00002.safetensors",
552
+ "vision_tower.blocks.35.mlp.fc3.weight": "model-00002-of-00002.safetensors",
553
+ "vision_tower.blocks.35.norm1.weight": "model-00002-of-00002.safetensors",
554
+ "vision_tower.blocks.35.norm2.weight": "model-00002-of-00002.safetensors",
555
+ "vision_tower.blocks.36.attn.proj.weight": "model-00002-of-00002.safetensors",
556
+ "vision_tower.blocks.36.attn.qkv.weight": "model-00002-of-00002.safetensors",
557
+ "vision_tower.blocks.36.mlp.fc1.weight": "model-00002-of-00002.safetensors",
558
+ "vision_tower.blocks.36.mlp.fc2.weight": "model-00002-of-00002.safetensors",
559
+ "vision_tower.blocks.36.mlp.fc3.weight": "model-00002-of-00002.safetensors",
560
+ "vision_tower.blocks.36.norm1.weight": "model-00002-of-00002.safetensors",
561
+ "vision_tower.blocks.36.norm2.weight": "model-00002-of-00002.safetensors",
562
+ "vision_tower.blocks.37.attn.proj.weight": "model-00002-of-00002.safetensors",
563
+ "vision_tower.blocks.37.attn.qkv.weight": "model-00002-of-00002.safetensors",
564
+ "vision_tower.blocks.37.mlp.fc1.weight": "model-00002-of-00002.safetensors",
565
+ "vision_tower.blocks.37.mlp.fc2.weight": "model-00002-of-00002.safetensors",
566
+ "vision_tower.blocks.37.mlp.fc3.weight": "model-00002-of-00002.safetensors",
567
+ "vision_tower.blocks.37.norm1.weight": "model-00002-of-00002.safetensors",
568
+ "vision_tower.blocks.37.norm2.weight": "model-00002-of-00002.safetensors",
569
+ "vision_tower.blocks.38.attn.proj.weight": "model-00002-of-00002.safetensors",
570
+ "vision_tower.blocks.38.attn.qkv.weight": "model-00002-of-00002.safetensors",
571
+ "vision_tower.blocks.38.mlp.fc1.weight": "model-00002-of-00002.safetensors",
572
+ "vision_tower.blocks.38.mlp.fc2.weight": "model-00002-of-00002.safetensors",
573
+ "vision_tower.blocks.38.mlp.fc3.weight": "model-00002-of-00002.safetensors",
574
+ "vision_tower.blocks.38.norm1.weight": "model-00002-of-00002.safetensors",
575
+ "vision_tower.blocks.38.norm2.weight": "model-00002-of-00002.safetensors",
576
+ "vision_tower.blocks.39.attn.proj.weight": "model-00002-of-00002.safetensors",
577
+ "vision_tower.blocks.39.attn.qkv.weight": "model-00002-of-00002.safetensors",
578
+ "vision_tower.blocks.39.mlp.fc1.weight": "model-00002-of-00002.safetensors",
579
+ "vision_tower.blocks.39.mlp.fc2.weight": "model-00002-of-00002.safetensors",
580
+ "vision_tower.blocks.39.mlp.fc3.weight": "model-00002-of-00002.safetensors",
581
+ "vision_tower.blocks.39.norm1.weight": "model-00002-of-00002.safetensors",
582
+ "vision_tower.blocks.39.norm2.weight": "model-00002-of-00002.safetensors",
583
+ "vision_tower.blocks.4.attn.proj.weight": "model-00002-of-00002.safetensors",
584
+ "vision_tower.blocks.4.attn.qkv.weight": "model-00002-of-00002.safetensors",
585
+ "vision_tower.blocks.4.mlp.fc1.weight": "model-00002-of-00002.safetensors",
586
+ "vision_tower.blocks.4.mlp.fc2.weight": "model-00002-of-00002.safetensors",
587
+ "vision_tower.blocks.4.mlp.fc3.weight": "model-00002-of-00002.safetensors",
588
+ "vision_tower.blocks.4.norm1.weight": "model-00002-of-00002.safetensors",
589
+ "vision_tower.blocks.4.norm2.weight": "model-00002-of-00002.safetensors",
590
+ "vision_tower.blocks.40.attn.proj.weight": "model-00002-of-00002.safetensors",
591
+ "vision_tower.blocks.40.attn.qkv.weight": "model-00002-of-00002.safetensors",
592
+ "vision_tower.blocks.40.mlp.fc1.weight": "model-00002-of-00002.safetensors",
593
+ "vision_tower.blocks.40.mlp.fc2.weight": "model-00002-of-00002.safetensors",
594
+ "vision_tower.blocks.40.mlp.fc3.weight": "model-00002-of-00002.safetensors",
595
+ "vision_tower.blocks.40.norm1.weight": "model-00002-of-00002.safetensors",
596
+ "vision_tower.blocks.40.norm2.weight": "model-00002-of-00002.safetensors",
597
+ "vision_tower.blocks.41.attn.proj.weight": "model-00002-of-00002.safetensors",
598
+ "vision_tower.blocks.41.attn.qkv.weight": "model-00002-of-00002.safetensors",
599
+ "vision_tower.blocks.41.mlp.fc1.weight": "model-00002-of-00002.safetensors",
600
+ "vision_tower.blocks.41.mlp.fc2.weight": "model-00002-of-00002.safetensors",
601
+ "vision_tower.blocks.41.mlp.fc3.weight": "model-00002-of-00002.safetensors",
602
+ "vision_tower.blocks.41.norm1.weight": "model-00002-of-00002.safetensors",
603
+ "vision_tower.blocks.41.norm2.weight": "model-00002-of-00002.safetensors",
604
+ "vision_tower.blocks.5.attn.proj.weight": "model-00002-of-00002.safetensors",
605
+ "vision_tower.blocks.5.attn.qkv.weight": "model-00002-of-00002.safetensors",
606
+ "vision_tower.blocks.5.mlp.fc1.weight": "model-00002-of-00002.safetensors",
607
+ "vision_tower.blocks.5.mlp.fc2.weight": "model-00002-of-00002.safetensors",
608
+ "vision_tower.blocks.5.mlp.fc3.weight": "model-00002-of-00002.safetensors",
609
+ "vision_tower.blocks.5.norm1.weight": "model-00002-of-00002.safetensors",
610
+ "vision_tower.blocks.5.norm2.weight": "model-00002-of-00002.safetensors",
611
+ "vision_tower.blocks.6.attn.proj.weight": "model-00002-of-00002.safetensors",
612
+ "vision_tower.blocks.6.attn.qkv.weight": "model-00002-of-00002.safetensors",
613
+ "vision_tower.blocks.6.mlp.fc1.weight": "model-00002-of-00002.safetensors",
614
+ "vision_tower.blocks.6.mlp.fc2.weight": "model-00002-of-00002.safetensors",
615
+ "vision_tower.blocks.6.mlp.fc3.weight": "model-00002-of-00002.safetensors",
616
+ "vision_tower.blocks.6.norm1.weight": "model-00002-of-00002.safetensors",
617
+ "vision_tower.blocks.6.norm2.weight": "model-00002-of-00002.safetensors",
618
+ "vision_tower.blocks.7.attn.proj.weight": "model-00002-of-00002.safetensors",
619
+ "vision_tower.blocks.7.attn.qkv.weight": "model-00002-of-00002.safetensors",
620
+ "vision_tower.blocks.7.mlp.fc1.weight": "model-00002-of-00002.safetensors",
621
+ "vision_tower.blocks.7.mlp.fc2.weight": "model-00002-of-00002.safetensors",
622
+ "vision_tower.blocks.7.mlp.fc3.weight": "model-00002-of-00002.safetensors",
623
+ "vision_tower.blocks.7.norm1.weight": "model-00002-of-00002.safetensors",
624
+ "vision_tower.blocks.7.norm2.weight": "model-00002-of-00002.safetensors",
625
+ "vision_tower.blocks.8.attn.proj.weight": "model-00002-of-00002.safetensors",
626
+ "vision_tower.blocks.8.attn.qkv.weight": "model-00002-of-00002.safetensors",
627
+ "vision_tower.blocks.8.mlp.fc1.weight": "model-00002-of-00002.safetensors",
628
+ "vision_tower.blocks.8.mlp.fc2.weight": "model-00002-of-00002.safetensors",
629
+ "vision_tower.blocks.8.mlp.fc3.weight": "model-00002-of-00002.safetensors",
630
+ "vision_tower.blocks.8.norm1.weight": "model-00002-of-00002.safetensors",
631
+ "vision_tower.blocks.8.norm2.weight": "model-00002-of-00002.safetensors",
632
+ "vision_tower.blocks.9.attn.proj.weight": "model-00002-of-00002.safetensors",
633
+ "vision_tower.blocks.9.attn.qkv.weight": "model-00002-of-00002.safetensors",
634
+ "vision_tower.blocks.9.mlp.fc1.weight": "model-00002-of-00002.safetensors",
635
+ "vision_tower.blocks.9.mlp.fc2.weight": "model-00002-of-00002.safetensors",
636
+ "vision_tower.blocks.9.mlp.fc3.weight": "model-00002-of-00002.safetensors",
637
+ "vision_tower.blocks.9.norm1.weight": "model-00002-of-00002.safetensors",
638
+ "vision_tower.blocks.9.norm2.weight": "model-00002-of-00002.safetensors",
639
+ "vision_tower.merger.ln_q.bias": "model-00002-of-00002.safetensors",
640
+ "vision_tower.merger.ln_q.weight": "model-00002-of-00002.safetensors",
641
+ "vision_tower.merger.mlp.0.bias": "model-00002-of-00002.safetensors",
642
+ "vision_tower.merger.mlp.0.weight": "model-00002-of-00002.safetensors",
643
+ "vision_tower.merger.mlp.2.bias": "model-00002-of-00002.safetensors",
644
+ "vision_tower.merger.mlp.2.weight": "model-00002-of-00002.safetensors",
645
+ "vision_tower.patch_embed.patchifier.norm.weight": "model-00002-of-00002.safetensors",
646
+ "vision_tower.patch_embed.patchifier.proj.bias": "model-00002-of-00002.safetensors",
647
+ "vision_tower.patch_embed.patchifier.proj.weight": "model-00002-of-00002.safetensors",
648
+ "vision_tower.post_trunk_norm.weight": "model-00002-of-00002.safetensors"
649
+ }
650
+ }
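
The index above follows the standard sharded-safetensors layout: a `weight_map` object that assigns every tensor name to one of the two shard files. A minimal sketch of how such an index can be inspected after downloading the repository (the local path is a placeholder, and the `weight_map` key is assumed from the standard format, not shown in this excerpt):

```python
import json
from collections import Counter

# Placeholder path to a local download of this repository.
index_path = "dots.ocr/model.safetensors.index.json"

with open(index_path, "r", encoding="utf-8") as f:
    index = json.load(f)

# Count how many tensors are stored in each shard file.
shard_counts = Counter(index["weight_map"].values())
for shard, num_tensors in sorted(shard_counts.items()):
    print(f"{shard}: {num_tensors} tensors")
```
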
modeling_dots_ocr.py ADDED
@@ -0,0 +1,131 @@
+ from typing import List, Optional, Tuple, Union
+
+ import torch
+ from transformers.modeling_outputs import CausalLMOutputWithPast
+ from transformers.models.qwen2 import Qwen2ForCausalLM
+
+ from .configuration_dots import DotsVisionConfig, DotsOCRConfig
+ from .modeling_dots_vision import DotsVisionTransformer
+
+
+ DOTS_VLM_MAX_IMAGES = 200
+
+
+ class DotsOCRForCausalLM(Qwen2ForCausalLM):
+     config_class = DotsOCRConfig
+
+     def __init__(self, config: DotsOCRConfig):
+         super().__init__(config)
+
+         if isinstance(self.config.vision_config, dict):
+             vision_config = DotsVisionConfig(**self.config.vision_config)
+             self.config.vision_config = vision_config
+         else:
+             vision_config = self.config.vision_config
+
+         self.vision_tower = DotsVisionTransformer(vision_config)
+
+     def prepare_inputs_embeds(
+         self,
+         input_ids: torch.LongTensor,
+         pixel_values: Optional[torch.FloatTensor] = None,
+         grid_thw: Optional[torch.FloatTensor] = None,
+         img_mask: Optional[torch.BoolTensor] = None,
+     ) -> torch.Tensor:
+         inputs_embeds = self.get_input_embeddings()(input_ids)
+
+         if pixel_values is not None:
+             assert img_mask is not None
+             if grid_thw.shape[0] > DOTS_VLM_MAX_IMAGES:
+                 print(
+                     f"Num image exceeded: {grid_thw.shape[0]} > {DOTS_VLM_MAX_IMAGES}, which may cause FSDP hang"
+                 )
+
+             vision_embeddings = self.vision_tower(pixel_values, grid_thw)
+
+             true_indices = torch.nonzero(img_mask).squeeze()
+             if len(true_indices) > vision_embeddings.size(0):
+                 print(
+                     f"img_mask sum > VE and will be truncated, mask.sum()={len(true_indices)} {vision_embeddings.size(0)=}"
+                 )
+                 true_indices = true_indices[: vision_embeddings.size(0)]
+                 new_img_mask = torch.zeros_like(img_mask, device=img_mask.device)
+                 new_img_mask[true_indices[:, 0], true_indices[:, 1]] = True
+             else:
+                 new_img_mask = img_mask
+
+             assert (
+                 vision_embeddings.size(0) == new_img_mask.sum()
+             ), f"{vision_embeddings.size(0)=}, {new_img_mask.sum()=}"
+
+             inputs_embeds = inputs_embeds.masked_scatter(
+                 new_img_mask.to(inputs_embeds.device).unsqueeze(-1).expand_as(inputs_embeds),
+                 vision_embeddings.to(inputs_embeds.device).type(inputs_embeds.dtype),
+             )
+
+         return inputs_embeds
+
+     def forward(
+         self,
+         input_ids: torch.LongTensor,
+         pixel_values: Optional[torch.FloatTensor] = None,
+         image_grid_thw: Optional[torch.FloatTensor] = None,
+         inputs_embeds: Optional[torch.Tensor] = None,
+         attention_mask: Optional[torch.Tensor] = None,
+         position_ids: Optional[torch.LongTensor] = None,
+         past_key_values: Optional[List[torch.FloatTensor]] = None,
+         labels: Optional[torch.LongTensor] = None,
+         output_attentions: Optional[bool] = None,
+         output_hidden_states: Optional[bool] = None,
+         return_dict: Optional[bool] = None,
+         use_cache: Optional[bool] = None,
+         logits_to_keep: int = 0,
+         **loss_kwargs,
+     ) -> Union[Tuple, CausalLMOutputWithPast]:
+         return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+         assert len(input_ids) >= 1, f"empty input_ids {input_ids.shape=} will cause gradnorm nan"
+         if inputs_embeds is None:
+             img_mask = input_ids == self.config.image_token_id
+             inputs_embeds = self.prepare_inputs_embeds(input_ids, pixel_values, image_grid_thw, img_mask)
+
+         outputs = super().forward(
+             inputs_embeds=inputs_embeds,
+             attention_mask=attention_mask,
+             position_ids=position_ids,
+             past_key_values=past_key_values,
+             labels=labels,
+             use_cache=use_cache if use_cache is not None else self.config.use_cache,
+             output_attentions=output_attentions,
+             output_hidden_states=output_hidden_states,
+             # return_dict=return_dict,
+             logits_to_keep=logits_to_keep,
+             **loss_kwargs,
+         )
+
+         return outputs
+
+     def prepare_inputs_for_generation(
+         self,
+         input_ids,
+         past_key_values=None,
+         inputs_embeds=None,
+         pixel_values=None,
+         attention_mask=None,
+         cache_position=None,
+         num_logits_to_keep=None,
+         **kwargs,
+     ):
+         model_inputs = super().prepare_inputs_for_generation(
+             input_ids,
+             past_key_values=past_key_values,
+             inputs_embeds=inputs_embeds,
+             attention_mask=attention_mask,
+             cache_position=cache_position,
+             num_logits_to_keep=num_logits_to_keep,
+             **kwargs,
+         )
+
+         if cache_position[0] == 0:
+             model_inputs["pixel_values"] = pixel_values
+
+         return model_inputs
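
`modeling_dots_ocr.py` defines `DotsOCRForCausalLM`, a Qwen2 language model extended with a `DotsVisionTransformer` whose embeddings are scattered into the image-token positions of the input sequence. A minimal inference sketch, assuming the repository's `auto_map` wires this class into `AutoModelForCausalLM` and that the bundled processor follows the Qwen2-VL conventions used above; the model path, image file, and prompt below are placeholders:

```python
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model_path = "rednote-hilab/dots.ocr"  # or a local clone of this repository

# trust_remote_code=True is needed so transformers loads DotsOCRForCausalLM
# from modeling_dots_ocr.py instead of a built-in class.
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)
processor = AutoProcessor.from_pretrained(model_path, trust_remote_code=True)

image = Image.open("page.png")  # placeholder document image
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Parse the layout of this document."},
        ],
    }
]
prompt = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)

with torch.no_grad():
    generated = model.generate(**inputs, max_new_tokens=1024)

# Drop the prompt tokens and decode only the newly generated text.
new_tokens = generated[:, inputs["input_ids"].shape[1]:]
print(processor.batch_decode(new_tokens, skip_special_tokens=True)[0])
```
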
modeling_dots_ocr_vllm.py ADDED
@@ -0,0 +1,429 @@
1
+ from functools import cached_property
2
+ from typing import Iterable, Literal, Mapping, Optional, Set, Tuple, TypedDict, Union
3
+
4
+ import torch
5
+ import torch.nn as nn
6
+ from transformers.models.qwen2_vl import Qwen2VLImageProcessor, Qwen2VLProcessor
7
+ from transformers.models.qwen2_vl.image_processing_qwen2_vl import smart_resize
8
+ from vllm import ModelRegistry
9
+ from vllm.config import VllmConfig
10
+ from vllm.model_executor.layers.sampler import SamplerOutput, get_sampler
11
+ from vllm.model_executor.models.interfaces import MultiModalEmbeddings, SupportsMultiModal
12
+ from vllm.model_executor.models.qwen2 import Qwen2ForCausalLM
13
+ from vllm.model_executor.models.qwen2_5_vl import (
14
+ Qwen2_5_VLMultiModalProcessor,
15
+ Qwen2_5_VLProcessingInfo,
16
+ )
17
+ from vllm.model_executor.models.qwen2_vl import Qwen2VLDummyInputsBuilder
18
+ from vllm.model_executor.models.utils import (
19
+ AutoWeightsLoader,
20
+ WeightsMapper,
21
+ init_vllm_registered_model,
22
+ maybe_prefix,
23
+ merge_multimodal_embeddings,
24
+ )
25
+ from vllm.model_executor.sampling_metadata import SamplingMetadata
26
+ from vllm.multimodal import MULTIMODAL_REGISTRY
27
+ from vllm.multimodal.inputs import MultiModalDataDict
28
+ from vllm.multimodal.parse import ImageSize
29
+ from vllm.sequence import IntermediateTensors
30
+
31
+ from .configuration_dots import DotsVisionConfig
32
+ from .configuration_dots import DotsOCRConfig
33
+ from .modeling_dots_vision import DotsVisionTransformer
34
+
35
+
36
+ class DotsOCRImagePixelInputs(TypedDict):
37
+ type: Literal["pixel_values", "image_grid_thw"]
38
+
39
+ pixel_values: torch.Tensor
40
+ image_grid_thw: torch.Tensor
41
+
42
+
43
+ class DotsOCRImageEmbeddingInputs(TypedDict):
44
+ type: Literal["image_embeds", "image_grid_thw"]
45
+ image_embeds: torch.Tensor
46
+ """Supported types:
47
+ - List[`torch.Tensor`]: A list of tensors holding all images' features.
48
+ Each tensor holds an image's features.
49
+ - `torch.Tensor`: A tensor holding all images' features
50
+ (concatenation of all images' feature tensors).
51
+
52
+ Tensor shape: `(num_image_features, hidden_size)`
53
+ - `num_image_features` varies based on
54
+ the number and resolution of the images.
55
+ - `hidden_size` must match the hidden size of language model backbone.
56
+ """
57
+
58
+ image_grid_thw: torch.Tensor
59
+
60
+
61
+ DotsOCRImageInputs = Union[DotsOCRImagePixelInputs, DotsOCRImageEmbeddingInputs]
62
+
63
+
64
+ class DotsOCRMultiModalProcessor(Qwen2_5_VLMultiModalProcessor):
65
+ pass
66
+
67
+
68
+ class DotsOCRDummyInputsBuilder(Qwen2VLDummyInputsBuilder):
69
+ def get_dummy_mm_data(
70
+ self,
71
+ seq_len: int,
72
+ mm_counts: Mapping[str, int],
73
+ ) -> MultiModalDataDict:
74
+ num_images = mm_counts.get("image", 0)
75
+
76
+ target_width, target_height = self.info.get_image_size_with_most_features()
77
+
78
+ return {
79
+ "image": self._get_dummy_images(width=target_width, height=target_height, num_images=num_images),
80
+ }
81
+
82
+
83
+ class DotsOCRProcessingInfo(Qwen2_5_VLProcessingInfo):
84
+ def get_hf_config(self) -> DotsOCRConfig:
85
+ config = self.ctx.get_hf_config()
86
+ if not config.__class__.__name__ == 'DotsOCRConfig':
87
+ raise TypeError(f"Expected DotsOCRConfig, got {type(config)}")
88
+
89
+ if hasattr(config, "vision_config") and isinstance(config.vision_config, dict):
90
+ config.vision_config = DotsVisionConfig(**config.vision_config)
91
+
92
+ return config
93
+
94
+ def get_hf_processor(
95
+ self,
96
+ *,
97
+ min_pixels: Optional[int] = None,
98
+ max_pixels: Optional[int] = None,
99
+ size: Optional[dict[str, int]] = None,
100
+ **kwargs: object,
101
+ ) -> Qwen2VLProcessor:
102
+ processor = self.ctx.get_hf_processor(
103
+ Qwen2VLProcessor,
104
+ image_processor=self.get_image_processor(min_pixels=min_pixels, max_pixels=max_pixels, size=size),
105
+ **kwargs,
106
+ )
107
+ processor.image_token = "<|imgpad|>"
108
+ processor.video_token = "<|video_pad|>"
109
+ return processor
110
+
111
+ def _get_vision_info(
112
+ self,
113
+ *,
114
+ image_width: int,
115
+ image_height: int,
116
+ num_frames: int = 1,
117
+ do_resize: bool = True,
118
+ image_processor: Optional[Qwen2VLImageProcessor],
119
+ ) -> tuple[ImageSize, int]:
120
+ if image_processor is None:
121
+ image_processor = self.get_image_processor()
122
+
123
+ hf_config: DotsOCRConfig = self.get_hf_config()
124
+ vision_config = hf_config.vision_config
125
+ patch_size = vision_config.patch_size
126
+ merge_size = vision_config.spatial_merge_size
127
+ temporal_patch_size = vision_config.temporal_patch_size
128
+
129
+ if do_resize:
130
+ resized_height, resized_width = smart_resize(
131
+ height=image_height,
132
+ width=image_width,
133
+ factor=patch_size * merge_size,
134
+ min_pixels=image_processor.min_pixels,
135
+ max_pixels=image_processor.max_pixels,
136
+ )
137
+ preprocessed_size = ImageSize(width=resized_width, height=resized_height)
138
+ else:
139
+ preprocessed_size = ImageSize(width=image_width, height=image_height)
140
+
141
+ # NOTE: Frames are padded to be divisible by `temporal_patch_size`
142
+ # https://github.com/huggingface/transformers/blob/v4.48.3/src/transformers/models/qwen2_vl/image_processing_qwen2_vl.py#L294
143
+ padded_num_frames = num_frames + num_frames % temporal_patch_size
144
+
145
+ grid_t = max(padded_num_frames // temporal_patch_size, 1)
146
+ grid_h = preprocessed_size.height // patch_size
147
+ grid_w = preprocessed_size.width // patch_size
148
+
149
+ num_patches = grid_t * grid_h * grid_w
150
+ num_vision_tokens = num_patches // (merge_size**2)
151
+
152
+ return preprocessed_size, num_vision_tokens
153
+
154
+
155
+ @MULTIMODAL_REGISTRY.register_processor(
156
+ Qwen2_5_VLMultiModalProcessor,
157
+ info=DotsOCRProcessingInfo,
158
+ dummy_inputs=DotsOCRDummyInputsBuilder,
159
+ )
160
+ class DotsOCRForCausalLM(nn.Module, SupportsMultiModal):
161
+ hf_to_vllm_mapper = WeightsMapper(
162
+ orig_to_new_prefix={
163
+ "lm_head.": "language_model.lm_head.",
164
+ "model.": "language_model.model.",
165
+ }
166
+ )
167
+ _tp_plan = {}
168
+
169
+ def __init__(self, *, vllm_config: VllmConfig, prefix: str = ""):
170
+ super().__init__()
171
+
172
+ self.config: DotsOCRConfig = vllm_config.model_config.hf_config
173
+ self.quant_config = vllm_config.quant_config
174
+ self.multimodal_config = vllm_config.model_config.multimodal_config
175
+
176
+ if isinstance(self.config.vision_config, dict):
177
+ vision_config = DotsVisionConfig(**self.config.vision_config)
178
+ self.config.vision_config = vision_config
179
+ else:
180
+ vision_config = self.config.vision_config
181
+
182
+ self.vision_tower = DotsVisionTransformer(vision_config)
183
+ self.language_model: Qwen2ForCausalLM = init_vllm_registered_model(
184
+ vllm_config=vllm_config,
185
+ hf_config=self.config,
186
+ prefix=maybe_prefix(prefix, "language_model"),
187
+ architectures=["Qwen2ForCausalLM"],
188
+ )
189
+
190
+ @cached_property
191
+ def sampler(self):
192
+ if hasattr(self.language_model, "sampler"):
193
+ return self.language_model.sampler
194
+
195
+ return get_sampler()
196
+
197
+ def _validate_and_reshape_mm_tensor(self, mm_input: object, name: str) -> torch.Tensor:
198
+ if not isinstance(mm_input, (torch.Tensor, list)):
199
+ raise ValueError(f"Incorrect type of {name}. " f"Got type: {type(mm_input)}")
200
+ if isinstance(mm_input, torch.Tensor):
201
+ if mm_input.ndim == 2:
202
+ return mm_input
203
+ if mm_input.ndim != 3:
204
+ raise ValueError(
205
+ f"{name} should be 2D or batched 3D tensor. "
206
+ f"Got ndim: {mm_input.ndim} "
207
+ f"(shape={mm_input.shape})"
208
+ )
209
+ return torch.concat(list(mm_input))
210
+ else:
211
+ return torch.concat(mm_input)
212
+
213
+ def _parse_and_validate_image_input(self, **kwargs: object) -> Optional[DotsOCRImageInputs]:
214
+ pixel_values = kwargs.pop("pixel_values", None)
215
+ image_embeds = kwargs.pop("image_embeds", None)
216
+ image_grid_thw = kwargs.pop("image_grid_thw", None)
217
+
218
+ if pixel_values is None and image_embeds is None:
219
+ return None
220
+
221
+ if pixel_values is not None:
222
+ pixel_values = self._validate_and_reshape_mm_tensor(pixel_values, "image pixel values")
223
+ image_grid_thw = self._validate_and_reshape_mm_tensor(image_grid_thw, "image grid_thw")
224
+
225
+ if not isinstance(pixel_values, (torch.Tensor, list)):
226
+ raise ValueError("Incorrect type of image pixel values. " f"Got type: {type(pixel_values)}")
227
+
228
+ return DotsOCRImagePixelInputs(
229
+ type="pixel_values", pixel_values=pixel_values, image_grid_thw=image_grid_thw
230
+ )
231
+
232
+ if image_embeds is not None:
233
+ image_embeds = self._validate_and_reshape_mm_tensor(image_embeds, "image embeds")
234
+ image_grid_thw = self._validate_and_reshape_mm_tensor(image_grid_thw, "image grid_thw")
235
+
236
+ if not isinstance(image_embeds, torch.Tensor):
237
+ raise ValueError("Incorrect type of image embeddings. " f"Got type: {type(image_embeds)}")
238
+ return DotsOCRImageEmbeddingInputs(
239
+ type="image_embeds", image_embeds=image_embeds, image_grid_thw=image_grid_thw
240
+ )
241
+
242
+ def vision_forward(self, pixel_values: torch.Tensor, image_grid_thw: torch.Tensor):
243
+ from vllm.distributed import (
244
+ get_tensor_model_parallel_group,
245
+ get_tensor_model_parallel_rank,
246
+ get_tensor_model_parallel_world_size,
247
+ )
248
+
249
+ assert self.vision_tower is not None
250
+
251
+ tp_rank = get_tensor_model_parallel_rank()
252
+ tp = get_tensor_model_parallel_world_size()
253
+
254
+ image_grid_thw_chunk = image_grid_thw.chunk(tp)
255
+ image_sizes_consum = torch.tensor([i.prod(-1).sum() for i in image_grid_thw_chunk]).cumsum(dim=0)
256
+ merge_size_square = self.vision_tower.config.spatial_merge_size**2
257
+ image_embedding = torch.zeros(
258
+ (
259
+ pixel_values.shape[0] // merge_size_square,
260
+ self.vision_tower.config.hidden_size,
261
+ ),
262
+ device=pixel_values.device,
263
+ dtype=pixel_values.dtype,
264
+ )
265
+
266
+ if tp_rank < len(image_sizes_consum):
267
+ idx_start = 0 if tp_rank == 0 else image_sizes_consum[tp_rank - 1].item()
268
+ idx_end = image_sizes_consum[tp_rank].item()
269
+ pixel_values_part = pixel_values[idx_start:idx_end]
270
+ image_grid_thw_part = image_grid_thw_chunk[tp_rank]
271
+ image_embedding_part = self.vision_tower(pixel_values_part, image_grid_thw_part)
272
+ image_embedding[idx_start // merge_size_square : idx_end // merge_size_square] = image_embedding_part
273
+
274
+ group = get_tensor_model_parallel_group().device_group
275
+ torch.distributed.all_reduce(image_embedding, group=group)
276
+ return image_embedding
277
+
278
+ def _process_image_input(self, image_input: DotsOCRImageInputs) -> tuple[torch.Tensor, ...]:
279
+ grid_thw = image_input["image_grid_thw"]
280
+ assert grid_thw.ndim == 2
281
+
282
+ if image_input["type"] == "image_embeds":
283
+ image_embeds = image_input["image_embeds"].type(self.vision_tower.dtype)
284
+ else:
285
+ pixel_values = image_input["pixel_values"].type(self.vision_tower.dtype)
286
+ image_embeds = self.vision_forward(pixel_values, grid_thw)[
287
+ :, : self.config.hidden_size
288
+ ]
289
+
290
+ # Split concatenated embeddings for each image item.
291
+ merge_size = self.vision_tower.config.spatial_merge_size
292
+ sizes = grid_thw.prod(-1) // merge_size // merge_size
293
+
294
+ return image_embeds.split(sizes.tolist())
295
+
296
+ def _parse_and_validate_multimodal_inputs(self, **kwargs: object) -> dict:
297
+ modalities = {}
298
+
299
+ # Preserve the order of modalities if there are multiple of them
300
+ # from the order of kwargs.
301
+ for input_key in kwargs:
302
+ if input_key in ("pixel_values", "image_embeds") and "images" not in modalities:
303
+ modalities["images"] = self._parse_and_validate_image_input(**kwargs)
304
+ return modalities
305
+
306
+ def get_language_model(self) -> torch.nn.Module:
307
+ return self.language_model
308
+
309
+ def get_multimodal_embeddings(self, **kwargs: object) -> Optional[MultiModalEmbeddings]:
310
+ modalities = self._parse_and_validate_multimodal_inputs(**kwargs)
311
+ if not modalities:
312
+ return None
313
+
314
+ # The result multimodal_embeddings is a tuple of tensors, with each
317
+ # tensor corresponding to a multimodal data item (image or video).
316
+ multimodal_embeddings: tuple[torch.Tensor, ...] = ()
317
+
318
+ # NOTE: It is important to iterate over the keys in this dictionary
319
+ # to preserve the order of the modalities.
320
+ for modality in modalities:
321
+ if modality == "images":
322
+ image_input = modalities["images"]
323
+ vision_embeddings = self._process_image_input(image_input)
324
+ multimodal_embeddings += vision_embeddings
325
+
326
+ return multimodal_embeddings
327
+
328
+ def get_input_embeddings(
329
+ self,
330
+ input_ids: torch.Tensor,
331
+ multimodal_embeddings: Optional[MultiModalEmbeddings] = None,
332
+ ) -> torch.Tensor:
333
+ inputs_embeds = self.language_model.get_input_embeddings(input_ids)
334
+ if multimodal_embeddings is not None:
335
+ inputs_embeds = merge_multimodal_embeddings(
336
+ input_ids,
337
+ inputs_embeds,
338
+ multimodal_embeddings,
339
+ [self.config.image_token_id, self.config.video_token_id],
340
+ )
341
+
342
+ return inputs_embeds
343
+
344
+ def get_input_embeddings_v0(
345
+ self,
346
+ input_ids: torch.Tensor,
347
+ image_input: Optional[DotsOCRImagePixelInputs] = None,
348
+ ) -> torch.Tensor:
349
+ inputs_embeds = self.get_input_embeddings(input_ids)
350
+ if image_input is not None:
351
+ image_embeds = self._process_image_input(image_input)
352
+ inputs_embeds = merge_multimodal_embeddings(
353
+ input_ids,
354
+ inputs_embeds,
355
+ image_embeds,
356
+ placeholder_token_id=self.config.image_token_id,
357
+ )
358
+ return inputs_embeds
359
+
360
+ def forward(
361
+ self,
362
+ input_ids: Optional[torch.Tensor],
363
+ positions: torch.Tensor,
364
+ intermediate_tensors: Optional[IntermediateTensors] = None,
365
+ inputs_embeds: Optional[torch.Tensor] = None,
366
+ **kwargs,
367
+ ) -> Union[torch.Tensor, IntermediateTensors]:
368
+ if intermediate_tensors is not None:
369
+ inputs_embeds = None
370
+ elif inputs_embeds is None and kwargs.get("pixel_values") is not None:
371
+ image_input = self._parse_and_validate_image_input(**kwargs)
372
+ if image_input is None:
373
+ inputs_embeds = None
374
+ else:
375
+ assert input_ids is not None
376
+ inputs_embeds = self.get_input_embeddings_v0(
377
+ input_ids,
378
+ image_input=image_input,
379
+ )
380
+ input_ids = None
381
+
382
+ hidden_states = self.language_model(
383
+ input_ids=input_ids,
384
+ positions=positions,
385
+ intermediate_tensors=intermediate_tensors,
386
+ inputs_embeds=inputs_embeds,
387
+ )
388
+
389
+ return hidden_states
390
+
391
+ def compute_logits(
392
+ self,
393
+ hidden_states: torch.Tensor,
394
+ sampling_metadata: SamplingMetadata,
395
+ ) -> Optional[torch.Tensor]:
396
+ return self.language_model.compute_logits(hidden_states, sampling_metadata)
397
+
398
+ def sample(
399
+ self,
400
+ logits: Optional[torch.Tensor],
401
+ sampling_metadata: SamplingMetadata,
402
+ ) -> Optional[SamplerOutput]:
403
+ next_tokens = self.sampler(logits, sampling_metadata)
404
+ return next_tokens
405
+
406
+ def load_weights(self, weights: Iterable[Tuple[str, torch.Tensor]]) -> Set[str]:
407
+ loader = AutoWeightsLoader(self)
408
+ return loader.load_weights(weights, mapper=self.hf_to_vllm_mapper)
409
+
410
+
411
+ def patch_vllm_chat_placeholder():
412
+ from vllm.entrypoints.chat_utils import BaseMultiModalItemTracker
413
+
414
+ ori = BaseMultiModalItemTracker._placeholder_str
415
+
416
+ def _placeholder_str(self, modality, current_count: int) -> Optional[str]:
417
+ hf_config = self._model_config.hf_config
418
+ model_type = hf_config.model_type
419
+ if modality in ("image",) and model_type in ["dots_ocr"]:
420
+ return "<|img|><|imgpad|><|endofimg|>"
421
+ return ori(self, modality, current_count)
422
+
423
+ BaseMultiModalItemTracker._placeholder_str = _placeholder_str
424
+
425
+ ModelRegistry.register_model(
426
+ "DotsOCRForCausalLM", DotsOCRForCausalLM,
427
+ )
428
+
429
+ patch_vllm_chat_placeholder()
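
The plugin above registers `DotsOCRForCausalLM` with vLLM's `ModelRegistry` and patches the chat placeholder so an image slot expands to `<|img|><|imgpad|><|endofimg|>`. Below is a minimal, hypothetical offline-inference sketch; it assumes this module has been imported (so the registration side effects have run), and the checkpoint id, prompt string, and image path are placeholders rather than values taken from this repo.

```python
# Minimal sketch, not an official example: assumes the plugin module above has
# been imported so ModelRegistry.register_model() and
# patch_vllm_chat_placeholder() have already executed.
from PIL import Image
from vllm import LLM, SamplingParams

llm = LLM(model="rednote-hilab/dots.ocr", trust_remote_code=True)  # hypothetical invocation

# Prompt follows the chat template and the patched image placeholder.
prompt = "<|user|><|img|><|imgpad|><|endofimg|>Parse the layout of this page.<|endofuser|><|assistant|>"
image = Image.open("page.png")  # placeholder path

outputs = llm.generate(
    {"prompt": prompt, "multi_modal_data": {"image": image}},
    SamplingParams(temperature=0.0, max_tokens=1024),
)
print(outputs[0].outputs[0].text)
```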
modeling_dots_vision.py ADDED
@@ -0,0 +1,405 @@
1
+ import math
2
+
3
+ import torch
4
+ import torch.nn as nn
5
+ import torch.nn.functional as F
6
+ import torch.utils.checkpoint
7
+ from flash_attn import flash_attn_varlen_func
8
+ from torch.nn import LayerNorm
9
+ from transformers.modeling_utils import PreTrainedModel
10
+ from .configuration_dots import DotsVisionConfig
11
+
12
+
13
+ def rotate_half(x):
14
+ """Rotates half the hidden dims of the input."""
15
+ x1 = x[..., : x.shape[-1] // 2]
16
+ x2 = x[..., x.shape[-1] // 2 :]
17
+ return torch.cat((-x2, x1), dim=-1)
18
+
19
+
20
+ def apply_rotary_pos_emb_vision(tensor: torch.Tensor, freqs: torch.Tensor) -> torch.Tensor:
21
+ orig_dtype = tensor.dtype
22
+ tensor = tensor.float()
23
+
24
+ cos = freqs.cos()
25
+ sin = freqs.sin()
26
+
27
+ cos = cos.unsqueeze(1).repeat(1, 1, 2).unsqueeze(0).float()
28
+ sin = sin.unsqueeze(1).repeat(1, 1, 2).unsqueeze(0).float()
29
+
30
+ output = (tensor * cos) + (rotate_half(tensor) * sin)
31
+
32
+ output = output.to(orig_dtype)
33
+
34
+ return output
35
+
36
+
37
+ class VisionRotaryEmbedding(nn.Module):
38
+ def __init__(self, dim: int, theta: float = 10000.0) -> None:
39
+ super().__init__()
40
+ inv_freq = 1.0 / (theta ** (torch.arange(0, dim, 2, dtype=torch.float) / dim))
41
+ self.register_buffer("inv_freq", inv_freq, persistent=False)
42
+
43
+ def forward(self, seqlen: int) -> torch.Tensor:
44
+ seq = torch.arange(seqlen, device=self.inv_freq.device, dtype=self.inv_freq.dtype)
45
+ freqs = torch.outer(seq, self.inv_freq)
46
+ return freqs
47
+
48
+
49
+ class PatchMerger(nn.Module):
50
+ def __init__(
51
+ self,
52
+ dim: int,
53
+ context_dim: int,
54
+ spatial_merge_size: int = 2,
55
+ pre_norm="layernorm",
56
+ init_merger_std=None,
57
+ ) -> None:
58
+ super().__init__()
59
+ self.hidden_size = context_dim * (spatial_merge_size**2)
60
+ self.pre_norm = pre_norm
61
+ if self.pre_norm == "layernorm":
62
+ self.ln_q = LayerNorm(context_dim, eps=1e-6)
63
+ elif self.pre_norm == "rmsnorm":
64
+ self.ln_q = RMSNorm(context_dim, eps=1e-6)
65
+ else:
66
+ print("no norm in patch merger")
67
+
68
+ self.mlp = nn.Sequential(
69
+ nn.Linear(self.hidden_size, self.hidden_size),
70
+ nn.GELU(),
71
+ nn.Linear(self.hidden_size, dim),
72
+ )
73
+
74
+ if init_merger_std is not None:
75
+ nn.init.normal_(self.mlp[0].weight, mean=0.0, std=init_merger_std)
76
+ nn.init.zeros_(self.mlp[0].bias)
77
+ nn.init.normal_(self.mlp[2].weight, mean=0.0, std=init_merger_std)
78
+ nn.init.zeros_(self.mlp[2].bias)
79
+
80
+ def forward(self, x: torch.Tensor) -> torch.Tensor:
81
+ if self.pre_norm:
82
+ x = self.mlp(self.ln_q(x).view(-1, self.hidden_size))
83
+ else:
84
+ x = self.mlp(x.view(-1, self.hidden_size))
85
+ return x
86
+
87
+
88
+ class VisionAttention(nn.Module):
89
+ def __init__(self, config, dim: int, num_heads: int = 16, bias=True) -> None:
90
+ super().__init__()
91
+ self.num_heads = num_heads
92
+ self.head_dim = dim // num_heads
93
+ self.qkv = nn.Linear(dim, dim * 3, bias=bias)
94
+ self.proj = nn.Linear(dim, dim, bias=bias)
95
+
96
+ def forward(
97
+ self,
98
+ hidden_states: torch.Tensor,
99
+ cu_seqlens: torch.Tensor,
100
+ rotary_pos_emb: torch.Tensor = None,
101
+ ) -> torch.Tensor:
102
+ seq_length = hidden_states.shape[0]
103
+
104
+ q, k, v = self.qkv(hidden_states).reshape(seq_length, 3, self.num_heads, -1).permute(1, 0, 2, 3).unbind(0)
105
+ q = apply_rotary_pos_emb_vision(q.unsqueeze(0), rotary_pos_emb).squeeze(0)
106
+ k = apply_rotary_pos_emb_vision(k.unsqueeze(0), rotary_pos_emb).squeeze(0)
107
+
108
+ attention_mask = torch.full(
109
+ [1, seq_length, seq_length], torch.finfo(q.dtype).min, device=q.device, dtype=q.dtype
110
+ )
111
+ for i in range(1, len(cu_seqlens)):
112
+ attention_mask[..., cu_seqlens[i - 1] : cu_seqlens[i], cu_seqlens[i - 1] : cu_seqlens[i]] = 0
113
+
114
+ q = q.transpose(0, 1)
115
+ k = k.transpose(0, 1)
116
+ v = v.transpose(0, 1)
117
+ attn_weights = torch.matmul(q, k.transpose(1, 2)) / math.sqrt(self.head_dim)
118
+ attn_weights = attn_weights + attention_mask
119
+ attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(q.dtype)
120
+ attn_output = torch.matmul(attn_weights, v)
121
+ attn_output = attn_output.transpose(0, 1)
122
+ attn_output = attn_output.reshape(seq_length, -1)
123
+ attn_output = self.proj(attn_output)
124
+ return attn_output
125
+
126
+
127
+ class VisionFlashAttention2(nn.Module):
128
+ def __init__(self, config, dim: int, num_heads: int = 16, bias=True) -> None:
129
+ super().__init__()
130
+ self.num_heads = num_heads
131
+ self.qkv = nn.Linear(dim, dim * 3, bias=bias)
132
+ self.proj = nn.Linear(dim, dim, bias=bias)
133
+ self.config = config
134
+ self.is_causal = config.is_causal
135
+
136
+ def forward(
137
+ self,
138
+ hidden_states: torch.Tensor,
139
+ cu_seqlens: torch.Tensor,
140
+ rotary_pos_emb: torch.Tensor = None,
141
+ ) -> torch.Tensor:
142
+ seq_length = hidden_states.shape[0]
143
+ q, k, v = (
144
+ self.qkv(hidden_states).reshape(seq_length, 3, self.num_heads, -1).permute(1, 0, 2, 3).unbind(0)
145
+ ) # 'shd'
146
+ q = apply_rotary_pos_emb_vision(q.unsqueeze(0), rotary_pos_emb).squeeze(0)
147
+ k = apply_rotary_pos_emb_vision(k.unsqueeze(0), rotary_pos_emb).squeeze(0)
148
+ max_seqlen = (cu_seqlens[1:] - cu_seqlens[:-1]).max().item()
149
+ attn_output = flash_attn_varlen_func(
150
+ q, k, v, cu_seqlens, cu_seqlens, max_seqlen, max_seqlen, causal=self.is_causal
151
+ ).reshape(seq_length, -1)
152
+ attn_output = self.proj(attn_output)
153
+
154
+ return attn_output
155
+
156
+
157
+ class VisionSdpaAttention(nn.Module):
158
+ def __init__(self, config, dim: int, num_heads: int = 16, bias=True) -> None:
159
+ super().__init__()
160
+ self.num_heads = num_heads
161
+ self.qkv = nn.Linear(dim, dim * 3, bias=bias)
162
+ self.proj = nn.Linear(dim, dim, bias=bias)
163
+ self.config = config
164
+
165
+ def forward(
166
+ self,
167
+ hidden_states: torch.Tensor,
168
+ cu_seqlens: torch.Tensor,
169
+ rotary_pos_emb: torch.Tensor = None,
170
+ ) -> torch.Tensor:
171
+ seq_length = hidden_states.shape[0]
172
+ q, k, v = self.qkv(hidden_states).reshape(seq_length, 3, self.num_heads, -1).permute(1, 0, 2, 3).unbind(0)
173
+
174
+ q = apply_rotary_pos_emb_vision(q.unsqueeze(0), rotary_pos_emb).squeeze(0)
175
+ k = apply_rotary_pos_emb_vision(k.unsqueeze(0), rotary_pos_emb).squeeze(0)
176
+
177
+ attention_mask = torch.zeros([1, seq_length, seq_length], device=q.device, dtype=torch.bool)
178
+ for i in range(1, len(cu_seqlens)):
179
+ attention_mask[..., cu_seqlens[i - 1] : cu_seqlens[i], cu_seqlens[i - 1] : cu_seqlens[i]] = True
180
+
181
+ q = q.transpose(0, 1)
182
+ k = k.transpose(0, 1)
183
+ v = v.transpose(0, 1)
184
+
185
+ attn_output = F.scaled_dot_product_attention(q, k, v, attention_mask, dropout_p=0.0)
186
+ attn_output = attn_output.transpose(0, 1)
187
+ attn_output = attn_output.reshape(seq_length, -1)
188
+
189
+ attn_output = self.proj(attn_output)
190
+ return attn_output
191
+
192
+
193
+ DOTS_VISION_ATTENTION_CLASSES = {
194
+ "eager": VisionAttention,
195
+ "flash_attention_2": VisionFlashAttention2,
196
+ "sdpa": VisionSdpaAttention,
197
+ }
198
+
199
+
200
+ class RMSNorm(nn.Module):
201
+ def __init__(self, dim: int, eps: float = 1e-6):
202
+ super().__init__()
203
+ self.weight = nn.Parameter(torch.ones(dim))
204
+ self.eps = eps
205
+
206
+ def forward(self, x: torch.Tensor) -> torch.Tensor:
207
+ output = self._norm(x.float()).type_as(x)
208
+ return output * self.weight
209
+
210
+ def extra_repr(self) -> str:
211
+ return f"{tuple(self.weight.shape)}, eps={self.eps}"
212
+
213
+ def _norm(self, x: torch.Tensor) -> torch.Tensor:
214
+ return x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + self.eps)
215
+
216
+
217
+ class DotsSwiGLUFFN(nn.Module):
218
+ def __init__(self, config):
219
+ super().__init__()
220
+ hidden_features = config.intermediate_size
221
+ in_features = config.embed_dim
222
+ bias = config.use_bias
223
+
224
+ self.fc1 = nn.Linear(in_features, hidden_features, bias=bias)
225
+ self.fc2 = nn.Linear(hidden_features, in_features, bias=bias)
226
+ self.fc3 = nn.Linear(in_features, hidden_features, bias=bias)
227
+
228
+ def forward(self, x: torch.Tensor) -> torch.Tensor:
229
+ x = F.silu(self.fc1(x)) * self.fc3(x)
230
+ x = self.fc2(x)
231
+ return x
232
+
233
+
234
+
235
+ class DotsPatchEmbed(nn.Module):
236
+ def __init__(self, config):
237
+ super().__init__()
238
+ self.num_channels = config.num_channels
239
+ self.patch_size = config.patch_size
240
+ self.temporal_patch_size = config.temporal_patch_size
241
+ self.embed_dim = config.embed_dim
242
+ self.config = config
243
+ self.proj = nn.Conv2d(
244
+ config.num_channels,
245
+ config.embed_dim,
246
+ kernel_size=(config.patch_size, config.patch_size),
247
+ stride=(config.patch_size, config.patch_size),
248
+ )
249
+ self.norm = RMSNorm(config.embed_dim, eps=config.rms_norm_eps)
250
+
251
+ def forward(self, x: torch.Tensor, grid_thw=None) -> torch.Tensor:
252
+ x = x.view(-1, self.num_channels, self.temporal_patch_size, self.patch_size, self.patch_size)[:, :, 0]
253
+ x = self.proj(x).view(-1, self.embed_dim)
254
+ x = self.norm(x)
255
+ return x
256
+
257
+
258
+ class DotsViTPreprocessor(nn.Module):
259
+ def __init__(self, config):
260
+ super().__init__()
261
+ self.patch_h = config.patch_size
262
+ self.patch_w = config.patch_size
263
+ self.embed_dim = config.embed_dim
264
+ self.config = config
265
+ self.patchifier = DotsPatchEmbed(config)
266
+
267
+ def forward(self, x: torch.Tensor, grid_thw=None) -> torch.Tensor:
268
+ tokens = self.patchifier(x, grid_thw)
269
+ return tokens
270
+
271
+
272
+ class DotsVisionBlock(nn.Module):
273
+ def __init__(self, config, attn_implementation: str = "flash_attention_2"):
274
+ super().__init__()
275
+ self.attn = DOTS_VISION_ATTENTION_CLASSES[attn_implementation](
276
+ config, config.embed_dim, num_heads=config.num_attention_heads, bias=config.use_bias
277
+ )
278
+ self.norm1 = RMSNorm(config.embed_dim, eps=config.rms_norm_eps)
279
+ self.mlp = DotsSwiGLUFFN(config)
280
+ self.norm2 = RMSNorm(config.embed_dim, eps=config.rms_norm_eps)
281
+
282
+ def forward(self, hidden_states, cu_seqlens, rotary_pos_emb) -> torch.Tensor:
283
+ hidden_states = hidden_states + self.attn(
284
+ self.norm1(hidden_states), cu_seqlens=cu_seqlens, rotary_pos_emb=rotary_pos_emb
285
+ )
286
+ hidden_states = hidden_states + self.mlp(self.norm2(hidden_states))
287
+ return hidden_states
288
+
289
+
290
+ class DotsVisionTransformer(PreTrainedModel):
291
+ def __init__(self, config: DotsVisionConfig) -> None:
292
+ super().__init__(config)
293
+ self.config = config
294
+ self.spatial_merge_size = config.spatial_merge_size
295
+
296
+ self.patch_embed = DotsViTPreprocessor(config)
297
+ self._init_weights(self.patch_embed.patchifier.proj)
298
+
299
+ head_dim = config.embed_dim // config.num_attention_heads
300
+
301
+ self.rotary_pos_emb = VisionRotaryEmbedding(head_dim // 2)
302
+
303
+ _num_hidden_layers = config.num_hidden_layers
304
+ self.blocks = nn.ModuleList(
305
+ [DotsVisionBlock(config, config.attn_implementation) for _ in range(_num_hidden_layers)]
306
+ )
307
+
308
+ if self.config.post_norm:
309
+ self.post_trunk_norm = RMSNorm(config.embed_dim, eps=config.rms_norm_eps)
310
+
311
+ self.merger = PatchMerger(
312
+ dim=config.hidden_size,
313
+ context_dim=config.embed_dim,
314
+ spatial_merge_size=config.spatial_merge_size,
315
+ init_merger_std=self.config.init_merger_std,
316
+ )
317
+
318
+ self.gradient_checkpointing = False
319
+ self._gradient_checkpointing_func = torch.utils.checkpoint.checkpoint
320
+
321
+ def _init_weights(self, module):
322
+ std = self.config.initializer_range
323
+ if isinstance(module, (nn.Linear, nn.Conv3d)):
324
+ module.weight.data.normal_(mean=0.0, std=std)
325
+ if module.bias is not None:
326
+ module.bias.data.zero_()
327
+ elif isinstance(module, nn.Embedding):
328
+ module.weight.data.normal_(mean=0.0, std=std)
329
+ if module.padding_idx is not None:
330
+ module.weight.data[module.padding_idx].zero_()
331
+
332
+ @property
333
+ def dtype(self) -> torch.dtype:
334
+ return self.blocks[0].mlp.fc2.weight.dtype
335
+
336
+ @property
337
+ def device(self) -> torch.device:
338
+ return self.blocks[0].mlp.fc2.weight.device
339
+
340
+ def get_pos_ids_by_grid(self, grid_thw):
341
+ pos_ids = []
342
+ for t, h, w in grid_thw:
343
+ hpos_ids = torch.arange(h).unsqueeze(1).expand(-1, w)
344
+ hpos_ids = hpos_ids.reshape(
345
+ h // self.spatial_merge_size,
346
+ self.spatial_merge_size,
347
+ w // self.spatial_merge_size,
348
+ self.spatial_merge_size,
349
+ )
350
+ hpos_ids = hpos_ids.permute(0, 2, 1, 3)
351
+ hpos_ids = hpos_ids.flatten()
352
+
353
+ wpos_ids = torch.arange(w).unsqueeze(0).expand(h, -1)
354
+ wpos_ids = wpos_ids.reshape(
355
+ h // self.spatial_merge_size,
356
+ self.spatial_merge_size,
357
+ w // self.spatial_merge_size,
358
+ self.spatial_merge_size,
359
+ )
360
+ wpos_ids = wpos_ids.permute(0, 2, 1, 3)
361
+ wpos_ids = wpos_ids.flatten()
362
+ pos_ids.append(
363
+ torch.stack([hpos_ids, wpos_ids], dim=-1).repeat(t, 1)
364
+ )
365
+
366
+ return pos_ids
367
+
368
+ def rot_pos_emb(self, grid_thw):
369
+ pos_ids = self.get_pos_ids_by_grid(grid_thw)
370
+ pos_ids = torch.cat(pos_ids, dim=0)
371
+ max_grid_size = grid_thw[:, 1:].max()
372
+ rotary_pos_emb_full = self.rotary_pos_emb(max_grid_size)
373
+ rotary_pos_emb = rotary_pos_emb_full[pos_ids].flatten(1)
374
+ return rotary_pos_emb
375
+
376
+ def forward(self, hidden_states: torch.Tensor, grid_thw: torch.Tensor, bf16=True) -> torch.Tensor:
377
+ if bf16:
378
+ hidden_states = hidden_states.bfloat16()
379
+ hidden_states = self.patch_embed(hidden_states, grid_thw)
380
+
381
+ rotary_pos_emb = self.rot_pos_emb(grid_thw)
382
+
383
+ cu_seqlens = torch.repeat_interleave(grid_thw[:, 1] * grid_thw[:, 2], grid_thw[:, 0]).cumsum(
384
+ dim=0,
385
+ dtype=grid_thw.dtype if torch.jit.is_tracing() else torch.int32,
386
+ )
387
+ cu_seqlens = F.pad(cu_seqlens, (1, 0), value=0)
388
+
389
+ for blk in self.blocks:
390
+ if self.gradient_checkpointing and self.training:
391
+ hidden_states = self._gradient_checkpointing_func(
392
+ blk.__call__,
393
+ hidden_states,
394
+ cu_seqlens,
395
+ rotary_pos_emb,
396
+ use_reentrant=(self.config.ckpt_use_reentrant or self.config.ve_ckpt_use_reentrant),
397
+ )
398
+ else:
399
+ hidden_states = blk(hidden_states, cu_seqlens=cu_seqlens, rotary_pos_emb=rotary_pos_emb)
400
+
401
+ if self.config.post_norm:
402
+ hidden_states = self.post_trunk_norm(hidden_states)
403
+
404
+ hidden_states = self.merger(hidden_states)
405
+ return hidden_states
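
`DotsVisionTransformer.forward` consumes flattened patches together with a `grid_thw` tensor of per-image `(t, h, w)` patch counts and derives `cu_seqlens` so that attention never crosses image boundaries. The sketch below only reproduces that bookkeeping with made-up grid sizes; it is illustrative, not code from this repo.

```python
# Illustrative shape check for the cu_seqlens bookkeeping used above.
import torch
import torch.nn.functional as F

grid_thw = torch.tensor([[1, 32, 24],   # image 1: 32 x 24 patches (made-up size)
                         [1, 16, 16]])  # image 2: 16 x 16 patches (made-up size)

seq_lens = torch.repeat_interleave(grid_thw[:, 1] * grid_thw[:, 2], grid_thw[:, 0])
cu_seqlens = F.pad(seq_lens.cumsum(dim=0, dtype=torch.int32), (1, 0), value=0)

print(cu_seqlens)             # [0, 768, 1024] -> per-image attention blocks
print(cu_seqlens[-1].item())  # 1024 patches enter the transformer in total
# After the PatchMerger (spatial_merge_size=2), 1024 / 4 = 256 tokens come out.
```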
preprocessor_config.json ADDED
@@ -0,0 +1,19 @@
1
+ {
2
+ "min_pixels": 3136,
3
+ "max_pixels": 11289600,
4
+ "patch_size": 14,
5
+ "temporal_patch_size": 1,
6
+ "merge_size": 2,
7
+ "image_mean": [
8
+ 0.48145466,
9
+ 0.4578275,
10
+ 0.40821073
11
+ ],
12
+ "image_std": [
13
+ 0.26862954,
14
+ 0.26130258,
15
+ 0.27577711
16
+ ],
17
+ "image_processor_type": "Qwen2VLImageProcessor",
18
+ "processor_class": "DotsVLProcessor"
19
+ }
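
These processor settings determine the vision-token budget: images are resized to multiples of `patch_size * merge_size = 28`, cut into 14x14 patches, and each 2x2 block of patches is merged into one token. A rough back-of-the-envelope helper is sketched below; the function name and the rounding rule are approximations of the `smart_resize` logic referenced earlier in the diff, not code from this repo.

```python
# Approximate vision-token count from preprocessor_config.json values
# (patch_size=14, merge_size=2). The rounding here only approximates
# smart_resize; the real processor also enforces min_pixels / max_pixels.
def approx_num_vision_tokens(height: int, width: int,
                             patch_size: int = 14, merge_size: int = 2) -> int:
    factor = patch_size * merge_size                   # 28: resize to multiples of this
    h = max(factor, round(height / factor) * factor)   # snap height to the grid
    w = max(factor, round(width / factor) * factor)    # snap width to the grid
    grid_h, grid_w = h // patch_size, w // patch_size
    return (grid_h * grid_w) // (merge_size ** 2)      # each 2x2 patch block -> 1 token

print(approx_num_vision_tokens(1024, 768))  # 999 tokens for a 1024x768 page
```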
special_tokens_map.json ADDED
@@ -0,0 +1,25 @@
1
+ {
2
+ "additional_special_tokens": [
3
+ "<|im_start|>",
4
+ "<|im_end|>",
5
+ "<|object_ref_start|>",
6
+ "<|object_ref_end|>",
7
+ "<|box_start|>",
8
+ "<|box_end|>",
9
+ "<|quad_start|>",
10
+ "<|quad_end|>",
11
+ "<|vision_start|>",
12
+ "<|vision_end|>",
13
+ "<|vision_pad|>",
14
+ "<|image_pad|>",
15
+ "<|video_pad|>"
16
+ ],
17
+ "eos_token": {
18
+ "content": "<|endoftext|>",
19
+ "lstrip": false,
20
+ "normalized": false,
21
+ "rstrip": false,
22
+ "single_word": false
23
+ },
24
+ "pad_token": "[PAD]"
25
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,391 @@
1
+ {
2
+ "add_bos_token": false,
3
+ "add_prefix_space": false,
4
+ "added_tokens_decoder": {
5
+ "151643": {
6
+ "content": "<|endoftext|>",
7
+ "lstrip": false,
8
+ "normalized": false,
9
+ "rstrip": false,
10
+ "single_word": false,
11
+ "special": true
12
+ },
13
+ "151644": {
14
+ "content": "<|im_start|>",
15
+ "lstrip": false,
16
+ "normalized": false,
17
+ "rstrip": false,
18
+ "single_word": false,
19
+ "special": true
20
+ },
21
+ "151645": {
22
+ "content": "<|im_end|>",
23
+ "lstrip": false,
24
+ "normalized": false,
25
+ "rstrip": false,
26
+ "single_word": false,
27
+ "special": true
28
+ },
29
+ "151646": {
30
+ "content": "<|object_ref_start|>",
31
+ "lstrip": false,
32
+ "normalized": false,
33
+ "rstrip": false,
34
+ "single_word": false,
35
+ "special": true
36
+ },
37
+ "151647": {
38
+ "content": "<|object_ref_end|>",
39
+ "lstrip": false,
40
+ "normalized": false,
41
+ "rstrip": false,
42
+ "single_word": false,
43
+ "special": true
44
+ },
45
+ "151648": {
46
+ "content": "<|box_start|>",
47
+ "lstrip": false,
48
+ "normalized": false,
49
+ "rstrip": false,
50
+ "single_word": false,
51
+ "special": true
52
+ },
53
+ "151649": {
54
+ "content": "<|box_end|>",
55
+ "lstrip": false,
56
+ "normalized": false,
57
+ "rstrip": false,
58
+ "single_word": false,
59
+ "special": true
60
+ },
61
+ "151650": {
62
+ "content": "<|quad_start|>",
63
+ "lstrip": false,
64
+ "normalized": false,
65
+ "rstrip": false,
66
+ "single_word": false,
67
+ "special": true
68
+ },
69
+ "151651": {
70
+ "content": "<|quad_end|>",
71
+ "lstrip": false,
72
+ "normalized": false,
73
+ "rstrip": false,
74
+ "single_word": false,
75
+ "special": true
76
+ },
77
+ "151652": {
78
+ "content": "<|vision_start|>",
79
+ "lstrip": false,
80
+ "normalized": false,
81
+ "rstrip": false,
82
+ "single_word": false,
83
+ "special": true
84
+ },
85
+ "151653": {
86
+ "content": "<|vision_end|>",
87
+ "lstrip": false,
88
+ "normalized": false,
89
+ "rstrip": false,
90
+ "single_word": false,
91
+ "special": true
92
+ },
93
+ "151654": {
94
+ "content": "<|vision_pad|>",
95
+ "lstrip": false,
96
+ "normalized": false,
97
+ "rstrip": false,
98
+ "single_word": false,
99
+ "special": true
100
+ },
101
+ "151655": {
102
+ "content": "<|image_pad|>",
103
+ "lstrip": false,
104
+ "normalized": false,
105
+ "rstrip": false,
106
+ "single_word": false,
107
+ "special": true
108
+ },
109
+ "151656": {
110
+ "content": "<|video_pad|>",
111
+ "lstrip": false,
112
+ "normalized": false,
113
+ "rstrip": false,
114
+ "single_word": false,
115
+ "special": true
116
+ },
117
+ "151657": {
118
+ "content": "<tool_call>",
119
+ "lstrip": false,
120
+ "normalized": false,
121
+ "rstrip": false,
122
+ "single_word": false,
123
+ "special": false
124
+ },
125
+ "151658": {
126
+ "content": "</tool_call>",
127
+ "lstrip": false,
128
+ "normalized": false,
129
+ "rstrip": false,
130
+ "single_word": false,
131
+ "special": false
132
+ },
133
+ "151659": {
134
+ "content": "<|fim_prefix|>",
135
+ "lstrip": false,
136
+ "normalized": false,
137
+ "rstrip": false,
138
+ "single_word": false,
139
+ "special": false
140
+ },
141
+ "151660": {
142
+ "content": "<|fim_middle|>",
143
+ "lstrip": false,
144
+ "normalized": false,
145
+ "rstrip": false,
146
+ "single_word": false,
147
+ "special": false
148
+ },
149
+ "151661": {
150
+ "content": "<|fim_suffix|>",
151
+ "lstrip": false,
152
+ "normalized": false,
153
+ "rstrip": false,
154
+ "single_word": false,
155
+ "special": false
156
+ },
157
+ "151662": {
158
+ "content": "<|fim_pad|>",
159
+ "lstrip": false,
160
+ "normalized": false,
161
+ "rstrip": false,
162
+ "single_word": false,
163
+ "special": false
164
+ },
165
+ "151663": {
166
+ "content": "<|repo_name|>",
167
+ "lstrip": false,
168
+ "normalized": false,
169
+ "rstrip": false,
170
+ "single_word": false,
171
+ "special": false
172
+ },
173
+ "151664": {
174
+ "content": "<|file_sep|>",
175
+ "lstrip": false,
176
+ "normalized": false,
177
+ "rstrip": false,
178
+ "single_word": false,
179
+ "special": false
180
+ },
181
+ "151665": {
182
+ "content": "<|imgpad|>",
183
+ "lstrip": false,
184
+ "normalized": false,
185
+ "rstrip": false,
186
+ "single_word": false,
187
+ "special": true
188
+ },
189
+ "151666": {
190
+ "content": "<|img|>",
191
+ "lstrip": false,
192
+ "normalized": false,
193
+ "rstrip": false,
194
+ "single_word": false,
195
+ "special": true
196
+ },
197
+ "151667": {
198
+ "content": "<|endofimg|>",
199
+ "lstrip": false,
200
+ "normalized": false,
201
+ "rstrip": false,
202
+ "single_word": false,
203
+ "special": true
204
+ },
205
+ "151668": {
206
+ "content": "<|systemprompt|>",
207
+ "lstrip": false,
208
+ "normalized": false,
209
+ "rstrip": false,
210
+ "single_word": false,
211
+ "special": true
212
+ },
213
+ "151669": {
214
+ "content": "<|endofsystemprompt|>",
215
+ "lstrip": false,
216
+ "normalized": false,
217
+ "rstrip": false,
218
+ "single_word": false,
219
+ "special": true
220
+ },
221
+ "151670": {
222
+ "content": "<|user|>",
223
+ "lstrip": false,
224
+ "normalized": false,
225
+ "rstrip": false,
226
+ "single_word": false,
227
+ "special": true
228
+ },
229
+ "151671": {
230
+ "content": "<|endofuser|>",
231
+ "lstrip": false,
232
+ "normalized": false,
233
+ "rstrip": false,
234
+ "single_word": false,
235
+ "special": true
236
+ },
237
+ "151672": {
238
+ "content": "<|assistant|>",
239
+ "lstrip": false,
240
+ "normalized": false,
241
+ "rstrip": false,
242
+ "single_word": false,
243
+ "special": true
244
+ },
245
+ "151673": {
246
+ "content": "<|endofassistant|>",
247
+ "lstrip": false,
248
+ "normalized": false,
249
+ "rstrip": false,
250
+ "single_word": false,
251
+ "special": true
252
+ },
253
+ "151674": {
254
+ "content": "<|ref_start|>",
255
+ "lstrip": false,
256
+ "normalized": false,
257
+ "rstrip": false,
258
+ "single_word": false,
259
+ "special": true
260
+ },
261
+ "151675": {
262
+ "content": "<|ref_end|>",
263
+ "lstrip": false,
264
+ "normalized": false,
265
+ "rstrip": false,
266
+ "single_word": false,
267
+ "special": true
268
+ },
269
+ "151676": {
270
+ "content": "[SEP]",
271
+ "lstrip": false,
272
+ "normalized": false,
273
+ "rstrip": false,
274
+ "single_word": false,
275
+ "special": true
276
+ },
277
+ "151677": {
278
+ "content": "<|pic|>",
279
+ "lstrip": false,
280
+ "normalized": false,
281
+ "rstrip": false,
282
+ "single_word": false,
283
+ "special": true
284
+ },
285
+ "151678": {
286
+ "content": "<|text|>",
287
+ "lstrip": false,
288
+ "normalized": false,
289
+ "rstrip": false,
290
+ "single_word": false,
291
+ "special": true
292
+ },
293
+ "151679": {
294
+ "content": "<|pictotext|>",
295
+ "lstrip": false,
296
+ "normalized": false,
297
+ "rstrip": false,
298
+ "single_word": false,
299
+ "special": true
300
+ },
301
+ "151680": {
302
+ "content": "[PAD]",
303
+ "lstrip": false,
304
+ "normalized": false,
305
+ "rstrip": false,
306
+ "single_word": false,
307
+ "special": true
308
+ },
309
+ "151681": {
310
+ "content": "<|slice|>",
311
+ "lstrip": false,
312
+ "normalized": false,
313
+ "rstrip": false,
314
+ "single_word": false,
315
+ "special": true
316
+ },
317
+ "151682": {
318
+ "content": "<|endofslice|>",
319
+ "lstrip": false,
320
+ "normalized": false,
321
+ "rstrip": false,
322
+ "single_word": false,
323
+ "special": true
324
+ },
325
+ "151683": {
326
+ "content": "<|imgrowend|>",
327
+ "lstrip": false,
328
+ "normalized": false,
329
+ "rstrip": false,
330
+ "single_word": false,
331
+ "special": true
332
+ },
333
+ "151684": {
334
+ "content": "<|polygon_start|>",
335
+ "lstrip": false,
336
+ "normalized": false,
337
+ "rstrip": false,
338
+ "single_word": false,
339
+ "special": true
340
+ },
341
+ "151685": {
342
+ "content": "<|polygon_end|>",
343
+ "lstrip": false,
344
+ "normalized": false,
345
+ "rstrip": false,
346
+ "single_word": false,
347
+ "special": true
348
+ },
349
+ "151686": {
350
+ "content": "<|image_gen_start|>",
351
+ "lstrip": false,
352
+ "normalized": false,
353
+ "rstrip": false,
354
+ "single_word": false,
355
+ "special": true
356
+ },
357
+ "151687": {
358
+ "content": "<|image_gen_end|>",
359
+ "lstrip": false,
360
+ "normalized": false,
361
+ "rstrip": false,
362
+ "single_word": false,
363
+ "special": true
364
+ }
365
+ },
366
+ "additional_special_tokens": [
367
+ "<|im_start|>",
368
+ "<|im_end|>",
369
+ "<|object_ref_start|>",
370
+ "<|object_ref_end|>",
371
+ "<|box_start|>",
372
+ "<|box_end|>",
373
+ "<|quad_start|>",
374
+ "<|quad_end|>",
375
+ "<|vision_start|>",
376
+ "<|vision_end|>",
377
+ "<|vision_pad|>",
378
+ "<|image_pad|>",
379
+ "<|video_pad|>"
380
+ ],
381
+ "bos_token": null,
382
+ "chat_template": "{%- for m in messages %}\n {%- if m.role == 'system' %}\n {{- '<|system|>' + m.content + '<|endofsystem|>\\n' }}\n {%- elif m.role == 'user' %}\n {{- '<|user|>' + m.content + '<|endofuser|>' }}\n {%- elif m.role == 'assistant' %}\n {{- '<|assistant|>' + m.content }}\n {%- if not loop.last %}\n {{- '<|endofassistant|>' }}\n {%- endif %}\n {%- endif %}\n{%- endfor %}\n{%- if messages[-1].role != 'assistant' %}\n {{- '<|assistant|>' }}\n{%- endif %}",
383
+ "clean_up_tokenization_spaces": false,
384
+ "eos_token": "<|endoftext|>",
385
+ "errors": "replace",
386
+ "model_max_length": 131072,
387
+ "pad_token": "[PAD]",
388
+ "split_special_tokens": false,
389
+ "tokenizer_class": "Qwen2Tokenizer",
390
+ "unk_token": null
391
+ }
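
The `chat_template` above wraps each turn in `<|user|>...<|endofuser|>` and `<|assistant|>...<|endofassistant|>` markers and appends a bare `<|assistant|>` when the conversation does not already end with an assistant turn. A hedged rendering example follows; the checkpoint id and message content are placeholders.

```python
# Hypothetical rendering of the chat_template defined above.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("rednote-hilab/dots.ocr", trust_remote_code=True)

messages = [
    {"role": "user", "content": "<|img|><|imgpad|><|endofimg|>Extract all tables as HTML."},
]
text = tok.apply_chat_template(messages, tokenize=False)
print(text)
# '<|user|><|img|><|imgpad|><|endofimg|>Extract all tables as HTML.<|endofuser|><|assistant|>'
```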
vocab.json ADDED
The diff for this file is too large to render. See raw diff