cpatonn committed (verified)
Commit 3fe091d · 1 parent: b30e723

Upload folder using huggingface_hub
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,144 @@
---
license: mit
language:
- zh
- en
base_model:
- zai-org/GLM-4.5V
pipeline_tag: image-text-to-text
library_name: transformers
---

# GLM-4.5V-AWQ-8bit

## Method
Quantization was performed with [vllm-project/llm-compressor](https://github.com/vllm-project/llm-compressor.git), using [neuralmagic/calibration](https://huggingface.co/datasets/neuralmagic/calibration) as the calibration dataset.
For the full quantization configuration, see [config.json](https://huggingface.co/cpatonn/GLM-4.5V-AWQ-8bit/blob/main/config.json) and the `recipe.yaml` included in this repository.

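The quantization run itself is not scripted in this card. As a rough, hypothetical sketch, the included `recipe.yaml` could be applied with llm-compressor's `oneshot` entry point along the following lines; the argument names, calibration settings, and any extra multimodal preprocessing (processor, data collator) are assumptions rather than the exact command used.

```python
# Hypothetical sketch only: applies the AWQ int8 (group size 32) recipe shipped in this
# repository. The oneshot arguments and calibration settings below are assumptions.
from llmcompressor import oneshot

oneshot(
    model="zai-org/GLM-4.5V",           # base model listed in this card
    dataset="neuralmagic/calibration",  # calibration dataset named above
    recipe="recipe.yaml",               # AWQModifier recipe included in this repository
    max_seq_length=2048,                # assumed calibration sequence length
    num_calibration_samples=256,        # assumed number of calibration samples
    output_dir="GLM-4.5V-AWQ-8bit",
)
```
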
## Inference

### Prerequisite
Install the latest vLLM nightly:
```shell
pip install -U vllm \
    --pre \
    --extra-index-url https://wheels.vllm.ai/nightly
```

### vLLM
Load the model in vLLM (or SGLang) with `float16` dtype for AWQ support, and use `tensor_parallel_size <= 2`, e.g.:
```shell
vllm serve cpatonn/GLM-4.5V-AWQ-8bit \
    --dtype float16 \
    --tensor-parallel-size 2 \
    --pipeline-parallel-size 2 \
    --tool-call-parser glm45 \
    --reasoning-parser glm45 \
    --enable-auto-tool-choice \
    --served-model-name glm-4.5v \
    --allowed-local-media-path / \
    --media-io-kwargs '{"video": {"num_frames": -1}}'
```

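Once the server is running, it exposes an OpenAI-compatible API. A minimal sketch of an image query, assuming vLLM's default endpoint `http://localhost:8000/v1` and the `glm-4.5v` name set via `--served-model-name`:

```python
# Minimal sketch: query the server started above through its OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="glm-4.5v",  # matches --served-model-name above
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://upload.wikimedia.org/wikipedia/commons/f/fa/Grayscale_8bits_palette_sample_image.png"
                    },
                },
                {"type": "text", "text": "Describe this image."},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```
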
# GLM-4.5V

<div align="center">
<img src="https://raw.githubusercontent.com/zai-org/GLM-V/refs/heads/main/resources/logo.svg" width="40%"/>
</div>
<p align="center">
👋 Join our <a href="https://discord.com/invite/8cnQKdAprg" target="_blank">Discord</a> communities.
<br>
📖 Check out the <a href="https://github.com/zai-org/GLM-V/blob/main/resources/GLM-4.5V_technical_report.pdf" target="_blank">paper</a>.
<br>
📍 Access the GLM-V series models via API on the <a href="https://docs.z.ai/guides/vlm/glm-4.5v">ZhipuAI Open Platform</a>.
</p>

## Introduction

Vision-language models (VLMs) have become a key cornerstone of intelligent systems. As real-world AI tasks grow increasingly complex, VLMs urgently need to enhance reasoning capabilities beyond basic multimodal perception — improving accuracy, comprehensiveness, and intelligence — to enable complex problem solving, long-context understanding, and multimodal agents.

Through our open-source work, we aim to explore the technological frontier together with the community while empowering more developers to create exciting and innovative applications.

GLM-4.5V is based on ZhipuAI’s next-generation flagship text foundation model GLM-4.5-Air (106B parameters, 12B active). It continues the technical approach of GLM-4.1V-Thinking, achieving SOTA performance among models of the same scale on 42 public vision-language benchmarks. It covers common tasks such as image, video, and document understanding, as well as GUI agent operations.

![bench_45](https://raw.githubusercontent.com/zai-org/GLM-V/refs/heads/main/resources/bench_45v.jpeg)

Beyond benchmark performance, GLM-4.5V focuses on real-world usability. Through efficient hybrid training, it can handle diverse types of visual content, enabling full-spectrum vision reasoning, including:
- **Image reasoning** (scene understanding, complex multi-image analysis, spatial recognition)
- **Video understanding** (long video segmentation and event recognition)
- **GUI tasks** (screen reading, icon recognition, desktop operation assistance)
- **Complex chart & long document parsing** (research report analysis, information extraction)
- **Grounding** (precise visual element localization)

The model also introduces a **Thinking Mode** switch, allowing users to balance between quick responses and deep reasoning. This switch works the same way as in the `GLM-4.5` language model; a usage sketch follows below.

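As a hedged illustration (not part of the original card): the `chat_template.jinja` shipped in this commit checks an `enable_thinking` flag and a trailing `/nothink` marker, so thinking can presumably be disabled when the prompt is built. Whether your transformers version forwards extra keyword arguments to the template should be verified.

```python
# Sketch only: relies on the enable_thinking check in this repo's chat_template.jinja.
# `processor` and `messages` are defined as in the Quick Start example below.
inputs = processor.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt",
    enable_thinking=False,  # skip the <think>...</think> reasoning block
)
```
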
## Quick Start

Using with transformers:

```shell
pip install transformers-v4.55.0-GLM-4.5V-preview
```

and then run:

```python
from transformers import AutoProcessor, Glm4vMoeForConditionalGeneration
import torch

MODEL_PATH = "zai-org/GLM-4.5V"
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "url": "https://upload.wikimedia.org/wikipedia/commons/f/fa/Grayscale_8bits_palette_sample_image.png"
            },
            {
                "type": "text",
                "text": "describe this image"
            }
        ],
    }
]
processor = AutoProcessor.from_pretrained(MODEL_PATH)
model = Glm4vMoeForConditionalGeneration.from_pretrained(
    pretrained_model_name_or_path=MODEL_PATH,
    torch_dtype="auto",
    device_map="auto",
)
inputs = processor.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt"
).to(model.device)
inputs.pop("token_type_ids", None)
generated_ids = model.generate(**inputs, max_new_tokens=8192)
output_text = processor.decode(generated_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=False)
print(output_text)
```

The special tokens `<|begin_of_box|>` and `<|end_of_box|>` in the response mark the answer's bounding box in the image. The bounding box is given as four numbers, for example `[x1, y1, x2, y2]`, where `(x1, y1)` is the top-left corner and `(x2, y2)` is the bottom-right corner. The bracket style may vary (`[]`, `[[]]`, `()`, `<>`, etc.), but the meaning is the same: it encloses the coordinates of the box. These coordinates are relative values between 0 and 1000, normalized to the image size; a small parsing helper is sketched below.

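A small helper (added here for illustration, not part of the original card) for pulling the first box out of the decoded output and rescaling it to pixel coordinates:

```python
import re

def extract_box(text: str, image_width: int, image_height: int):
    """Return the first predicted box as (x1, y1, x2, y2) in pixels, or None."""
    match = re.search(r"<\|begin_of_box\|>(.*?)<\|end_of_box\|>", text, re.S)
    if match is None:
        return None
    # The bracket style may vary ([], [[]], (), <>), so just pull out the numbers.
    coords = [float(v) for v in re.findall(r"\d+(?:\.\d+)?", match.group(1))[:4]]
    if len(coords) != 4:
        return None
    x1, y1, x2, y2 = coords
    # Coordinates are relative values in [0, 1000], normalized to the image size.
    return (x1 / 1000 * image_width, y1 / 1000 * image_height,
            x2 / 1000 * image_width, y2 / 1000 * image_height)
```
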
For more usage examples and code, please visit our [GitHub](https://github.com/zai-org/GLM-V/).

## Citation

If you use this model, please cite the following paper:

```bibtex
@misc{glmvteam2025glm41vthinkingversatilemultimodalreasoning,
      title={GLM-4.1V-Thinking: Towards Versatile Multimodal Reasoning with Scalable Reinforcement Learning},
      author={GLM-V Team and Wenyi Hong and Wenmeng Yu and Xiaotao Gu and Guo Wang and Guobing Gan and Haomiao Tang and Jiale Cheng and Ji Qi and Junhui Ji and Lihang Pan and Shuaiqi Duan and Weihan Wang and Yan Wang and Yean Cheng and Zehai He and Zhe Su and Zhen Yang and Ziyang Pan and Aohan Zeng and Baoxu Wang and Boyan Shi and Changyu Pang and Chenhui Zhang and Da Yin and Fan Yang and Guoqing Chen and Jiazheng Xu and Jiali Chen and Jing Chen and Jinhao Chen and Jinghao Lin and Jinjiang Wang and Junjie Chen and Leqi Lei and Letian Gong and Leyi Pan and Mingzhi Zhang and Qinkai Zheng and Sheng Yang and Shi Zhong and Shiyu Huang and Shuyuan Zhao and Siyan Xue and Shangqin Tu and Shengbiao Meng and Tianshu Zhang and Tianwei Luo and Tianxiang Hao and Wenkai Li and Wei Jia and Xin Lyu and Xuancheng Huang and Yanling Wang and Yadong Xue and Yanfeng Wang and Yifan An and Yifan Du and Yiming Shi and Yiheng Huang and Yilin Niu and Yuan Wang and Yuanchang Yue and Yuchen Li and Yutao Zhang and Yuxuan Zhang and Zhanxiao Du and Zhenyu Hou and Zhao Xue and Zhengxiao Du and Zihan Wang and Peng Zhang and Debing Liu and Bin Xu and Juanzi Li and Minlie Huang and Yuxiao Dong and Jie Tang},
      year={2025},
      eprint={2507.01006},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2507.01006},
}
```

chat_template.jinja ADDED
@@ -0,0 +1,118 @@
1
+ [gMASK]<sop>
2
+ {%- if tools -%}
3
+ <|system|>
4
+ # Tools
5
+
6
+ You may call one or more functions to assist with the user query.
7
+
8
+ You are provided with function signatures within <tools></tools> XML tags:
9
+ <tools>
10
+ {% for tool in tools %}
11
+ {{ tool | tojson(ensure_ascii=False) }}
12
+ {% endfor %}
13
+ </tools>
14
+
15
+ For each function call, output the function name and arguments within the following XML format:
16
+ <tool_call>{function-name}
17
+ <arg_key>{arg-key-1}</arg_key>
18
+ <arg_value>{arg-value-1}</arg_value>
19
+ <arg_key>{arg-key-2}</arg_key>
20
+ <arg_value>{arg-value-2}</arg_value>
21
+ ...
22
+ </tool_call>{%- endif -%}
23
+ {%- macro visible_text(content) -%}
24
+ {%- if content is string -%}
25
+ {{- content }}
26
+ {%- elif content is iterable and content is not mapping -%}
27
+ {%- for item in content -%}
28
+ {%- if item is mapping and item.type == 'text' -%}
29
+ {{- item.text }}
30
+ {%- elif item is mapping and (item.type == 'image' or 'image' in item) -%}
31
+ <|begin_of_image|><|image|><|end_of_image|>
32
+ {%- elif item is mapping and (item.type == 'video' or 'video' in item) -%}
33
+ <|begin_of_video|><|video|><|end_of_video|>
34
+ {%- elif item is string -%}
35
+ {{- item }}
36
+ {%- endif -%}
37
+ {%- endfor -%}
38
+ {%- else -%}
39
+ {{- content }}
40
+ {%- endif -%}
41
+ {%- endmacro -%}
42
+ {%- set ns = namespace(last_user_index=-1) %}
43
+ {%- for m in messages %}
44
+ {%- if m.role == 'user' %}
45
+ {% set ns.last_user_index = loop.index0 -%}
46
+ {%- endif %}
47
+ {%- endfor %}
48
+ {% for m in messages %}
49
+ {%- if m.role == 'user' -%}<|user|>
50
+ {% if m.content is string %}
51
+ {{ m.content }}
52
+ {%- else %}
53
+ {%- for item in m.content %}
54
+ {% if item.type == 'video' or 'video' in item %}
55
+ <|begin_of_video|><|video|><|end_of_video|>{% elif item.type == 'image' or 'image' in item %}
56
+ <|begin_of_image|><|image|><|end_of_image|>{% elif item.type == 'text' %}
57
+ {{ item.text }}
58
+ {%- endif %}
59
+ {%- endfor %}
60
+ {%- endif %}
61
+ {{- '/nothink' if (enable_thinking is defined and not enable_thinking and not visible_text(m.content).endswith("/nothink")) else '' -}}
62
+ {%- elif m.role == 'assistant' -%}
63
+ <|assistant|>
64
+ {%- set reasoning_content = '' %}
65
+ {%- set content = visible_text(m.content) %}
66
+ {%- if m.reasoning_content is string %}
67
+ {%- set reasoning_content = m.reasoning_content %}
68
+ {%- else %}
69
+ {%- if '</think>' in content %}
70
+ {%- set reasoning_content = content.split('</think>')[0].rstrip('\n').split('<think>')[-1].lstrip('\n') %}
71
+ {%- set content = content.split('</think>')[-1].lstrip('\n') %}
72
+ {%- endif %}
73
+ {%- endif %}
74
+ {%- if loop.index0 > ns.last_user_index and reasoning_content -%}
75
+ {{ '\n<think>' + reasoning_content.strip() + '</think>'}}
76
+ {%- else -%}
77
+ {{ '\n<think></think>' }}
78
+ {%- endif -%}
79
+ {%- if content.strip() -%}
80
+ {{ '\n' + content.strip() }}
81
+ {%- endif -%}
82
+ {% if m.tool_calls %}
83
+ {% for tc in m.tool_calls %}
84
+ {%- if tc.function %}
85
+ {%- set tc = tc.function %}
86
+ {%- endif %}
87
+ {{ '\n<tool_call>' + tc.name }}
88
+ {% set _args = tc.arguments %}
89
+ {% for k, v in _args.items() %}
90
+ <arg_key>{{ k }}</arg_key>
91
+ <arg_value>{{ v | tojson(ensure_ascii=False) if v is not string else v }}</arg_value>
92
+ {% endfor %}
93
+ </tool_call>{% endfor %}
94
+ {% endif %}
95
+ {%- elif m.role == 'tool' -%}
96
+ {%- if m.content is string -%}
97
+ {%- if loop.first or (messages[loop.index0 - 1].role != "tool") %}
98
+ {{- '<|observation|>' }}
99
+ {%- endif %}
100
+ {{- '\n<tool_response>\n' }}
101
+ {{- m.content }}
102
+ {{- '\n</tool_response>' }}
103
+ {%- else -%}
104
+ <|observation|>{% for tr in m.content %}
105
+
106
+ <tool_response>
107
+ {{ tr.output if tr.output is defined else tr }}
108
+ </tool_response>{% endfor -%}
109
+ {% endif -%}
110
+ {%- elif m.role == 'system' -%}
111
+ <|system|>
112
+ {{ visible_text(m.content) }}
113
+ {%- endif -%}
114
+ {%- endfor -%}
115
+ {%- if add_generation_prompt -%}
116
+ <|assistant|>
117
+ {{'<think></think>\n' if (enable_thinking is defined and not enable_thinking) else ''}}
118
+ {%- endif -%}
config.json ADDED
@@ -0,0 +1,239 @@
1
+ {
2
+ "architectures": [
3
+ "Glm4vMoeForConditionalGeneration"
4
+ ],
5
+ "image_end_token_id": 151340,
6
+ "image_start_token_id": 151339,
7
+ "image_token_id": 151363,
8
+ "model_type": "glm4v_moe",
9
+ "quantization_config": {
10
+ "config_groups": {
11
+ "group_0": {
12
+ "input_activations": null,
13
+ "output_activations": null,
14
+ "targets": [
15
+ "Linear"
16
+ ],
17
+ "weights": {
18
+ "actorder": null,
19
+ "block_structure": null,
20
+ "dynamic": false,
21
+ "group_size": 32,
22
+ "num_bits": 8,
23
+ "observer": "minmax",
24
+ "observer_kwargs": {},
25
+ "strategy": "group",
26
+ "symmetric": true,
27
+ "type": "int"
28
+ }
29
+ }
30
+ },
31
+ "format": "pack-quantized",
32
+ "global_compression_ratio": null,
33
+ "ignore": [
34
+ "model.visual.blocks.0.attn.qkv",
35
+ "model.visual.blocks.0.attn.proj",
36
+ "model.visual.blocks.0.mlp.gate_proj",
37
+ "model.visual.blocks.0.mlp.up_proj",
38
+ "model.visual.blocks.0.mlp.down_proj",
39
+ "model.visual.blocks.1.attn.qkv",
40
+ "model.visual.blocks.1.attn.proj",
41
+ "model.visual.blocks.1.mlp.gate_proj",
42
+ "model.visual.blocks.1.mlp.up_proj",
43
+ "model.visual.blocks.1.mlp.down_proj",
44
+ "model.visual.blocks.2.attn.qkv",
45
+ "model.visual.blocks.2.attn.proj",
46
+ "model.visual.blocks.2.mlp.gate_proj",
47
+ "model.visual.blocks.2.mlp.up_proj",
48
+ "model.visual.blocks.2.mlp.down_proj",
49
+ "model.visual.blocks.3.attn.qkv",
50
+ "model.visual.blocks.3.attn.proj",
51
+ "model.visual.blocks.3.mlp.gate_proj",
52
+ "model.visual.blocks.3.mlp.up_proj",
53
+ "model.visual.blocks.3.mlp.down_proj",
54
+ "model.visual.blocks.4.attn.qkv",
55
+ "model.visual.blocks.4.attn.proj",
56
+ "model.visual.blocks.4.mlp.gate_proj",
57
+ "model.visual.blocks.4.mlp.up_proj",
58
+ "model.visual.blocks.4.mlp.down_proj",
59
+ "model.visual.blocks.5.attn.qkv",
60
+ "model.visual.blocks.5.attn.proj",
61
+ "model.visual.blocks.5.mlp.gate_proj",
62
+ "model.visual.blocks.5.mlp.up_proj",
63
+ "model.visual.blocks.5.mlp.down_proj",
64
+ "model.visual.blocks.6.attn.qkv",
65
+ "model.visual.blocks.6.attn.proj",
66
+ "model.visual.blocks.6.mlp.gate_proj",
67
+ "model.visual.blocks.6.mlp.up_proj",
68
+ "model.visual.blocks.6.mlp.down_proj",
69
+ "model.visual.blocks.7.attn.qkv",
70
+ "model.visual.blocks.7.attn.proj",
71
+ "model.visual.blocks.7.mlp.gate_proj",
72
+ "model.visual.blocks.7.mlp.up_proj",
73
+ "model.visual.blocks.7.mlp.down_proj",
74
+ "model.visual.blocks.8.attn.qkv",
75
+ "model.visual.blocks.8.attn.proj",
76
+ "model.visual.blocks.8.mlp.gate_proj",
77
+ "model.visual.blocks.8.mlp.up_proj",
78
+ "model.visual.blocks.8.mlp.down_proj",
79
+ "model.visual.blocks.9.attn.qkv",
80
+ "model.visual.blocks.9.attn.proj",
81
+ "model.visual.blocks.9.mlp.gate_proj",
82
+ "model.visual.blocks.9.mlp.up_proj",
83
+ "model.visual.blocks.9.mlp.down_proj",
84
+ "model.visual.blocks.10.attn.qkv",
85
+ "model.visual.blocks.10.attn.proj",
86
+ "model.visual.blocks.10.mlp.gate_proj",
87
+ "model.visual.blocks.10.mlp.up_proj",
88
+ "model.visual.blocks.10.mlp.down_proj",
89
+ "model.visual.blocks.11.attn.qkv",
90
+ "model.visual.blocks.11.attn.proj",
91
+ "model.visual.blocks.11.mlp.gate_proj",
92
+ "model.visual.blocks.11.mlp.up_proj",
93
+ "model.visual.blocks.11.mlp.down_proj",
94
+ "model.visual.blocks.12.attn.qkv",
95
+ "model.visual.blocks.12.attn.proj",
96
+ "model.visual.blocks.12.mlp.gate_proj",
97
+ "model.visual.blocks.12.mlp.up_proj",
98
+ "model.visual.blocks.12.mlp.down_proj",
99
+ "model.visual.blocks.13.attn.qkv",
100
+ "model.visual.blocks.13.attn.proj",
101
+ "model.visual.blocks.13.mlp.gate_proj",
102
+ "model.visual.blocks.13.mlp.up_proj",
103
+ "model.visual.blocks.13.mlp.down_proj",
104
+ "model.visual.blocks.14.attn.qkv",
105
+ "model.visual.blocks.14.attn.proj",
106
+ "model.visual.blocks.14.mlp.gate_proj",
107
+ "model.visual.blocks.14.mlp.up_proj",
108
+ "model.visual.blocks.14.mlp.down_proj",
109
+ "model.visual.blocks.15.attn.qkv",
110
+ "model.visual.blocks.15.attn.proj",
111
+ "model.visual.blocks.15.mlp.gate_proj",
112
+ "model.visual.blocks.15.mlp.up_proj",
113
+ "model.visual.blocks.15.mlp.down_proj",
114
+ "model.visual.blocks.16.attn.qkv",
115
+ "model.visual.blocks.16.attn.proj",
116
+ "model.visual.blocks.16.mlp.gate_proj",
117
+ "model.visual.blocks.16.mlp.up_proj",
118
+ "model.visual.blocks.16.mlp.down_proj",
119
+ "model.visual.blocks.17.attn.qkv",
120
+ "model.visual.blocks.17.attn.proj",
121
+ "model.visual.blocks.17.mlp.gate_proj",
122
+ "model.visual.blocks.17.mlp.up_proj",
123
+ "model.visual.blocks.17.mlp.down_proj",
124
+ "model.visual.blocks.18.attn.qkv",
125
+ "model.visual.blocks.18.attn.proj",
126
+ "model.visual.blocks.18.mlp.gate_proj",
127
+ "model.visual.blocks.18.mlp.up_proj",
128
+ "model.visual.blocks.18.mlp.down_proj",
129
+ "model.visual.blocks.19.attn.qkv",
130
+ "model.visual.blocks.19.attn.proj",
131
+ "model.visual.blocks.19.mlp.gate_proj",
132
+ "model.visual.blocks.19.mlp.up_proj",
133
+ "model.visual.blocks.19.mlp.down_proj",
134
+ "model.visual.blocks.20.attn.qkv",
135
+ "model.visual.blocks.20.attn.proj",
136
+ "model.visual.blocks.20.mlp.gate_proj",
137
+ "model.visual.blocks.20.mlp.up_proj",
138
+ "model.visual.blocks.20.mlp.down_proj",
139
+ "model.visual.blocks.21.attn.qkv",
140
+ "model.visual.blocks.21.attn.proj",
141
+ "model.visual.blocks.21.mlp.gate_proj",
142
+ "model.visual.blocks.21.mlp.up_proj",
143
+ "model.visual.blocks.21.mlp.down_proj",
144
+ "model.visual.blocks.22.attn.qkv",
145
+ "model.visual.blocks.22.attn.proj",
146
+ "model.visual.blocks.22.mlp.gate_proj",
147
+ "model.visual.blocks.22.mlp.up_proj",
148
+ "model.visual.blocks.22.mlp.down_proj",
149
+ "model.visual.blocks.23.attn.qkv",
150
+ "model.visual.blocks.23.attn.proj",
151
+ "model.visual.blocks.23.mlp.gate_proj",
152
+ "model.visual.blocks.23.mlp.up_proj",
153
+ "model.visual.blocks.23.mlp.down_proj",
154
+ "model.visual.merger.proj",
155
+ "model.visual.merger.gate_proj",
156
+ "model.visual.merger.up_proj",
157
+ "model.visual.merger.down_proj",
158
+ "lm_head"
159
+ ],
160
+ "kv_cache_scheme": null,
161
+ "quant_method": "compressed-tensors",
162
+ "quantization_status": "compressed",
163
+ "sparsity_config": {},
164
+ "transform_config": {},
165
+ "version": "0.10.3.dev33+g33c52de"
166
+ },
167
+ "text_config": {
168
+ "attention_bias": true,
169
+ "attention_dropout": 0.0,
170
+ "eos_token_id": [
171
+ 151329,
172
+ 151336,
173
+ 151338
174
+ ],
175
+ "first_k_dense_replace": 1,
176
+ "head_dim": 128,
177
+ "hidden_act": "silu",
178
+ "hidden_size": 4096,
179
+ "image_end_token_id": 151340,
180
+ "image_start_token_id": 151339,
181
+ "image_token_id": 151363,
182
+ "initializer_range": 0.02,
183
+ "intermediate_size": 10944,
184
+ "max_position_embeddings": 65536,
185
+ "model_type": "Glm4vMoe_text",
186
+ "moe_intermediate_size": 1408,
187
+ "n_group": 1,
188
+ "n_routed_experts": 128,
189
+ "n_shared_experts": 1,
190
+ "norm_topk_prob": true,
191
+ "num_attention_heads": 96,
192
+ "num_experts_per_tok": 8,
193
+ "num_hidden_layers": 46,
194
+ "num_key_value_heads": 8,
195
+ "pad_token_id": 151329,
196
+ "partial_rotary_factor": 0.5,
197
+ "rms_norm_eps": 1e-05,
198
+ "rope_scaling": {
199
+ "mrope_section": [
200
+ 8,
201
+ 12,
202
+ 12
203
+ ],
204
+ "rope_type": "default"
205
+ },
206
+ "rope_theta": 10000.0,
207
+ "routed_scaling_factor": 1.0,
208
+ "topk_group": 1,
209
+ "torch_dtype": "bfloat16",
210
+ "use_cache": true,
211
+ "use_qk_norm": false,
212
+ "vocab_size": 151552
213
+ },
214
+ "tie_word_embeddings": false,
215
+ "torch_dtype": "bfloat16",
216
+ "transformers_version": "4.56.0.dev0",
217
+ "video_end_token_id": 151342,
218
+ "video_start_token_id": 151341,
219
+ "video_token_id": 151364,
220
+ "vision_config": {
221
+ "attention_bias": false,
222
+ "attention_dropout": 0.0,
223
+ "depth": 24,
224
+ "hidden_act": "silu",
225
+ "hidden_size": 1536,
226
+ "image_size": 336,
227
+ "in_channels": 3,
228
+ "initializer_range": 0.02,
229
+ "intermediate_size": 10944,
230
+ "model_type": "glm4v_moe",
231
+ "num_heads": 12,
232
+ "out_hidden_size": 4096,
233
+ "patch_size": 14,
234
+ "rms_norm_eps": 1e-05,
235
+ "spatial_merge_size": 2,
236
+ "temporal_patch_size": 2,
237
+ "torch_dtype": "bfloat16"
238
+ }
239
+ }
generation_config.json ADDED
@@ -0,0 +1,13 @@
{
  "_from_model_config": true,
  "do_sample": true,
  "eos_token_id": [
    151329,
    151336,
    151338
  ],
  "pad_token_id": 151329,
  "top_k": 1,
  "top_p": 0.0001,
  "transformers_version": "4.56.0.dev0"
}
model-00001-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4af634a24a7bbb8ffe3801229bd173e60672c90c0d1cbc07af8fe064391d4ab9
3
+ size 5000065472
model-00002-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:071c6c7db5d3289f43209d0243417da94aa1e5e13854cca3e7ab27240f6b77b5
3
+ size 4995780816
model-00003-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7be376e95eb6f8e0d92f8c86dd291ab5b7e21a0c88f9c5e7c8ec5d4339a1f5d4
3
+ size 4995420080
model-00004-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1079fd5326b0e92f826548d8e06ae0414853453d9cf4014946ac2b23f8965725
3
+ size 4995420080
model-00005-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d63f8eb02e26034491471c12227709d35c710c1468e154fe0eb321bbe9823367
3
+ size 4995420080
model-00006-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d49d5e08e475bcce61afba51f2d18fe64166ad331fc601623d840301a0bd5b64
3
+ size 4995422136
model-00007-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d9c4cbc830cddb63aa5e0429a65c593e714b1bac790220e272b69fc450994563
3
+ size 4995422448
model-00008-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6fcb44b866c59787c334bcf482a1c2f236152342ada87744c279d2104b23582f
3
+ size 4995422448
model-00009-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d1c890a0dfd0a0ef4d5c35a6129ccf8e79c0e98560b2b6a490721e4529627faf
3
+ size 4995422448
model-00010-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:77c6c9461c3405c201711ff5602d231b2f53756cd5f1697daa29558eb8309453
3
+ size 4995422448
model-00011-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:edfd6545515f250d6b8a0b06047de1292995675af7a9fc5f814db692440a4ba6
3
+ size 4995422456
model-00012-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:abf07d08a6ed048fdf86bd79637043c8c2de23af8d37fbccbb3d99f067270084
3
+ size 4995422456
model-00013-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:17ca90b56d3949df38ec2d74e98cba64d6bec7f292ce77c364d0f409f4d30f3d
3
+ size 4995422456
model-00014-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:fca185b318df2e4da2180e3efd73c84fa0f4a3460b2cab84e11642f58c149a24
3
+ size 4995422456
model-00015-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:85dbaaa40968888f850de9aaa7c4123ee73d99a46be215ef28c06c475ca6eb82
3
+ size 4995422456
model-00016-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4e20b4c80643409bf21ea0f7580b390ede72ce2a3fb3458b16fef721d3999fb4
3
+ size 4995422456
model-00017-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:144f54a85cfb25bfdf0ed4ff2ad708070549ad256775a7903e05ba20754df59b
3
+ size 4995422456
model-00018-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ca74bff729d0643b00b1fa32b314cbeb2890d85a35bbc9c8dea70a2d77913ffe
3
+ size 4995422456
model-00019-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1dd4f9f951bcfe19a74b1af84e6ecd54690b74d0562f3ebe591fe40ae9d913e9
3
+ size 4995422456
model-00020-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e07a807efc7f8713a15c7809c49d7c305a554a804aa6f76ff25806caa3216514
3
+ size 4995422456
model-00021-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4e8e58e0ca8713cd22ea0326e46ece1497cf5d896b40c5196a0d58c3b047f163
3
+ size 4995422456
model-00022-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9a95bef18e2ee6135b72b55203cdb53148474f420e221963c87221056d247101
3
+ size 4995422456
model-00023-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3118f4ae1482eccf9c6494391810be382b97f6374a36afb3a1c7b3b65df75bce
3
+ size 4995422456
model-00024-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:dce5357d7a88fc54d12c0f6dbdcff4b85f4d8725a399387b965761e2ad23f52c
3
+ size 1542862552
model.safetensors.index.json ADDED
The diff for this file is too large to render. See raw diff
 
preprocessor_config.json ADDED
@@ -0,0 +1,27 @@
1
+ {
2
+ "do_convert_rgb": true,
3
+ "do_normalize": true,
4
+ "do_rescale": true,
5
+ "do_resize": true,
6
+ "image_mean": [
7
+ 0.48145466,
8
+ 0.4578275,
9
+ 0.40821073
10
+ ],
11
+ "image_processor_type": "Glm4vImageProcessor",
12
+ "image_std": [
13
+ 0.26862954,
14
+ 0.26130258,
15
+ 0.27577711
16
+ ],
17
+ "merge_size": 2,
18
+ "patch_size": 14,
19
+ "processor_class": "Glm4vProcessor",
20
+ "resample": 3,
21
+ "rescale_factor": 0.00392156862745098,
22
+ "size": {
23
+ "longest_edge": 9633792,
24
+ "shortest_edge": 12544
25
+ },
26
+ "temporal_patch_size": 2
27
+ }
recipe.yaml ADDED
@@ -0,0 +1,31 @@
default_stage:
  default_modifiers:
    AWQModifier:
      config_groups:
        group_0:
          targets: [Linear]
          weights:
            num_bits: 8
            type: int
            symmetric: true
            group_size: 32
            strategy: group
            block_structure: null
            dynamic: false
            actorder: null
            observer: minmax
            observer_kwargs: {}
          input_activations: null
          output_activations: null
      targets: [Linear]
      ignore: [lm_head, 're:.*visual.*']
      mappings:
      - smooth_layer: re:.*input_layernorm$
        balance_layers: ['re:.*q_proj$', 're:.*k_proj$', 're:.*v_proj$']
      - smooth_layer: re:.*v_proj$
        balance_layers: ['re:.*o_proj$']
      - smooth_layer: re:.*post_attention_layernorm$
        balance_layers: ['re:.*gate_proj$', 're:.*up_proj$']
      - smooth_layer: re:.*up_proj$
        balance_layers: ['re:.*down_proj$']
      duo_scaling: true
special_tokens_map.json ADDED
@@ -0,0 +1,40 @@
1
+ {
2
+ "additional_special_tokens": [
3
+ "<|endoftext|>",
4
+ "[MASK]",
5
+ "[gMASK]",
6
+ "[sMASK]",
7
+ "<sop>",
8
+ "<eop>",
9
+ "<|system|>",
10
+ "<|user|>",
11
+ "<|assistant|>",
12
+ "<|observation|>",
13
+ "<|begin_of_image|>",
14
+ "<|end_of_image|>",
15
+ "<|begin_of_video|>",
16
+ "<|end_of_video|>",
17
+ "<|begin_of_audio|>",
18
+ "<|end_of_audio|>",
19
+ "<|begin_of_transcription|>",
20
+ "<|end_of_transcription|>",
21
+ "<|code_prefix|>",
22
+ "<|code_middle|>",
23
+ "<|code_suffix|>",
24
+ "/nothink"
25
+ ],
26
+ "eos_token": {
27
+ "content": "<|endoftext|>",
28
+ "lstrip": false,
29
+ "normalized": false,
30
+ "rstrip": false,
31
+ "single_word": false
32
+ },
33
+ "pad_token": {
34
+ "content": "<|endoftext|>",
35
+ "lstrip": false,
36
+ "normalized": false,
37
+ "rstrip": false,
38
+ "single_word": false
39
+ }
40
+ }
tokenizer.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:bda8e2146c3bb7b7e0fc96dcc4f0aeff041c6c27952e3ace0665663ebff346ba
3
+ size 19970700
tokenizer_config.json ADDED
@@ -0,0 +1,326 @@
1
+ {
2
+ "added_tokens_decoder": {
3
+ "151329": {
4
+ "content": "<|endoftext|>",
5
+ "lstrip": false,
6
+ "normalized": false,
7
+ "rstrip": false,
8
+ "single_word": false,
9
+ "special": true
10
+ },
11
+ "151330": {
12
+ "content": "[MASK]",
13
+ "lstrip": false,
14
+ "normalized": false,
15
+ "rstrip": false,
16
+ "single_word": false,
17
+ "special": true
18
+ },
19
+ "151331": {
20
+ "content": "[gMASK]",
21
+ "lstrip": false,
22
+ "normalized": false,
23
+ "rstrip": false,
24
+ "single_word": false,
25
+ "special": true
26
+ },
27
+ "151332": {
28
+ "content": "[sMASK]",
29
+ "lstrip": false,
30
+ "normalized": false,
31
+ "rstrip": false,
32
+ "single_word": false,
33
+ "special": true
34
+ },
35
+ "151333": {
36
+ "content": "<sop>",
37
+ "lstrip": false,
38
+ "normalized": false,
39
+ "rstrip": false,
40
+ "single_word": false,
41
+ "special": true
42
+ },
43
+ "151334": {
44
+ "content": "<eop>",
45
+ "lstrip": false,
46
+ "normalized": false,
47
+ "rstrip": false,
48
+ "single_word": false,
49
+ "special": true
50
+ },
51
+ "151335": {
52
+ "content": "<|system|>",
53
+ "lstrip": false,
54
+ "normalized": false,
55
+ "rstrip": false,
56
+ "single_word": false,
57
+ "special": true
58
+ },
59
+ "151336": {
60
+ "content": "<|user|>",
61
+ "lstrip": false,
62
+ "normalized": false,
63
+ "rstrip": false,
64
+ "single_word": false,
65
+ "special": true
66
+ },
67
+ "151337": {
68
+ "content": "<|assistant|>",
69
+ "lstrip": false,
70
+ "normalized": false,
71
+ "rstrip": false,
72
+ "single_word": false,
73
+ "special": true
74
+ },
75
+ "151338": {
76
+ "content": "<|observation|>",
77
+ "lstrip": false,
78
+ "normalized": false,
79
+ "rstrip": false,
80
+ "single_word": false,
81
+ "special": true
82
+ },
83
+ "151339": {
84
+ "content": "<|begin_of_image|>",
85
+ "lstrip": false,
86
+ "normalized": false,
87
+ "rstrip": false,
88
+ "single_word": false,
89
+ "special": true
90
+ },
91
+ "151340": {
92
+ "content": "<|end_of_image|>",
93
+ "lstrip": false,
94
+ "normalized": false,
95
+ "rstrip": false,
96
+ "single_word": false,
97
+ "special": true
98
+ },
99
+ "151341": {
100
+ "content": "<|begin_of_video|>",
101
+ "lstrip": false,
102
+ "normalized": false,
103
+ "rstrip": false,
104
+ "single_word": false,
105
+ "special": true
106
+ },
107
+ "151342": {
108
+ "content": "<|end_of_video|>",
109
+ "lstrip": false,
110
+ "normalized": false,
111
+ "rstrip": false,
112
+ "single_word": false,
113
+ "special": true
114
+ },
115
+ "151343": {
116
+ "content": "<|begin_of_audio|>",
117
+ "lstrip": false,
118
+ "normalized": false,
119
+ "rstrip": false,
120
+ "single_word": false,
121
+ "special": true
122
+ },
123
+ "151344": {
124
+ "content": "<|end_of_audio|>",
125
+ "lstrip": false,
126
+ "normalized": false,
127
+ "rstrip": false,
128
+ "single_word": false,
129
+ "special": true
130
+ },
131
+ "151345": {
132
+ "content": "<|begin_of_transcription|>",
133
+ "lstrip": false,
134
+ "normalized": false,
135
+ "rstrip": false,
136
+ "single_word": false,
137
+ "special": true
138
+ },
139
+ "151346": {
140
+ "content": "<|end_of_transcription|>",
141
+ "lstrip": false,
142
+ "normalized": false,
143
+ "rstrip": false,
144
+ "single_word": false,
145
+ "special": true
146
+ },
147
+ "151347": {
148
+ "content": "<|code_prefix|>",
149
+ "lstrip": false,
150
+ "normalized": false,
151
+ "rstrip": false,
152
+ "single_word": false,
153
+ "special": true
154
+ },
155
+ "151348": {
156
+ "content": "<|code_middle|>",
157
+ "lstrip": false,
158
+ "normalized": false,
159
+ "rstrip": false,
160
+ "single_word": false,
161
+ "special": true
162
+ },
163
+ "151349": {
164
+ "content": "<|code_suffix|>",
165
+ "lstrip": false,
166
+ "normalized": false,
167
+ "rstrip": false,
168
+ "single_word": false,
169
+ "special": true
170
+ },
171
+ "151350": {
172
+ "content": "<think>",
173
+ "lstrip": false,
174
+ "normalized": false,
175
+ "rstrip": false,
176
+ "single_word": false,
177
+ "special": false
178
+ },
179
+ "151351": {
180
+ "content": "</think>",
181
+ "lstrip": false,
182
+ "normalized": false,
183
+ "rstrip": false,
184
+ "single_word": false,
185
+ "special": false
186
+ },
187
+ "151352": {
188
+ "content": "<tool_call>",
189
+ "lstrip": false,
190
+ "normalized": false,
191
+ "rstrip": false,
192
+ "single_word": false,
193
+ "special": false
194
+ },
195
+ "151353": {
196
+ "content": "</tool_call>",
197
+ "lstrip": false,
198
+ "normalized": false,
199
+ "rstrip": false,
200
+ "single_word": false,
201
+ "special": false
202
+ },
203
+ "151354": {
204
+ "content": "<tool_response>",
205
+ "lstrip": false,
206
+ "normalized": false,
207
+ "rstrip": false,
208
+ "single_word": false,
209
+ "special": false
210
+ },
211
+ "151355": {
212
+ "content": "</tool_response>",
213
+ "lstrip": false,
214
+ "normalized": false,
215
+ "rstrip": false,
216
+ "single_word": false,
217
+ "special": false
218
+ },
219
+ "151356": {
220
+ "content": "<arg_key>",
221
+ "lstrip": false,
222
+ "normalized": false,
223
+ "rstrip": false,
224
+ "single_word": false,
225
+ "special": false
226
+ },
227
+ "151357": {
228
+ "content": "</arg_key>",
229
+ "lstrip": false,
230
+ "normalized": false,
231
+ "rstrip": false,
232
+ "single_word": false,
233
+ "special": false
234
+ },
235
+ "151358": {
236
+ "content": "<arg_value>",
237
+ "lstrip": false,
238
+ "normalized": false,
239
+ "rstrip": false,
240
+ "single_word": false,
241
+ "special": false
242
+ },
243
+ "151359": {
244
+ "content": "</arg_value>",
245
+ "lstrip": false,
246
+ "normalized": false,
247
+ "rstrip": false,
248
+ "single_word": false,
249
+ "special": false
250
+ },
251
+ "151360": {
252
+ "content": "/nothink",
253
+ "lstrip": false,
254
+ "normalized": false,
255
+ "rstrip": false,
256
+ "single_word": false,
257
+ "special": true
258
+ },
259
+ "151361": {
260
+ "content": "<|begin_of_box|>",
261
+ "lstrip": false,
262
+ "normalized": false,
263
+ "rstrip": false,
264
+ "single_word": false,
265
+ "special": false
266
+ },
267
+ "151362": {
268
+ "content": "<|end_of_box|>",
269
+ "lstrip": false,
270
+ "normalized": false,
271
+ "rstrip": false,
272
+ "single_word": false,
273
+ "special": false
274
+ },
275
+ "151363": {
276
+ "content": "<|image|>",
277
+ "lstrip": false,
278
+ "normalized": false,
279
+ "rstrip": false,
280
+ "single_word": false,
281
+ "special": false
282
+ },
283
+ "151364": {
284
+ "content": "<|video|>",
285
+ "lstrip": false,
286
+ "normalized": false,
287
+ "rstrip": false,
288
+ "single_word": false,
289
+ "special": false
290
+ }
291
+ },
292
+ "additional_special_tokens": [
293
+ "<|endoftext|>",
294
+ "[MASK]",
295
+ "[gMASK]",
296
+ "[sMASK]",
297
+ "<sop>",
298
+ "<eop>",
299
+ "<|system|>",
300
+ "<|user|>",
301
+ "<|assistant|>",
302
+ "<|observation|>",
303
+ "<|begin_of_image|>",
304
+ "<|end_of_image|>",
305
+ "<|begin_of_video|>",
306
+ "<|end_of_video|>",
307
+ "<|begin_of_audio|>",
308
+ "<|end_of_audio|>",
309
+ "<|begin_of_transcription|>",
310
+ "<|end_of_transcription|>",
311
+ "<|code_prefix|>",
312
+ "<|code_middle|>",
313
+ "<|code_suffix|>",
314
+ "/nothink"
315
+ ],
316
+ "clean_up_tokenization_spaces": false,
317
+ "do_lower_case": false,
318
+ "eos_token": "<|endoftext|>",
319
+ "extra_special_tokens": {},
320
+ "model_max_length": 128000,
321
+ "pad_token": "<|endoftext|>",
322
+ "padding_side": "left",
323
+ "processor_class": "Glm4vProcessor",
324
+ "remove_space": false,
325
+ "tokenizer_class": "PreTrainedTokenizerFast"
326
+ }
video_preprocessor_config.json ADDED
@@ -0,0 +1,42 @@
1
+ {
2
+ "crop_size": null,
3
+ "data_format": "channels_first",
4
+ "default_to_square": true,
5
+ "device": null,
6
+ "do_center_crop": null,
7
+ "do_convert_rgb": true,
8
+ "do_normalize": true,
9
+ "do_pad": null,
10
+ "do_rescale": true,
11
+ "do_resize": true,
12
+ "do_sample_frames": true,
13
+ "fps": 2,
14
+ "image_mean": [
15
+ 0.48145466,
16
+ 0.4578275,
17
+ 0.40821073
18
+ ],
19
+ "image_std": [
20
+ 0.26862954,
21
+ 0.26130258,
22
+ 0.27577711
23
+ ],
24
+ "input_data_format": null,
25
+ "max_image_size": {
26
+ "longest_edge": 47040000
27
+ },
28
+ "merge_size": 2,
29
+ "num_frames": 16,
30
+ "patch_size": 14,
31
+ "processor_class": "Glm4vProcessor",
32
+ "resample": 3,
33
+ "rescale_factor": 0.00392156862745098,
34
+ "size": {
35
+ "longest_edge": 47040000,
36
+ "shortest_edge": 12544
37
+ },
38
+ "size_divisor": null,
39
+ "temporal_patch_size": 2,
40
+ "video_metadata": null,
41
+ "video_processor_type": "Glm4vVideoProcessor"
42
+ }