izhx committed · Commit 8d3029f · verified · 1 Parent(s): 73bda90
.gitattributes CHANGED
@@ -33,3 +33,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ framework-crop.png filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,3 +1,236 @@
- ---
- license: apache-2.0
- ---
+ ---
+ license: apache-2.0
+ base_model:
+ - Qwen/Qwen2.5-1.5B-Instruct
+ pipeline_tag: text-ranking
+ ---
+ 
+ <a href="https://github.com/vec-ai/lychee-embed">
+ <img src="https://img.shields.io/badge/GitHub-%23121011.svg?logo=github&logoColor=white">
+ </a>
+ <a href="https://openreview.net/pdf?id=NC6G1KCxlt">
+ <img src="https://img.shields.io/badge/Paper-Openreview-red">
+ </a>
+ 
+ 
+ # Lychee Rerank
+ 
+ `Lychee-rerank` is the latest generalist text reranking model built on `Qwen2.5`. It is suitable for reranking candidates in a wide range of text retrieval tasks and supports the multilingual coverage of `Qwen2.5`.
+ `Lychee-rerank` is jointly developed by the NLP Team of Harbin Institute of Technology, Shenzhen, and is built on an innovative multi-stage training framework (warm-up, task-learning, model merging, annealing).
+ The first open-source release is the 1.5B-parameter version.
+ 
+ ![The multi-stage training framework](framework-crop.png)
+ 
+ 
+ **Lychee-rerank**:
+ 
+ - Model Type: Text Reranking
+ - Language Support: 100+ Languages
+ - Param Size: 1.5B
+ - Context Length: 32k
+ - Model Precision: BF16
+ 
+ For more details, please refer to our [paper](https://openreview.net/pdf?id=NC6G1KCxlt).
+ 
+ 
+ ### Model List
+ 
+ | Model Type | Models | Size | Layers | Sequence Length | Embedding Dimension | MRL Support | Instruction Aware |
+ |------------------|----------------------|------|--------|-----------------|---------------------|-------------|----------------|
+ | Text Embedding | [lychee-embed](https://huggingface.co/vec-ai/lychee-embed) | 1.5B | 28 | 8K | 1536 | Yes | Yes |
+ | Text Reranking | [lychee-rerank](https://huggingface.co/vec-ai/lychee-rerank) | 1.5B | 28 | 8K | - | - | Yes |
+ 
+ 
+ > **Note**:
+ > - `MRL Support` indicates whether the embedding model supports custom dimensions for the final embedding.
+ > - `Instruction Aware` notes whether the embedding or reranking model supports customizing the input instruction according to different tasks.
+ > - Like most models, for most downstream tasks, using instructions (instruct) typically yields an improvement of 1% to 5% compared to not using them. Therefore, we recommend that developers create tailored instructions specific to their tasks and scenarios. In multilingual contexts, we also advise users to write their instructions in English, as most instructions utilized during the model training process were originally written in English.
+ 
+ 
+ ## Model Usage
+ 
+ 📌 **Tips**: We recommend that developers customize the `instruct` according to their specific scenarios, tasks, and languages. Our tests have shown that in most retrieval scenarios, not using an `instruct` on the `query` side can lead to a drop in retrieval performance of approximately 1% to 5%.
+ 
+ 
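+ For example, the task strings below sketch what such scenario-specific, English instructions might look like; they are illustrative examples rather than instructions the model was trained on:
+ 
+ ```python
+ # Hypothetical, scenario-specific instructions written in English (as recommended above).
+ ecommerce_task = 'Given a user query on an e-commerce site, retrieve product descriptions that match the query'
+ docs_task = 'Given a technical question, retrieve documentation passages that answer the question'
+ 
+ # Each instruction is later combined with a (query, document) pair using the
+ # "<Instruct>: ...\n<Query>: ...\n<Document>: ..." format shown in the snippets below.
+ ```
+ 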
+ ### Transformers Usage
+ 
+ ```python
+ # Requires transformers>=4.51.0
+ import torch
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+ 
+ def format_instruction(instruction, query, doc):
+     if instruction is None:
+         instruction = 'Given a web search query, retrieve relevant passages that answer the query'
+     output = "<Instruct>: {instruction}\n<Query>: {query}\n<Document>: {doc}".format(instruction=instruction, query=query, doc=doc)
+     return output
+ 
+ def process_inputs(pairs):
+     inputs = tokenizer(
+         pairs, padding=False, truncation='longest_first',
+         return_attention_mask=False, max_length=max_length - len(prefix_tokens) - len(suffix_tokens)
+     )
+     for i, ele in enumerate(inputs['input_ids']):
+         inputs['input_ids'][i] = prefix_tokens + ele + suffix_tokens
+     inputs = tokenizer.pad(inputs, padding=True, return_tensors="pt", max_length=max_length)
+     for key in inputs:
+         inputs[key] = inputs[key].to(model.device)
+     return inputs
+ 
+ @torch.no_grad()
+ def compute_logits(inputs, **kwargs):
+     batch_scores = model(**inputs).logits[:, -1, :]
+     true_vector = batch_scores[:, token_true_id]
+     false_vector = batch_scores[:, token_false_id]
+     batch_scores = torch.stack([false_vector, true_vector], dim=1)
+     batch_scores = torch.nn.functional.log_softmax(batch_scores, dim=1)
+     scores = batch_scores[:, 1].exp().tolist()
+     return scores
+ 
+ tokenizer = AutoTokenizer.from_pretrained("vec-ai/lychee-rerank", padding_side='left')
+ model = AutoModelForCausalLM.from_pretrained("vec-ai/lychee-rerank").eval()
+ 
+ # We recommend enabling flash_attention_2 for better acceleration and memory saving.
+ # model = AutoModelForCausalLM.from_pretrained("vec-ai/lychee-rerank", torch_dtype=torch.float16, attn_implementation="flash_attention_2").cuda().eval()
+ 
+ token_false_id = tokenizer.convert_tokens_to_ids("no")
+ token_true_id = tokenizer.convert_tokens_to_ids("yes")
+ max_length = 8192
+ 
+ prefix = "<|im_start|>system\nJudge whether the Document meets the requirements based on the Query and the Instruct provided. Note that the answer can only be \"yes\" or \"no\".<|im_end|>\n<|im_start|>user\n"
+ suffix = "<|im_end|>\n<|im_start|>assistant\n"
+ prefix_tokens = tokenizer.encode(prefix, add_special_tokens=False)
+ suffix_tokens = tokenizer.encode(suffix, add_special_tokens=False)
+ 
+ task = 'Given a web search query, retrieve relevant passages that answer the query'
+ 
+ queries = [
+     "What is the capital of China?",
+     "Explain gravity",
+ ]
+ 
+ documents = [
+     "The capital of China is Beijing.",
+     "Gravity is a force that attracts two bodies towards each other. It gives weight to physical objects and is responsible for the movement of planets around the sun.",
+ ]
+ 
+ pairs = [format_instruction(task, query, doc) for query, doc in zip(queries, documents)]
+ 
+ # Tokenize the input texts
+ inputs = process_inputs(pairs)
+ scores = compute_logits(inputs)
+ 
+ print("scores: ", scores)
+ # [0.9398471117019653, 0.5553759336471558]
+ ```
+ 
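+ Building on the snippet above (it assumes `task` and the `format_instruction`, `process_inputs`, and `compute_logits` helpers are already in scope), the following sketch shows one way to use the scores to rerank a candidate list for a single query; the query and candidate texts are made up for illustration:
+ 
+ ```python
+ # Score every (query, candidate) pair and sort candidates by the "yes" probability.
+ query = "How do I reset my router to factory settings?"
+ candidates = [
+     "Press and hold the reset button on the back of the router for about 10 seconds.",
+     "Routers forward packets between computer networks.",
+     "To change the Wi-Fi password, open the admin page and go to the wireless settings.",
+ ]
+ 
+ pairs = [format_instruction(task, query, doc) for doc in candidates]
+ scores = compute_logits(process_inputs(pairs))
+ 
+ # Higher score = judged more relevant; print candidates from most to least relevant.
+ for doc, score in sorted(zip(candidates, scores), key=lambda x: x[1], reverse=True):
+     print(f"{score:.4f}  {doc}")
+ ```
+ 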
+ ### vLLM Usage
+ 
+ ```python
+ # Requires vllm>=0.8.5
+ import math
+ 
+ import torch
+ from transformers import AutoTokenizer
+ from vllm import LLM, SamplingParams
+ from vllm.inputs.data import TokensPrompt
+ 
+ 
+ def format_instruction(instruction, query, doc):
+     text = [
+         {"role": "system", "content": "Judge whether the Document meets the requirements based on the Query and the Instruct provided. Note that the answer can only be \"yes\" or \"no\"."},
+         {"role": "user", "content": f"<Instruct>: {instruction}\n\n<Query>: {query}\n\n<Document>: {doc}"}
+     ]
+     return text
+ 
+ def process_inputs(pairs, instruction, max_length, suffix_tokens):
+     messages = [format_instruction(instruction, query, doc) for query, doc in pairs]
+     messages = tokenizer.apply_chat_template(
+         messages, tokenize=True, add_generation_prompt=False, enable_thinking=False
+     )
+     messages = [ele[:max_length] + suffix_tokens for ele in messages]
+     messages = [TokensPrompt(prompt_token_ids=ele) for ele in messages]
+     return messages
+ 
+ def compute_logits(model, messages, sampling_params, true_token, false_token):
+     outputs = model.generate(messages, sampling_params, use_tqdm=False)
+     scores = []
+     for i in range(len(outputs)):
+         final_logits = outputs[i].outputs[0].logprobs[-1]
+         token_count = len(outputs[i].outputs[0].token_ids)
+         if true_token not in final_logits:
+             true_logit = -10
+         else:
+             true_logit = final_logits[true_token].logprob
+         if false_token not in final_logits:
+             false_logit = -10
+         else:
+             false_logit = final_logits[false_token].logprob
+         true_score = math.exp(true_logit)
+         false_score = math.exp(false_logit)
+         score = true_score / (true_score + false_score)
+         scores.append(score)
+     return scores
+ 
+ number_of_gpu = torch.cuda.device_count()
+ tokenizer = AutoTokenizer.from_pretrained('vec-ai/lychee-rerank')
+ model = LLM(model='vec-ai/lychee-rerank', max_model_len=10000, enable_prefix_caching=True)
+ tokenizer.padding_side = "left"
+ tokenizer.pad_token = tokenizer.eos_token
+ suffix = "<|im_end|>\n<|im_start|>assistant\n"
+ max_length = 8192
+ suffix_tokens = tokenizer.encode(suffix, add_special_tokens=False)
+ true_token = tokenizer("yes", add_special_tokens=False).input_ids[0]
+ false_token = tokenizer("no", add_special_tokens=False).input_ids[0]
+ sampling_params = SamplingParams(
+     temperature=0,
+     max_tokens=1,
+     logprobs=20,
+     allowed_token_ids=[true_token, false_token],
+ )
+ 
+ 
+ task = 'Given a web search query, retrieve relevant passages that answer the query'
+ queries = [
+     "What is the capital of China?",
+     "Explain gravity",
+ ]
+ documents = [
+     "The capital of China is Beijing.",
+     "Gravity is a force that attracts two bodies towards each other. It gives weight to physical objects and is responsible for the movement of planets around the sun.",
+ ]
+ 
+ pairs = list(zip(queries, documents))
+ inputs = process_inputs(pairs, task, max_length - len(suffix_tokens), suffix_tokens)
+ scores = compute_logits(model, inputs, sampling_params, true_token, false_token)
+ print('scores', scores)
+ # TODO
+ ```
+ 
+ 
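+ Both usage paths feed the reranker the same prompt format: a system turn asking the model to judge the document, a user turn carrying the `<Instruct>/<Query>/<Document>` fields, and an assistant turn constrained to answer "yes" or "no" (hence `max_tokens=1`, `logprobs`, and `allowed_token_ids` in the sampling parameters above). As an optional check, the small sketch below decodes the first prepared prompt back to text; it assumes the `tokenizer` and `inputs` objects from the vLLM snippet are still in scope:
+ 
+ ```python
+ # Decode the token ids of the first TokensPrompt to verify the chat structure
+ # and the trailing "<|im_start|>assistant" turn appended via `suffix_tokens`.
+ print(tokenizer.decode(inputs[0]["prompt_token_ids"]))
+ ```
+ 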
+ ## Evaluation
+ 
+ | Model | Param | MTEB-R | CMTEB-R | MMTEB-R | MLDR | MTEB-Code | ToolBench | FollowIR | BRIGHT |
+ |---|---|---|---|---|---|---|---|---|---|
+ | **Lychee-embed** | 1.54B | 68.39 | 69.77 | 58.43 | 53.85 | 72.54 | 86.35 | 5.74 | 19.47 |
+ ||
+ | Jina-multilingual-reranker-v2-base | 278M | 54.61 | 70.18 | 54.43 | 50.32 | 46.32 | 67.80 | -0.69 | 16.69 |
+ | mGTE-reranker | 304M | 55.71 | 72.01 | 56.61 | 61.40 | 45.92 | 67.58 | -1.14 | 10.76 |
+ | BGE-reranker-v2-m3 | 568M | 55.36 | 71.82 | 57.13 | 60.80 | 50.81 | 62.52 | -0.06 | 15.87 |
+ | BGE-reranker-v2-gemma | 9.24B | 60.81 | 71.74 | 69.80 | 49.10 | 68.63 | 68.14 | -2.13 | 17.68 |
+ | **Lychee-rerank** | 1.54B | 59.56 | 76.37 | 62.47 | 64.09 | 78.03 | 90.82 | 7.38 | 16.92 |
+ 
+ For more details, please refer to our [paper](https://openreview.net/pdf?id=NC6G1KCxlt).
+ 
223
+
224
+ ## Citation
225
+
226
+ If you find our work helpful, feel free to give us a cite.
227
+
228
+ ```
229
+ @inproceedings{zhang2025phased,
230
+ title={Phased Training for LLM-powered Text Retrieval Models Beyond Data Scaling},
231
+ author={Xin Zhang and Yanzhao Zhang and Wen Xie and Dingkun Long and Mingxin Li and Pengjun Xie and Meishan Zhang and Wenjie Li and Min Zhang},
232
+ booktitle={Second Conference on Language Modeling},
233
+ year={2025},
234
+ url={https://openreview.net/forum?id=NC6G1KCxlt}
235
+ }
236
+ ```
added_tokens.json ADDED
@@ -0,0 +1,24 @@
+ {
+   "</tool_call>": 151658,
+   "<tool_call>": 151657,
+   "<|box_end|>": 151649,
+   "<|box_start|>": 151648,
+   "<|endoftext|>": 151643,
+   "<|file_sep|>": 151664,
+   "<|fim_middle|>": 151660,
+   "<|fim_pad|>": 151662,
+   "<|fim_prefix|>": 151659,
+   "<|fim_suffix|>": 151661,
+   "<|im_end|>": 151645,
+   "<|im_start|>": 151644,
+   "<|image_pad|>": 151655,
+   "<|object_ref_end|>": 151647,
+   "<|object_ref_start|>": 151646,
+   "<|quad_end|>": 151651,
+   "<|quad_start|>": 151650,
+   "<|repo_name|>": 151663,
+   "<|video_pad|>": 151656,
+   "<|vision_end|>": 151653,
+   "<|vision_pad|>": 151654,
+   "<|vision_start|>": 151652
+ }
config.json ADDED
@@ -0,0 +1,27 @@
+ {
+   "architectures": [
+     "Qwen2ForCausalLM"
+   ],
+   "attention_dropout": 0.0,
+   "bos_token_id": 151643,
+   "eos_token_id": 151645,
+   "hidden_act": "silu",
+   "hidden_size": 1536,
+   "initializer_range": 0.02,
+   "intermediate_size": 8960,
+   "max_position_embeddings": 32768,
+   "max_window_layers": 21,
+   "model_type": "qwen2",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 28,
+   "num_key_value_heads": 2,
+   "rms_norm_eps": 1e-06,
+   "rope_theta": 1000000.0,
+   "sliding_window": null,
+   "tie_word_embeddings": true,
+   "torch_dtype": "bfloat16",
+   "transformers_version": "4.45.0.dev0",
+   "use_cache": true,
+   "use_sliding_window": false,
+   "vocab_size": 151665
+ }
framework-crop.png ADDED

Git LFS Details

  • SHA256: eade35b0f8eca610087da421fef85075547a7cd0e5636a4b364d61fa3e092341
  • Pointer size: 131 Bytes
  • Size of remote file: 225 kB
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ee16931b22b5181c52e3aa11a2f0e1383d44b9b7605d98122ecc9417e14fa902
+ size 3086634632
special_tokens_map.json ADDED
@@ -0,0 +1,31 @@
+ {
+   "additional_special_tokens": [
+     "<|im_start|>",
+     "<|im_end|>",
+     "<|object_ref_start|>",
+     "<|object_ref_end|>",
+     "<|box_start|>",
+     "<|box_end|>",
+     "<|quad_start|>",
+     "<|quad_end|>",
+     "<|vision_start|>",
+     "<|vision_end|>",
+     "<|vision_pad|>",
+     "<|image_pad|>",
+     "<|video_pad|>"
+   ],
+   "eos_token": {
+     "content": "<|im_end|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "<|endoftext|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a8506e7111b80c6d8635951a02eab0f4e1a8e4e5772da83846579e97b16f61bf
+ size 7031673
tokenizer_config.json ADDED
@@ -0,0 +1,207 @@
1
+ {
2
+ "add_bos_token": false,
3
+ "add_prefix_space": false,
4
+ "added_tokens_decoder": {
5
+ "151643": {
6
+ "content": "<|endoftext|>",
7
+ "lstrip": false,
8
+ "normalized": false,
9
+ "rstrip": false,
10
+ "single_word": false,
11
+ "special": true
12
+ },
13
+ "151644": {
14
+ "content": "<|im_start|>",
15
+ "lstrip": false,
16
+ "normalized": false,
17
+ "rstrip": false,
18
+ "single_word": false,
19
+ "special": true
20
+ },
21
+ "151645": {
22
+ "content": "<|im_end|>",
23
+ "lstrip": false,
24
+ "normalized": false,
25
+ "rstrip": false,
26
+ "single_word": false,
27
+ "special": true
28
+ },
29
+ "151646": {
30
+ "content": "<|object_ref_start|>",
31
+ "lstrip": false,
32
+ "normalized": false,
33
+ "rstrip": false,
34
+ "single_word": false,
35
+ "special": true
36
+ },
37
+ "151647": {
38
+ "content": "<|object_ref_end|>",
39
+ "lstrip": false,
40
+ "normalized": false,
41
+ "rstrip": false,
42
+ "single_word": false,
43
+ "special": true
44
+ },
45
+ "151648": {
46
+ "content": "<|box_start|>",
47
+ "lstrip": false,
48
+ "normalized": false,
49
+ "rstrip": false,
50
+ "single_word": false,
51
+ "special": true
52
+ },
53
+ "151649": {
54
+ "content": "<|box_end|>",
55
+ "lstrip": false,
56
+ "normalized": false,
57
+ "rstrip": false,
58
+ "single_word": false,
59
+ "special": true
60
+ },
61
+ "151650": {
62
+ "content": "<|quad_start|>",
63
+ "lstrip": false,
64
+ "normalized": false,
65
+ "rstrip": false,
66
+ "single_word": false,
67
+ "special": true
68
+ },
69
+ "151651": {
70
+ "content": "<|quad_end|>",
71
+ "lstrip": false,
72
+ "normalized": false,
73
+ "rstrip": false,
74
+ "single_word": false,
75
+ "special": true
76
+ },
77
+ "151652": {
78
+ "content": "<|vision_start|>",
79
+ "lstrip": false,
80
+ "normalized": false,
81
+ "rstrip": false,
82
+ "single_word": false,
83
+ "special": true
84
+ },
85
+ "151653": {
86
+ "content": "<|vision_end|>",
87
+ "lstrip": false,
88
+ "normalized": false,
89
+ "rstrip": false,
90
+ "single_word": false,
91
+ "special": true
92
+ },
93
+ "151654": {
94
+ "content": "<|vision_pad|>",
95
+ "lstrip": false,
96
+ "normalized": false,
97
+ "rstrip": false,
98
+ "single_word": false,
99
+ "special": true
100
+ },
101
+ "151655": {
102
+ "content": "<|image_pad|>",
103
+ "lstrip": false,
104
+ "normalized": false,
105
+ "rstrip": false,
106
+ "single_word": false,
107
+ "special": true
108
+ },
109
+ "151656": {
110
+ "content": "<|video_pad|>",
111
+ "lstrip": false,
112
+ "normalized": false,
113
+ "rstrip": false,
114
+ "single_word": false,
115
+ "special": true
116
+ },
117
+ "151657": {
118
+ "content": "<tool_call>",
119
+ "lstrip": false,
120
+ "normalized": false,
121
+ "rstrip": false,
122
+ "single_word": false,
123
+ "special": false
124
+ },
125
+ "151658": {
126
+ "content": "</tool_call>",
127
+ "lstrip": false,
128
+ "normalized": false,
129
+ "rstrip": false,
130
+ "single_word": false,
131
+ "special": false
132
+ },
133
+ "151659": {
134
+ "content": "<|fim_prefix|>",
135
+ "lstrip": false,
136
+ "normalized": false,
137
+ "rstrip": false,
138
+ "single_word": false,
139
+ "special": false
140
+ },
141
+ "151660": {
142
+ "content": "<|fim_middle|>",
143
+ "lstrip": false,
144
+ "normalized": false,
145
+ "rstrip": false,
146
+ "single_word": false,
147
+ "special": false
148
+ },
149
+ "151661": {
150
+ "content": "<|fim_suffix|>",
151
+ "lstrip": false,
152
+ "normalized": false,
153
+ "rstrip": false,
154
+ "single_word": false,
155
+ "special": false
156
+ },
157
+ "151662": {
158
+ "content": "<|fim_pad|>",
159
+ "lstrip": false,
160
+ "normalized": false,
161
+ "rstrip": false,
162
+ "single_word": false,
163
+ "special": false
164
+ },
165
+ "151663": {
166
+ "content": "<|repo_name|>",
167
+ "lstrip": false,
168
+ "normalized": false,
169
+ "rstrip": false,
170
+ "single_word": false,
171
+ "special": false
172
+ },
173
+ "151664": {
174
+ "content": "<|file_sep|>",
175
+ "lstrip": false,
176
+ "normalized": false,
177
+ "rstrip": false,
178
+ "single_word": false,
179
+ "special": false
180
+ }
181
+ },
182
+ "additional_special_tokens": [
183
+ "<|im_start|>",
184
+ "<|im_end|>",
185
+ "<|object_ref_start|>",
186
+ "<|object_ref_end|>",
187
+ "<|box_start|>",
188
+ "<|box_end|>",
189
+ "<|quad_start|>",
190
+ "<|quad_end|>",
191
+ "<|vision_start|>",
192
+ "<|vision_end|>",
193
+ "<|vision_pad|>",
194
+ "<|image_pad|>",
195
+ "<|video_pad|>"
196
+ ],
197
+ "bos_token": null,
198
+ "chat_template": "{%- if tools %}\n {{- '<|im_start|>system\\n' }}\n {%- if messages[0]['role'] == 'system' %}\n {{- messages[0]['content'] }}\n {%- else %}\n {{- 'You are Qwen, created by Alibaba Cloud. You are a helpful assistant.' }}\n {%- endif %}\n {{- \"\\n\\n# Tools\\n\\nYou may call one or more functions to assist with the user query.\\n\\nYou are provided with function signatures within <tools></tools> XML tags:\\n<tools>\" }}\n {%- for tool in tools %}\n {{- \"\\n\" }}\n {{- tool | tojson }}\n {%- endfor %}\n {{- \"\\n</tools>\\n\\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\\n<tool_call>\\n{\\\"name\\\": <function-name>, \\\"arguments\\\": <args-json-object>}\\n</tool_call><|im_end|>\\n\" }}\n{%- else %}\n {%- if messages[0]['role'] == 'system' %}\n {{- '<|im_start|>system\\n' + messages[0]['content'] + '<|im_end|>\\n' }}\n {%- else %}\n {{- '<|im_start|>system\\nYou are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>\\n' }}\n {%- endif %}\n{%- endif %}\n{%- for message in messages %}\n {%- if (message.role == \"user\") or (message.role == \"system\" and not loop.first) or (message.role == \"assistant\" and not message.tool_calls) %}\n {{- '<|im_start|>' + message.role + '\\n' + message.content + '<|im_end|>' + '\\n' }}\n {%- elif message.role == \"assistant\" %}\n {{- '<|im_start|>' + message.role }}\n {%- if message.content %}\n {{- '\\n' + message.content }}\n {%- endif %}\n {%- for tool_call in message.tool_calls %}\n {%- if tool_call.function is defined %}\n {%- set tool_call = tool_call.function %}\n {%- endif %}\n {{- '\\n<tool_call>\\n{\"name\": \"' }}\n {{- tool_call.name }}\n {{- '\", \"arguments\": ' }}\n {{- tool_call.arguments | tojson }}\n {{- '}\\n</tool_call>' }}\n {%- endfor %}\n {{- '<|im_end|>\\n' }}\n {%- elif message.role == \"tool\" %}\n {%- if (loop.index0 == 0) or (messages[loop.index0 - 1].role != \"tool\") %}\n {{- '<|im_start|>user' }}\n {%- endif %}\n {{- '\\n<tool_response>\\n' }}\n {{- message.content }}\n {{- '\\n</tool_response>' }}\n {%- if loop.last or (messages[loop.index0 + 1].role != \"tool\") %}\n {{- '<|im_end|>\\n' }}\n {%- endif %}\n {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n {{- '<|im_start|>assistant\\n' }}\n{%- endif %}\n",
199
+ "clean_up_tokenization_spaces": false,
200
+ "eos_token": "<|im_end|>",
201
+ "errors": "replace",
202
+ "model_max_length": 131072,
203
+ "pad_token": "<|endoftext|>",
204
+ "split_special_tokens": false,
205
+ "tokenizer_class": "Qwen2Tokenizer",
206
+ "unk_token": null
207
+ }
vocab.json ADDED
The diff for this file is too large to render. See raw diff