qianguo sun committed · Commit 102bb9f · 0 parent(s)
.gitattributes ADDED
@@ -0,0 +1,37 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
g_00204000 filter=lfs diff=lfs merge=lfs -text
tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,243 @@
---
{}
---
## UniTTS

### Overview
We introduce UniTTS and [DistilCodec](https://github.com/IDEA-Emdoor-Lab/DistilCodec). DistilCodec is a single-codebook audio codec with 32,768 codes whose codebook utilization reaches nearly 100%. UniTTS leverages DistilCodec for audio discretization, while its backbone network adopts Qwen2.5-7B to model the relationships between audio tokens (see the token-mapping sketch after the list below).

Our main contributions are summarized as follows:

- DistilCodec: We propose a training methodology that distills multi-codebook Neural Audio Codecs (NACs) into single-codebook NACs. Through this approach, we developed DistilCodec, a single-codebook NAC containing 32,768 codes that achieves 100% utilization with a balanced code distribution. Notably, DistilCodec is trained on universal audio data rather than being restricted to speech-specific datasets.
- UniTTS: We present UniTTS, a novel TTS system trained on Qwen2.5-7B and DistilCodec. Leveraging DistilCodec's comprehensive audio modeling capabilities, UniTTS achieves end-to-end speech synthesis with full-spectrum audio input/output. The system demonstrates more natural emotional expressiveness than conventional TTS systems, particularly in capturing subtle prosodic variations and affective nuances during audio generation.
- Novel Audio Language Model Paradigm: We establish a dual-phase Audio Language Model (ALM) training framework comprising (i) audio perceptual modeling (DistilCodec), which focuses purely on acoustic discretization, and (ii) audio cognitive modeling (UniTTS), implemented via pretraining (incorporating universal audio autoregressive tasks), supervised fine-tuning (evaluating the impact of text-audio interleaved prompts), and alignment (employing direct preference optimization for speech refinement); this is enabled by UniTTS's complete end-to-end integration within the LLM.

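The decode step in the TTS example below subtracts a token offset when converting generated LLM tokens back into codec codes. Here is a minimal sketch of that mapping, assuming (per `codec_config.json` in this repo) that the 32,768 audio codes occupy one contiguous id range starting at `token_id_offset`; the contiguous-range layout is an inference from the config, not an official API:

```python
# Sketch of the audio-token mapping implied by codec_config.json.
TOKEN_ID_OFFSET = 152064  # "token_id_offset" in codec_config.json
CODEBOOK_SIZE = 32768     # "codebook_size" in codec_config.json

def code_to_llm_id(code: int) -> int:
    """Map a DistilCodec code to an LLM token id (what a plus-offset step adds)."""
    assert 0 <= code < CODEBOOK_SIZE
    return code + TOKEN_ID_OFFSET

def llm_id_to_code(token_id: int) -> int:
    """Map an LLM token id back to a codec code (what minus_token_offset undoes)."""
    code = token_id - TOKEN_ID_OFFSET
    assert 0 <= code < CODEBOOK_SIZE
    return code
```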
## Training data distribution and application scope
During pretraining, the model was trained on cross-lingual text-speech paired datasets (English and Chinese) alongside text instruction corpora. The subsequent SFT and alignment phases incorporated three datasets: a text instruction dataset, a long-CoT dataset, and a Chinese TTS dataset. Consequently, the model demonstrates robust capabilities in text conversation, long-CoT conversation, and Chinese TTS.

The distribution of the pretraining data is as follows:

| Data Type | Data Size (B) |
|----------------------------|---------------|
| Text Data | 140 |
| Text-Audio Alignment Data | 82 |
| Audio Data | 100 |
| **Total** | **322** |

The distribution of the SFT training data is as follows:

| Data Type | Number of Samples |
|----------------------------|-------------------|
| Text Data | 181K |
| Long-CoT Dataset | 55K |
| Chinese Text-Audio Alignment Data | 401K |
| Total | 637K |

The distribution of the LPO training data is as follows:

| Data Type | Number of Samples |
|----------------------------|-------------------|
| General SFT Data | 100K |
| Long-CoT Dataset | 45K |
| Chinese Text-Audio Alignment Data | 300K |
| Total | 445K |

The model supports the following capabilities:

| Application Type | Support Status |
|----------------------------|-------------------|
| Text conversation | Supported |
| Long-CoT conversation | Supported |
| Chinese TTS | Supported |

## Install
**Clone and Install**

- Clone the repos
```sh
git clone git@github.com:IDEA-Emdoor-Lab/UniTTS.git

git clone git@github.com:IDEA-Emdoor-Lab/DistilCodec.git

cd UniTTS
```

- Set up the environment
```sh
conda create -n unitts -y python=3.10
conda activate unitts
pip install -r requirements.txt
```

**Model Download**

Download via git clone:
```sh
mkdir -p pretrained_models

# Make sure you have git-lfs installed (https://git-lfs.com)
git lfs install

# Clone the UniTTS model
git clone git@hf.co:IDEA-Emdoor/UniTTS-mixed-v0.1
```
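If SSH access is not configured, the same snapshot can also be fetched with the `huggingface_hub` Python client. This is an alternative to the git clone above, not a requirement of the repo; the repo id is taken from the clone command, the local directory is illustrative:

```python
# Alternative download via huggingface_hub instead of a git-lfs clone.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="IDEA-Emdoor/UniTTS-mixed-v0.1",
    local_dir="pretrained_models/UniTTS-mixed-v0.1",  # illustrative path
)
```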

## Inference Usage
### TTS Inference Usage

```python
#### Step 1: Initialize the model

from cli.tokenizer import QWenTokenizer
from cli.tts_tool import enocde_audio, tts_prompt_ref_text
import soundfile as sf
import librosa
from vllm import LLM, SamplingParams

import sys
sys.path.append('../DistilCodec/')  # set the DistilCodec code path
from distil_codec import DistilCodec  # type: ignore

# init model
model_name = "IDEA-Emdoor/UniTTS-mixed-v0.1"
model_config = "IDEA-Emdoor/UniTTS-mixed-v0.1/codec_config.json"
ckpt_config = "IDEA-Emdoor/UniTTS-mixed-v0.1"

ref_audio_path = 'cli/ref.mp3'
ref_text = '求求你,再给我一次机会,我保证不会让你失望……'  # "Please, give me one more chance. I promise I won't let you down..."
infer_text = '天啊!这竟然是真的?我简直不敢相信!'  # "My god! It's actually true? I can hardly believe it!"


llm = LLM(model=model_name, dtype='auto', gpu_memory_utilization=0.8, seed=0)
codec = DistilCodec.from_pretrained(
    config_path=model_config,
    model_path=ckpt_config,
    use_generator=True,
    is_debug=False,
    local_rank=0).eval()

tokenizer: QWenTokenizer = QWenTokenizer(model_name)
stop_tokens = ["<|endoftext|>", "<|endofaudio|>", "<|im_end|>"]
stop_ids = tokenizer.tokenizer.convert_tokens_to_ids(stop_tokens)

#### Step 2: Format the prompt

ref_audio_text = enocde_audio(codec, tokenizer, ref_audio_path)
ref_audio_text = f'<|inter_audio_begin|>{ref_audio_text}<|inter_audio_end|>'
prompt = tts_prompt_ref_text.format(content=infer_text, example_voice=ref_audio_text, example_text=ref_text)

#### Step 3: Generate speech tokens
sampling_params = SamplingParams(temperature=0.9, top_p=0.9, stop_token_ids=stop_ids, max_tokens=6000)
output = llm.generate([prompt], sampling_params)

#### Step 4: Decode speech tokens to audio

output_dir = './'  # save path
tokens = tokenizer.tokenizer.encode(output[0].outputs[0].text)[1:-2]
utt = 'infer'
y_gen = codec.decode_from_codes(
    tokens,
    minus_token_offset=True  # if 'plus_llm_offset' of demo_for_generate_audio_codes was set to True, minus_token_offset must be True
)
codec.save_wav(
    audio_gen_batch=y_gen,
    nhop_lengths=[y_gen.shape[-1]],
    save_path=output_dir,
    name_tag=utt
)
```
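For convenience, Steps 2-4 above can be wrapped into a single helper that clones one reference voice for several texts. This is a sketch reusing the objects from Step 1 (`llm`, `codec`, `tokenizer`, `stop_ids`, `tts_prompt_ref_text`, `enocde_audio`), not a function shipped with the repo:

```python
def synthesize(texts, ref_audio_path, ref_text, save_dir='./'):
    """Batch TTS: runs Steps 2-4 from the example above for a list of texts."""
    ref = enocde_audio(codec, tokenizer, ref_audio_path)
    ref = f'<|inter_audio_begin|>{ref}<|inter_audio_end|>'
    prompts = [tts_prompt_ref_text.format(content=t, example_voice=ref, example_text=ref_text)
               for t in texts]
    params = SamplingParams(temperature=0.9, top_p=0.9,
                            stop_token_ids=stop_ids, max_tokens=6000)
    for i, out in enumerate(llm.generate(prompts, params)):  # vLLM batches the prompts internally
        codes = tokenizer.tokenizer.encode(out.outputs[0].text)[1:-2]
        wav = codec.decode_from_codes(codes, minus_token_offset=True)
        codec.save_wav(audio_gen_batch=wav, nhop_lengths=[wav.shape[-1]],
                       save_path=save_dir, name_tag=f'utt_{i}')

synthesize([infer_text], ref_audio_path, ref_text)
```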

### Long-CoT Inference Usage
```python
#### Step 1: Initialize the model

from cli.tokenizer import QWenTokenizer
from cli.tts_tool import enocde_audio, long_cot_prompt_template
from vllm import LLM, SamplingParams


# init model
model_name = "IDEA-Emdoor/UniTTS-mixed-v0.1"
infer_text = "给我写一首春天的作文"  # "Write me an essay about spring"
llm = LLM(model=model_name, dtype='auto', gpu_memory_utilization=0.8, seed=0)

tokenizer: QWenTokenizer = QWenTokenizer(model_name)
stop_tokens = ["<|endoftext|>", "<|endofaudio|>", "<|im_end|>"]
stop_ids = tokenizer.tokenizer.convert_tokens_to_ids(stop_tokens)

#### Step 2: Format the prompt

prompt = long_cot_prompt_template.format(question=infer_text)

#### Step 3: Generate the response
sampling_params = SamplingParams(temperature=0.8, top_p=0.8, stop_token_ids=stop_ids, max_tokens=6000)
output = llm.generate([prompt], sampling_params)

print(output[0].outputs[0].text)
```

### Text Conversation Inference Usage

```python
#### Step 1: Initialize the model

from cli.tokenizer import QWenTokenizer
from cli.tts_tool import enocde_audio, text_conversation_prompt_template
from vllm import LLM, SamplingParams


# init model
model_name = "IDEA-Emdoor/UniTTS-mixed-v0.1"

infer_text = "天空为什么是蓝色的?"  # "Why is the sky blue?"
llm = LLM(model=model_name, dtype='auto', gpu_memory_utilization=0.8, seed=0)

tokenizer: QWenTokenizer = QWenTokenizer(model_name)
stop_tokens = ["<|endoftext|>", "<|endofaudio|>", "<|im_end|>"]
stop_ids = tokenizer.tokenizer.convert_tokens_to_ids(stop_tokens)

#### Step 2: Format the prompt

prompt = text_conversation_prompt_template.format(question=infer_text)

#### Step 3: Generate the response
sampling_params = SamplingParams(temperature=0.75, top_p=0.75, stop_token_ids=stop_ids, max_tokens=6000)
output = llm.generate([prompt], sampling_params)

print(output[0].outputs[0].text)
```

## Citation
```bibtex
@misc{wang2025unittsendtoendttsdecoupling,
      title={UniTTS: An end-to-end TTS system without decoupling of acoustic and semantic information},
      author={Rui Wang and Qianguo Sun and Tianrong Chen and Zhiyun Zeng and Junlong Wu and Jiaxing Zhang},
      year={2025},
      eprint={2505.17426},
      archivePrefix={arXiv},
      primaryClass={cs.SD},
      url={https://arxiv.org/abs/2505.17426},
}
```

## Disclaimer

Our model provides zero-shot voice cloning for academic research purposes only. We encourage the community to uphold safety and ethical principles in AI research and applications.

Important Notes:

- Compliance with the model's open-source license is mandatory.

- Unauthorized voice replication applications are strictly prohibited.

- The developers bear no responsibility for any misuse of this model.


## License
<a href="https://arxiv.org/abs/2505.17426">UniTTS: An end-to-end TTS system without decoupling of acoustic and semantic information</a> © 2025 by <a href="https://creativecommons.org">Rui Wang, Qianguo Sun, Tianrong Chen, Zhiyun Zeng, Junlong Wu, Jiaxing Zhang</a> is licensed under <a href="https://creativecommons.org/licenses/by-nc-nd/4.0/">CC BY-NC-ND 4.0</a> <img src="https://mirrors.creativecommons.org/presskit/icons/cc.svg" style="max-width: 1em;max-height:1em;margin-left: .2em;"><img src="https://mirrors.creativecommons.org/presskit/icons/by.svg" style="max-width: 1em;max-height:1em;margin-left: .2em;"><img src="https://mirrors.creativecommons.org/presskit/icons/nc.svg" style="max-width: 1em;max-height:1em;margin-left: .2em;"><img src="https://mirrors.creativecommons.org/presskit/icons/nd.svg" style="max-width: 1em;max-height:1em;margin-left: .2em;">
codec_config.json ADDED
@@ -0,0 +1,209 @@
{
    "summary": {
        "quantizer_dim": 3584,
        "codebook_per_group_per_residual": 3584,
        "group": 1,
        "residual": 1,
        "original_residual_dim": 1024,
        "codebook_upsample": 3.5,
        "codebook_dim": 3584
    },
    "base_model": "QWen2.5-7B-Pretrain",
    "token_id_offset": 152064,
    "spec_transform": {
        "sampling_rate": 24000,
        "segment_size": 72000,
        "num_mels": 128,
        "n_fft": 1024,
        "hop_size": 256,
        "win_size": 1024,
        "fmin": 0,
        "fmax": 12000,
        "fmax_loss": null
    },
    "encoder": {
        "input_channels": 128,
        "depths": [3, 3, 9, 3],
        "dims": [256, 512, 768, 1024],
        "drop_path_rate": 0.2,
        "kernel_size": 7
    },
    "decoder": {
        "hop_length": 256,
        "upsample_rates": [8, 4, 2, 2, 2],
        "upsample_kernel_sizes": [16, 12, 4, 4, 4],
        "resblock_kernel_sizes": [3, 7, 11],
        "resblock_dilation_sizes": [[1, 3, 5], [1, 3, 5], [1, 3, 5]],
        "num_mels": 1024,
        "upsample_initial_channel": 1024,
        "use_template": false,
        "pre_conv_kernel_size": 13,
        "post_conv_kernel_size": 13
    },
    "quantizer": {
        "quantizer_type": "grvq",
        "input_dim": 1024,
        "n_groups": 1,
        "n_codebooks": 1,
        "codebook_size": 32768,
        "codebook_dim": 3584,
        "levels": [8, 5, 5, 5],
        "downsample_factor": [1],
        "ema_decay": 0.8,
        "codebook_diversity_loss_weight": 1.0,
        "codebook_diversity_temperature": 100.0
    },
    "teacher_quantizer": {
        "quantizer_type": "grvq",
        "input_dim": 1024,
        "n_groups": 2,
        "n_codebooks": 1,
        "codebook_size": 32768,
        "codebook_dim": 3584,
        "levels": [8, 5, 5, 5],
        "downsample_factor": [2],
        "ema_decay": 0.8,
        "codebook_diversity_loss_weight": 1.0,
        "codebook_diversity_temperature": 100.0
    },
    "descriminators": {
        "MultiPeriodDiscriminator": {
            "periods": [5, 8, 13, 19, 30],
            "kernal_size": 5,
            "stride": 3
        },
        "MultiScaleDiscriminator": {
            "avg_poolings": {
                "kernal_sizes": [6, 6],
                "stridess": [3, 3],
                "paddings": [3, 3]
            },
            "DiscriminatorS": {
                "kernal_sizes": [21, 61, 61, 61, 61, 61, 7],
                "strides": [1, 3, 3, 6, 6, 1, 1],
                "paddings": [10, 30, 30, 30, 30, 30, 3]
            }
        },
        "MultiScaleSTFTDiscriminator": {
            "n_ffts": [1024, 2048, 512, 256, 128],
            "hop_lengths": [256, 512, 128, 64, 32],
            "win_lengths": [1024, 2048, 512, 256, 128],
            "filters": 32,
            "in_channels": 1,
            "out_channels": 1
        }
    }
}
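For orientation, the fields above let you estimate the codec's token rate. A rough sanity check, assuming one code per mel frame (hop_size 256 at 24 kHz) divided by the quantizer's downsample factor; this is an assumption about how the config relates to the frame rate, not an official formula from the repo:

```python
import json

# Estimate DistilCodec's audio-token rate from codec_config.json.
cfg = json.load(open("codec_config.json"))
sr = cfg["spec_transform"]["sampling_rate"]       # 24000
hop = cfg["spec_transform"]["hop_size"]           # 256
down = cfg["quantizer"]["downsample_factor"][0]   # 1 for the student quantizer

print(f"~{sr / hop / down:.2f} audio tokens per second")  # ~93.75 under these assumptions
```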
config.json ADDED
@@ -0,0 +1,30 @@
{
  "_name_or_path": "/cognitive_comp/ccnl_common_data/wangrui/alm_sft_training/20250410/train/checkpoint/xpo-mcore-qwen2.5-7B-lr-8e-7-minlr-5e-7-bs-6-gbs-120-seqlen-4096-pr-bf16-tp-2-pp-4-cp-1-ac-false-do-true-sp-true-ti-18000-wi-66/iter_00020500_hf",
  "architectures": [
    "Qwen2Model"
  ],
  "attention_dropout": 0.0,
  "bos_token_id": 151643,
  "eos_token_id": 151643,
  "hidden_act": "silu",
  "hidden_size": 3584,
  "initializer_range": 0.02,
  "intermediate_size": 18944,
  "max_position_embeddings": 8192,
  "max_window_layers": 28,
  "model_type": "qwen2",
  "num_attention_heads": 28,
  "num_hidden_layers": 28,
  "num_key_value_heads": 4,
  "rms_norm_eps": 1e-06,
  "rope_scaling": null,
  "rope_theta": 1000000.0,
  "sliding_window": null,
  "tie_word_embeddings": false,
  "torch_dtype": "float32",
  "transformers_version": "4.48.3",
  "use_cache": true,
  "use_mrope": false,
  "use_sliding_window": false,
  "vocab_size": 184840
}
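The `vocab_size` here is consistent with the audio-token layout described in the README: the Qwen2.5 text vocabulary (up to `token_id_offset` = 152064) plus the 32,768 DistilCodec codes leaves 8 ids for special audio tokens such as `<|inter_audio_begin|>`. A small check of that arithmetic (an observation, not an official script):

```python
import json

llm_cfg = json.load(open("config.json"))
codec_cfg = json.load(open("codec_config.json"))

text_ids = codec_cfg["token_id_offset"]              # 152064: where audio ids start
audio_ids = codec_cfg["quantizer"]["codebook_size"]  # 32768
extra = llm_cfg["vocab_size"] - text_ids - audio_ids
print(f"special-token slots: {extra}")  # 8, presumably <|endofaudio|> and friends
```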
g_00204000 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:097fcfee379183ce7d02a610bb1a8eba080d7ea4972c104f608e5c561940d44f
size 1625057395
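Files like this one are Git LFS pointer stubs (a version line, a sha256 oid, and a byte size) rather than the payload itself; `git lfs install` plus a clone fetches the real content. A minimal illustrative parser for the three-line format:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into {'version', 'oid', 'size'}."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    return {"version": fields["version"],
            "oid": fields["oid"].removeprefix("sha256:"),
            "size": int(fields["size"])}
```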
generation_config.json ADDED
@@ -0,0 +1,6 @@
{
  "_from_model_config": true,
  "bos_token_id": 151643,
  "eos_token_id": 151643,
  "transformers_version": "4.46.3"
}
merges.txt ADDED
The diff for this file is too large to render.
 
model-00001-of-00016.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cd6a3131b068db092f33df8a85ff132b074272e3b80a1eb70a72a8a66e2b3b5f
size 2649866376
model-00002-of-00016.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8ce3bb32bc2c6bdee06ed879df19537ae855a81f436fd87c00cfde6d5fd9a3c1
size 1981924776
model-00003-of-00016.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bcb0bafca04c2c2bcbecd6086fa9b6c0ec3319610e1f8e10da7de2032d0676ea
size 1864465064
model-00004-of-00016.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:876397c2d0d7241776c25cdc747c233b041559b3a80202ddeb0d8c11c5969a20
size 1864465064
model-00005-of-00016.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a88f7f74224d730593ac65ad35d2939ac8374f9c3c3ff7620de49f877b8352af
size 1864465064
model-00006-of-00016.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ea87aa68c358bfe729d02d56520cf52c71409294a772eb8ab6e7c67168dde60f
size 1864465048
model-00007-of-00016.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:eda6d74cb0830a1a5f2dff89fd5ea1fa625d05a62bb4962dde8b99bc985d1a24
size 1864465088
model-00008-of-00016.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b4b6f7ed16611d1db372087fc4df20500a9694db8d4768eb21fb20f4b4b61bf5
size 1864465088
model-00009-of-00016.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:192d5320e2cd7f490d57b73c67e49f58a266bd4e6f0bed325a236e1cde55a069
size 1864465088
model-00010-of-00016.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b8661439722241cd0dfa2c64c74c44f63a400edc55e6eb7fe4b66bdedfc66e48
size 1864465088
model-00011-of-00016.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a389f6ccc791b5109e68506e58403c626a2286821a607902f4e5e2f512f359f4
size 1864465088
model-00012-of-00016.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:dfac48d7d98a8cf3151155ea98df1342b36fecbbf7daf9117a92bfe55eef9e13
size 1864465088
model-00013-of-00016.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bd6d6dfa8697e7faa6ad9d61ae5af6da6be75dbb947aef015125bb2c77c7232a
size 1864465088
model-00014-of-00016.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9fde867d7cd6b080ed431854c96c2f0bf1fb52dbea7a7f26f509d256cbdbd52d
size 1864465088
model-00015-of-00016.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f9de00d1d0c4de487a9dee1ad5dcd4c5abec6fdc5079ee2a5633e450be42f427
size 2649866368
model-00016-of-00016.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:730dc6f35bc065723b196e5bda034943db4fdb2d1bcee8fa55101277062ef0a5
size 1747019776
model.safetensors.index.json ADDED
@@ -0,0 +1,346 @@
{
  "metadata": {
    "total_size": 31402219520
  },
  "weight_map": {
    "lm_head.weight": "model-00015-of-00016.safetensors",
    "model.embed_tokens.weight": "model-00001-of-00016.safetensors",
    "model.layers.0.input_layernorm.weight": "model-00002-of-00016.safetensors",
    "model.layers.0.mlp.down_proj.weight": "model-00002-of-00016.safetensors",
    "model.layers.0.mlp.gate_proj.weight": "model-00002-of-00016.safetensors",
    "model.layers.0.mlp.up_proj.weight": "model-00002-of-00016.safetensors",
    "model.layers.0.post_attention_layernorm.weight": "model-00002-of-00016.safetensors",
    "model.layers.0.self_attn.k_proj.bias": "model-00002-of-00016.safetensors",
    "model.layers.0.self_attn.k_proj.weight": "model-00002-of-00016.safetensors",
    "model.layers.0.self_attn.o_proj.weight": "model-00002-of-00016.safetensors",
    "model.layers.0.self_attn.q_proj.bias": "model-00002-of-00016.safetensors",
    "model.layers.0.self_attn.q_proj.weight": "model-00002-of-00016.safetensors",
    "model.layers.0.self_attn.v_proj.bias": "model-00002-of-00016.safetensors",
    "model.layers.0.self_attn.v_proj.weight": "model-00002-of-00016.safetensors",
    "model.layers.1.input_layernorm.weight": "model-00002-of-00016.safetensors",
    "model.layers.1.mlp.down_proj.weight": "model-00002-of-00016.safetensors",
    "model.layers.1.mlp.gate_proj.weight": "model-00002-of-00016.safetensors",
    "model.layers.1.mlp.up_proj.weight": "model-00002-of-00016.safetensors",
    "model.layers.1.post_attention_layernorm.weight": "model-00002-of-00016.safetensors",
    "model.layers.1.self_attn.k_proj.bias": "model-00002-of-00016.safetensors",
    "model.layers.1.self_attn.k_proj.weight": "model-00002-of-00016.safetensors",
    "model.layers.1.self_attn.o_proj.weight": "model-00002-of-00016.safetensors",
    "model.layers.1.self_attn.q_proj.bias": "model-00002-of-00016.safetensors",
    "model.layers.1.self_attn.q_proj.weight": "model-00002-of-00016.safetensors",
    "model.layers.1.self_attn.v_proj.bias": "model-00002-of-00016.safetensors",
    "model.layers.1.self_attn.v_proj.weight": "model-00002-of-00016.safetensors",
    "model.layers.10.input_layernorm.weight": "model-00007-of-00016.safetensors",
    "model.layers.10.mlp.down_proj.weight": "model-00007-of-00016.safetensors",
    "model.layers.10.mlp.gate_proj.weight": "model-00007-of-00016.safetensors",
    "model.layers.10.mlp.up_proj.weight": "model-00007-of-00016.safetensors",
    "model.layers.10.post_attention_layernorm.weight": "model-00007-of-00016.safetensors",
    "model.layers.10.self_attn.k_proj.bias": "model-00006-of-00016.safetensors",
    "model.layers.10.self_attn.k_proj.weight": "model-00006-of-00016.safetensors",
    "model.layers.10.self_attn.o_proj.weight": "model-00006-of-00016.safetensors",
    "model.layers.10.self_attn.q_proj.bias": "model-00006-of-00016.safetensors",
    "model.layers.10.self_attn.q_proj.weight": "model-00006-of-00016.safetensors",
    "model.layers.10.self_attn.v_proj.bias": "model-00006-of-00016.safetensors",
    "model.layers.10.self_attn.v_proj.weight": "model-00006-of-00016.safetensors",
    "model.layers.11.input_layernorm.weight": "model-00007-of-00016.safetensors",
    "model.layers.11.mlp.down_proj.weight": "model-00007-of-00016.safetensors",
    "model.layers.11.mlp.gate_proj.weight": "model-00007-of-00016.safetensors",
    "model.layers.11.mlp.up_proj.weight": "model-00007-of-00016.safetensors",
    "model.layers.11.post_attention_layernorm.weight": "model-00007-of-00016.safetensors",
    "model.layers.11.self_attn.k_proj.bias": "model-00007-of-00016.safetensors",
    "model.layers.11.self_attn.k_proj.weight": "model-00007-of-00016.safetensors",
    "model.layers.11.self_attn.o_proj.weight": "model-00007-of-00016.safetensors",
    "model.layers.11.self_attn.q_proj.bias": "model-00007-of-00016.safetensors",
    "model.layers.11.self_attn.q_proj.weight": "model-00007-of-00016.safetensors",
    "model.layers.11.self_attn.v_proj.bias": "model-00007-of-00016.safetensors",
    "model.layers.11.self_attn.v_proj.weight": "model-00007-of-00016.safetensors",
    "model.layers.12.input_layernorm.weight": "model-00008-of-00016.safetensors",
    "model.layers.12.mlp.down_proj.weight": "model-00008-of-00016.safetensors",
    "model.layers.12.mlp.gate_proj.weight": "model-00008-of-00016.safetensors",
    "model.layers.12.mlp.up_proj.weight": "model-00008-of-00016.safetensors",
    "model.layers.12.post_attention_layernorm.weight": "model-00008-of-00016.safetensors",
    "model.layers.12.self_attn.k_proj.bias": "model-00007-of-00016.safetensors",
    "model.layers.12.self_attn.k_proj.weight": "model-00007-of-00016.safetensors",
    "model.layers.12.self_attn.o_proj.weight": "model-00007-of-00016.safetensors",
    "model.layers.12.self_attn.q_proj.bias": "model-00007-of-00016.safetensors",
    "model.layers.12.self_attn.q_proj.weight": "model-00007-of-00016.safetensors",
    "model.layers.12.self_attn.v_proj.bias": "model-00007-of-00016.safetensors",
    "model.layers.12.self_attn.v_proj.weight": "model-00007-of-00016.safetensors",
    "model.layers.13.input_layernorm.weight": "model-00008-of-00016.safetensors",
    "model.layers.13.mlp.down_proj.weight": "model-00008-of-00016.safetensors",
    "model.layers.13.mlp.gate_proj.weight": "model-00008-of-00016.safetensors",
    "model.layers.13.mlp.up_proj.weight": "model-00008-of-00016.safetensors",
    "model.layers.13.post_attention_layernorm.weight": "model-00008-of-00016.safetensors",
    "model.layers.13.self_attn.k_proj.bias": "model-00008-of-00016.safetensors",
    "model.layers.13.self_attn.k_proj.weight": "model-00008-of-00016.safetensors",
    "model.layers.13.self_attn.o_proj.weight": "model-00008-of-00016.safetensors",
    "model.layers.13.self_attn.q_proj.bias": "model-00008-of-00016.safetensors",
    "model.layers.13.self_attn.q_proj.weight": "model-00008-of-00016.safetensors",
    "model.layers.13.self_attn.v_proj.bias": "model-00008-of-00016.safetensors",
    "model.layers.13.self_attn.v_proj.weight": "model-00008-of-00016.safetensors",
    "model.layers.14.input_layernorm.weight": "model-00009-of-00016.safetensors",
    "model.layers.14.mlp.down_proj.weight": "model-00009-of-00016.safetensors",
    "model.layers.14.mlp.gate_proj.weight": "model-00009-of-00016.safetensors",
    "model.layers.14.mlp.up_proj.weight": "model-00009-of-00016.safetensors",
    "model.layers.14.post_attention_layernorm.weight": "model-00009-of-00016.safetensors",
    "model.layers.14.self_attn.k_proj.bias": "model-00008-of-00016.safetensors",
    "model.layers.14.self_attn.k_proj.weight": "model-00008-of-00016.safetensors",
    "model.layers.14.self_attn.o_proj.weight": "model-00008-of-00016.safetensors",
    "model.layers.14.self_attn.q_proj.bias": "model-00008-of-00016.safetensors",
    "model.layers.14.self_attn.q_proj.weight": "model-00008-of-00016.safetensors",
    "model.layers.14.self_attn.v_proj.bias": "model-00008-of-00016.safetensors",
    "model.layers.14.self_attn.v_proj.weight": "model-00008-of-00016.safetensors",
    "model.layers.15.input_layernorm.weight": "model-00009-of-00016.safetensors",
    "model.layers.15.mlp.down_proj.weight": "model-00009-of-00016.safetensors",
    "model.layers.15.mlp.gate_proj.weight": "model-00009-of-00016.safetensors",
    "model.layers.15.mlp.up_proj.weight": "model-00009-of-00016.safetensors",
    "model.layers.15.post_attention_layernorm.weight": "model-00009-of-00016.safetensors",
    "model.layers.15.self_attn.k_proj.bias": "model-00009-of-00016.safetensors",
    "model.layers.15.self_attn.k_proj.weight": "model-00009-of-00016.safetensors",
    "model.layers.15.self_attn.o_proj.weight": "model-00009-of-00016.safetensors",
    "model.layers.15.self_attn.q_proj.bias": "model-00009-of-00016.safetensors",
    "model.layers.15.self_attn.q_proj.weight": "model-00009-of-00016.safetensors",
    "model.layers.15.self_attn.v_proj.bias": "model-00009-of-00016.safetensors",
    "model.layers.15.self_attn.v_proj.weight": "model-00009-of-00016.safetensors",
    "model.layers.16.input_layernorm.weight": "model-00010-of-00016.safetensors",
    "model.layers.16.mlp.down_proj.weight": "model-00010-of-00016.safetensors",
    "model.layers.16.mlp.gate_proj.weight": "model-00010-of-00016.safetensors",
    "model.layers.16.mlp.up_proj.weight": "model-00010-of-00016.safetensors",
    "model.layers.16.post_attention_layernorm.weight": "model-00010-of-00016.safetensors",
    "model.layers.16.self_attn.k_proj.bias": "model-00009-of-00016.safetensors",
    "model.layers.16.self_attn.k_proj.weight": "model-00009-of-00016.safetensors",
    "model.layers.16.self_attn.o_proj.weight": "model-00009-of-00016.safetensors",
    "model.layers.16.self_attn.q_proj.bias": "model-00009-of-00016.safetensors",
    "model.layers.16.self_attn.q_proj.weight": "model-00009-of-00016.safetensors",
    "model.layers.16.self_attn.v_proj.bias": "model-00009-of-00016.safetensors",
    "model.layers.16.self_attn.v_proj.weight": "model-00009-of-00016.safetensors",
    "model.layers.17.input_layernorm.weight": "model-00010-of-00016.safetensors",
    "model.layers.17.mlp.down_proj.weight": "model-00010-of-00016.safetensors",
    "model.layers.17.mlp.gate_proj.weight": "model-00010-of-00016.safetensors",
    "model.layers.17.mlp.up_proj.weight": "model-00010-of-00016.safetensors",
    "model.layers.17.post_attention_layernorm.weight": "model-00010-of-00016.safetensors",
    "model.layers.17.self_attn.k_proj.bias": "model-00010-of-00016.safetensors",
    "model.layers.17.self_attn.k_proj.weight": "model-00010-of-00016.safetensors",
    "model.layers.17.self_attn.o_proj.weight": "model-00010-of-00016.safetensors",
    "model.layers.17.self_attn.q_proj.bias": "model-00010-of-00016.safetensors",
    "model.layers.17.self_attn.q_proj.weight": "model-00010-of-00016.safetensors",
    "model.layers.17.self_attn.v_proj.bias": "model-00010-of-00016.safetensors",
    "model.layers.17.self_attn.v_proj.weight": "model-00010-of-00016.safetensors",
    "model.layers.18.input_layernorm.weight": "model-00011-of-00016.safetensors",
    "model.layers.18.mlp.down_proj.weight": "model-00011-of-00016.safetensors",
    "model.layers.18.mlp.gate_proj.weight": "model-00011-of-00016.safetensors",
    "model.layers.18.mlp.up_proj.weight": "model-00011-of-00016.safetensors",
    "model.layers.18.post_attention_layernorm.weight": "model-00011-of-00016.safetensors",
    "model.layers.18.self_attn.k_proj.bias": "model-00010-of-00016.safetensors",
    "model.layers.18.self_attn.k_proj.weight": "model-00010-of-00016.safetensors",
    "model.layers.18.self_attn.o_proj.weight": "model-00010-of-00016.safetensors",
    "model.layers.18.self_attn.q_proj.bias": "model-00010-of-00016.safetensors",
    "model.layers.18.self_attn.q_proj.weight": "model-00010-of-00016.safetensors",
    "model.layers.18.self_attn.v_proj.bias": "model-00010-of-00016.safetensors",
    "model.layers.18.self_attn.v_proj.weight": "model-00010-of-00016.safetensors",
    "model.layers.19.input_layernorm.weight": "model-00011-of-00016.safetensors",
    "model.layers.19.mlp.down_proj.weight": "model-00011-of-00016.safetensors",
    "model.layers.19.mlp.gate_proj.weight": "model-00011-of-00016.safetensors",
    "model.layers.19.mlp.up_proj.weight": "model-00011-of-00016.safetensors",
    "model.layers.19.post_attention_layernorm.weight": "model-00011-of-00016.safetensors",
    "model.layers.19.self_attn.k_proj.bias": "model-00011-of-00016.safetensors",
    "model.layers.19.self_attn.k_proj.weight": "model-00011-of-00016.safetensors",
    "model.layers.19.self_attn.o_proj.weight": "model-00011-of-00016.safetensors",
    "model.layers.19.self_attn.q_proj.bias": "model-00011-of-00016.safetensors",
    "model.layers.19.self_attn.q_proj.weight": "model-00011-of-00016.safetensors",
    "model.layers.19.self_attn.v_proj.bias": "model-00011-of-00016.safetensors",
    "model.layers.19.self_attn.v_proj.weight": "model-00011-of-00016.safetensors",
    "model.layers.2.input_layernorm.weight": "model-00003-of-00016.safetensors",
    "model.layers.2.mlp.down_proj.weight": "model-00003-of-00016.safetensors",
    "model.layers.2.mlp.gate_proj.weight": "model-00003-of-00016.safetensors",
    "model.layers.2.mlp.up_proj.weight": "model-00003-of-00016.safetensors",
    "model.layers.2.post_attention_layernorm.weight": "model-00003-of-00016.safetensors",
    "model.layers.2.self_attn.k_proj.bias": "model-00002-of-00016.safetensors",
    "model.layers.2.self_attn.k_proj.weight": "model-00002-of-00016.safetensors",
    "model.layers.2.self_attn.o_proj.weight": "model-00002-of-00016.safetensors",
    "model.layers.2.self_attn.q_proj.bias": "model-00002-of-00016.safetensors",
    "model.layers.2.self_attn.q_proj.weight": "model-00002-of-00016.safetensors",
    "model.layers.2.self_attn.v_proj.bias": "model-00002-of-00016.safetensors",
    "model.layers.2.self_attn.v_proj.weight": "model-00002-of-00016.safetensors",
    "model.layers.20.input_layernorm.weight": "model-00012-of-00016.safetensors",
    "model.layers.20.mlp.down_proj.weight": "model-00012-of-00016.safetensors",
    "model.layers.20.mlp.gate_proj.weight": "model-00012-of-00016.safetensors",
    "model.layers.20.mlp.up_proj.weight": "model-00012-of-00016.safetensors",
    "model.layers.20.post_attention_layernorm.weight": "model-00012-of-00016.safetensors",
    "model.layers.20.self_attn.k_proj.bias": "model-00011-of-00016.safetensors",
    "model.layers.20.self_attn.k_proj.weight": "model-00011-of-00016.safetensors",
    "model.layers.20.self_attn.o_proj.weight": "model-00011-of-00016.safetensors",
    "model.layers.20.self_attn.q_proj.bias": "model-00011-of-00016.safetensors",
    "model.layers.20.self_attn.q_proj.weight": "model-00011-of-00016.safetensors",
    "model.layers.20.self_attn.v_proj.bias": "model-00011-of-00016.safetensors",
    "model.layers.20.self_attn.v_proj.weight": "model-00011-of-00016.safetensors",
    "model.layers.21.input_layernorm.weight": "model-00012-of-00016.safetensors",
    "model.layers.21.mlp.down_proj.weight": "model-00012-of-00016.safetensors",
    "model.layers.21.mlp.gate_proj.weight": "model-00012-of-00016.safetensors",
    "model.layers.21.mlp.up_proj.weight": "model-00012-of-00016.safetensors",
    "model.layers.21.post_attention_layernorm.weight": "model-00012-of-00016.safetensors",
    "model.layers.21.self_attn.k_proj.bias": "model-00012-of-00016.safetensors",
    "model.layers.21.self_attn.k_proj.weight": "model-00012-of-00016.safetensors",
    "model.layers.21.self_attn.o_proj.weight": "model-00012-of-00016.safetensors",
    "model.layers.21.self_attn.q_proj.bias": "model-00012-of-00016.safetensors",
    "model.layers.21.self_attn.q_proj.weight": "model-00012-of-00016.safetensors",
    "model.layers.21.self_attn.v_proj.bias": "model-00012-of-00016.safetensors",
    "model.layers.21.self_attn.v_proj.weight": "model-00012-of-00016.safetensors",
    "model.layers.22.input_layernorm.weight": "model-00013-of-00016.safetensors",
    "model.layers.22.mlp.down_proj.weight": "model-00013-of-00016.safetensors",
    "model.layers.22.mlp.gate_proj.weight": "model-00013-of-00016.safetensors",
    "model.layers.22.mlp.up_proj.weight": "model-00013-of-00016.safetensors",
    "model.layers.22.post_attention_layernorm.weight": "model-00013-of-00016.safetensors",
    "model.layers.22.self_attn.k_proj.bias": "model-00012-of-00016.safetensors",
    "model.layers.22.self_attn.k_proj.weight": "model-00012-of-00016.safetensors",
    "model.layers.22.self_attn.o_proj.weight": "model-00012-of-00016.safetensors",
    "model.layers.22.self_attn.q_proj.bias": "model-00012-of-00016.safetensors",
    "model.layers.22.self_attn.q_proj.weight": "model-00012-of-00016.safetensors",
    "model.layers.22.self_attn.v_proj.bias": "model-00012-of-00016.safetensors",
    "model.layers.22.self_attn.v_proj.weight": "model-00012-of-00016.safetensors",
    "model.layers.23.input_layernorm.weight": "model-00013-of-00016.safetensors",
    "model.layers.23.mlp.down_proj.weight": "model-00013-of-00016.safetensors",
    "model.layers.23.mlp.gate_proj.weight": "model-00013-of-00016.safetensors",
    "model.layers.23.mlp.up_proj.weight": "model-00013-of-00016.safetensors",
    "model.layers.23.post_attention_layernorm.weight": "model-00013-of-00016.safetensors",
    "model.layers.23.self_attn.k_proj.bias": "model-00013-of-00016.safetensors",
    "model.layers.23.self_attn.k_proj.weight": "model-00013-of-00016.safetensors",
    "model.layers.23.self_attn.o_proj.weight": "model-00013-of-00016.safetensors",
    "model.layers.23.self_attn.q_proj.bias": "model-00013-of-00016.safetensors",
    "model.layers.23.self_attn.q_proj.weight": "model-00013-of-00016.safetensors",
    "model.layers.23.self_attn.v_proj.bias": "model-00013-of-00016.safetensors",
    "model.layers.23.self_attn.v_proj.weight": "model-00013-of-00016.safetensors",
    "model.layers.24.input_layernorm.weight": "model-00014-of-00016.safetensors",
    "model.layers.24.mlp.down_proj.weight": "model-00014-of-00016.safetensors",
    "model.layers.24.mlp.gate_proj.weight": "model-00014-of-00016.safetensors",
    "model.layers.24.mlp.up_proj.weight": "model-00014-of-00016.safetensors",
    "model.layers.24.post_attention_layernorm.weight": "model-00014-of-00016.safetensors",
    "model.layers.24.self_attn.k_proj.bias": "model-00013-of-00016.safetensors",
    "model.layers.24.self_attn.k_proj.weight": "model-00013-of-00016.safetensors",
    "model.layers.24.self_attn.o_proj.weight": "model-00013-of-00016.safetensors",
    "model.layers.24.self_attn.q_proj.bias": "model-00013-of-00016.safetensors",
    "model.layers.24.self_attn.q_proj.weight": "model-00013-of-00016.safetensors",
    "model.layers.24.self_attn.v_proj.bias": "model-00013-of-00016.safetensors",
    "model.layers.24.self_attn.v_proj.weight": "model-00013-of-00016.safetensors",
    "model.layers.25.input_layernorm.weight": "model-00014-of-00016.safetensors",
    "model.layers.25.mlp.down_proj.weight": "model-00014-of-00016.safetensors",
    "model.layers.25.mlp.gate_proj.weight": "model-00014-of-00016.safetensors",
    "model.layers.25.mlp.up_proj.weight": "model-00014-of-00016.safetensors",
    "model.layers.25.post_attention_layernorm.weight": "model-00014-of-00016.safetensors",
    "model.layers.25.self_attn.k_proj.bias": "model-00014-of-00016.safetensors",
    "model.layers.25.self_attn.k_proj.weight": "model-00014-of-00016.safetensors",
    "model.layers.25.self_attn.o_proj.weight": "model-00014-of-00016.safetensors",
    "model.layers.25.self_attn.q_proj.bias": "model-00014-of-00016.safetensors",
    "model.layers.25.self_attn.q_proj.weight": "model-00014-of-00016.safetensors",
    "model.layers.25.self_attn.v_proj.bias": "model-00014-of-00016.safetensors",
    "model.layers.25.self_attn.v_proj.weight": "model-00014-of-00016.safetensors",
    "model.layers.26.input_layernorm.weight": "model-00016-of-00016.safetensors",
    "model.layers.26.mlp.down_proj.weight": "model-00016-of-00016.safetensors",
    "model.layers.26.mlp.gate_proj.weight": "model-00016-of-00016.safetensors",
    "model.layers.26.mlp.up_proj.weight": "model-00016-of-00016.safetensors",
    "model.layers.26.post_attention_layernorm.weight": "model-00016-of-00016.safetensors",
    "model.layers.26.self_attn.k_proj.bias": "model-00014-of-00016.safetensors",
    "model.layers.26.self_attn.k_proj.weight": "model-00014-of-00016.safetensors",
    "model.layers.26.self_attn.o_proj.weight": "model-00014-of-00016.safetensors",
    "model.layers.26.self_attn.q_proj.bias": "model-00014-of-00016.safetensors",
    "model.layers.26.self_attn.q_proj.weight": "model-00014-of-00016.safetensors",
    "model.layers.26.self_attn.v_proj.bias": "model-00014-of-00016.safetensors",
    "model.layers.26.self_attn.v_proj.weight": "model-00014-of-00016.safetensors",
    "model.layers.27.input_layernorm.weight": "model-00016-of-00016.safetensors",
    "model.layers.27.mlp.down_proj.weight": "model-00016-of-00016.safetensors",
    "model.layers.27.mlp.gate_proj.weight": "model-00016-of-00016.safetensors",
    "model.layers.27.mlp.up_proj.weight": "model-00016-of-00016.safetensors",
    "model.layers.27.post_attention_layernorm.weight": "model-00016-of-00016.safetensors",
    "model.layers.27.self_attn.k_proj.bias": "model-00016-of-00016.safetensors",
    "model.layers.27.self_attn.k_proj.weight": "model-00016-of-00016.safetensors",
    "model.layers.27.self_attn.o_proj.weight": "model-00016-of-00016.safetensors",
    "model.layers.27.self_attn.q_proj.bias": "model-00016-of-00016.safetensors",
    "model.layers.27.self_attn.q_proj.weight": "model-00016-of-00016.safetensors",
    "model.layers.27.self_attn.v_proj.bias": "model-00016-of-00016.safetensors",
    "model.layers.27.self_attn.v_proj.weight": "model-00016-of-00016.safetensors",
    "model.layers.3.input_layernorm.weight": "model-00003-of-00016.safetensors",
    "model.layers.3.mlp.down_proj.weight": "model-00003-of-00016.safetensors",
    "model.layers.3.mlp.gate_proj.weight": "model-00003-of-00016.safetensors",
    "model.layers.3.mlp.up_proj.weight": "model-00003-of-00016.safetensors",
    "model.layers.3.post_attention_layernorm.weight": "model-00003-of-00016.safetensors",
    "model.layers.3.self_attn.k_proj.bias": "model-00003-of-00016.safetensors",
    "model.layers.3.self_attn.k_proj.weight": "model-00003-of-00016.safetensors",
    "model.layers.3.self_attn.o_proj.weight": "model-00003-of-00016.safetensors",
    "model.layers.3.self_attn.q_proj.bias": "model-00003-of-00016.safetensors",
    "model.layers.3.self_attn.q_proj.weight": "model-00003-of-00016.safetensors",
    "model.layers.3.self_attn.v_proj.bias": "model-00003-of-00016.safetensors",
    "model.layers.3.self_attn.v_proj.weight": "model-00003-of-00016.safetensors",
    "model.layers.4.input_layernorm.weight": "model-00004-of-00016.safetensors",
    "model.layers.4.mlp.down_proj.weight": "model-00004-of-00016.safetensors",
    "model.layers.4.mlp.gate_proj.weight": "model-00004-of-00016.safetensors",
    "model.layers.4.mlp.up_proj.weight": "model-00004-of-00016.safetensors",
    "model.layers.4.post_attention_layernorm.weight": "model-00004-of-00016.safetensors",
    "model.layers.4.self_attn.k_proj.bias": "model-00003-of-00016.safetensors",
    "model.layers.4.self_attn.k_proj.weight": "model-00003-of-00016.safetensors",
    "model.layers.4.self_attn.o_proj.weight": "model-00003-of-00016.safetensors",
    "model.layers.4.self_attn.q_proj.bias": "model-00003-of-00016.safetensors",
    "model.layers.4.self_attn.q_proj.weight": "model-00003-of-00016.safetensors",
    "model.layers.4.self_attn.v_proj.bias": "model-00003-of-00016.safetensors",
    "model.layers.4.self_attn.v_proj.weight": "model-00003-of-00016.safetensors",
    "model.layers.5.input_layernorm.weight": "model-00004-of-00016.safetensors",
    "model.layers.5.mlp.down_proj.weight": "model-00004-of-00016.safetensors",
    "model.layers.5.mlp.gate_proj.weight": "model-00004-of-00016.safetensors",
    "model.layers.5.mlp.up_proj.weight": "model-00004-of-00016.safetensors",
    "model.layers.5.post_attention_layernorm.weight": "model-00004-of-00016.safetensors",
    "model.layers.5.self_attn.k_proj.bias": "model-00004-of-00016.safetensors",
    "model.layers.5.self_attn.k_proj.weight": "model-00004-of-00016.safetensors",
    "model.layers.5.self_attn.o_proj.weight": "model-00004-of-00016.safetensors",
    "model.layers.5.self_attn.q_proj.bias": "model-00004-of-00016.safetensors",
    "model.layers.5.self_attn.q_proj.weight": "model-00004-of-00016.safetensors",
    "model.layers.5.self_attn.v_proj.bias": "model-00004-of-00016.safetensors",
    "model.layers.5.self_attn.v_proj.weight": "model-00004-of-00016.safetensors",
    "model.layers.6.input_layernorm.weight": "model-00005-of-00016.safetensors",
    "model.layers.6.mlp.down_proj.weight": "model-00005-of-00016.safetensors",
    "model.layers.6.mlp.gate_proj.weight": "model-00005-of-00016.safetensors",
    "model.layers.6.mlp.up_proj.weight": "model-00005-of-00016.safetensors",
    "model.layers.6.post_attention_layernorm.weight": "model-00005-of-00016.safetensors",
    "model.layers.6.self_attn.k_proj.bias": "model-00004-of-00016.safetensors",
    "model.layers.6.self_attn.k_proj.weight": "model-00004-of-00016.safetensors",
    "model.layers.6.self_attn.o_proj.weight": "model-00004-of-00016.safetensors",
    "model.layers.6.self_attn.q_proj.bias": "model-00004-of-00016.safetensors",
    "model.layers.6.self_attn.q_proj.weight": "model-00004-of-00016.safetensors",
    "model.layers.6.self_attn.v_proj.bias": "model-00004-of-00016.safetensors",
    "model.layers.6.self_attn.v_proj.weight": "model-00004-of-00016.safetensors",
    "model.layers.7.input_layernorm.weight": "model-00005-of-00016.safetensors",
    "model.layers.7.mlp.down_proj.weight": "model-00005-of-00016.safetensors",
    "model.layers.7.mlp.gate_proj.weight": "model-00005-of-00016.safetensors",
    "model.layers.7.mlp.up_proj.weight": "model-00005-of-00016.safetensors",
    "model.layers.7.post_attention_layernorm.weight": "model-00005-of-00016.safetensors",
    "model.layers.7.self_attn.k_proj.bias": "model-00005-of-00016.safetensors",
    "model.layers.7.self_attn.k_proj.weight": "model-00005-of-00016.safetensors",
    "model.layers.7.self_attn.o_proj.weight": "model-00005-of-00016.safetensors",
    "model.layers.7.self_attn.q_proj.bias": "model-00005-of-00016.safetensors",
    "model.layers.7.self_attn.q_proj.weight": "model-00005-of-00016.safetensors",
    "model.layers.7.self_attn.v_proj.bias": "model-00005-of-00016.safetensors",
    "model.layers.7.self_attn.v_proj.weight": "model-00005-of-00016.safetensors",
    "model.layers.8.input_layernorm.weight": "model-00006-of-00016.safetensors",
    "model.layers.8.mlp.down_proj.weight": "model-00006-of-00016.safetensors",
    "model.layers.8.mlp.gate_proj.weight": "model-00006-of-00016.safetensors",
    "model.layers.8.mlp.up_proj.weight": "model-00006-of-00016.safetensors",
    "model.layers.8.post_attention_layernorm.weight": "model-00006-of-00016.safetensors",
    "model.layers.8.self_attn.k_proj.bias": "model-00005-of-00016.safetensors",
    "model.layers.8.self_attn.k_proj.weight": "model-00005-of-00016.safetensors",
    "model.layers.8.self_attn.o_proj.weight": "model-00005-of-00016.safetensors",
    "model.layers.8.self_attn.q_proj.bias": "model-00005-of-00016.safetensors",
    "model.layers.8.self_attn.q_proj.weight": "model-00005-of-00016.safetensors",
    "model.layers.8.self_attn.v_proj.bias": "model-00005-of-00016.safetensors",
    "model.layers.8.self_attn.v_proj.weight": "model-00005-of-00016.safetensors",
    "model.layers.9.input_layernorm.weight": "model-00006-of-00016.safetensors",
    "model.layers.9.mlp.down_proj.weight": "model-00006-of-00016.safetensors",
    "model.layers.9.mlp.gate_proj.weight": "model-00006-of-00016.safetensors",
    "model.layers.9.mlp.up_proj.weight": "model-00006-of-00016.safetensors",
    "model.layers.9.post_attention_layernorm.weight": "model-00006-of-00016.safetensors",
    "model.layers.9.self_attn.k_proj.bias": "model-00006-of-00016.safetensors",
    "model.layers.9.self_attn.k_proj.weight": "model-00006-of-00016.safetensors",
    "model.layers.9.self_attn.o_proj.weight": "model-00006-of-00016.safetensors",
    "model.layers.9.self_attn.q_proj.bias": "model-00006-of-00016.safetensors",
    "model.layers.9.self_attn.q_proj.weight": "model-00006-of-00016.safetensors",
    "model.layers.9.self_attn.v_proj.bias": "model-00006-of-00016.safetensors",
    "model.layers.9.self_attn.v_proj.weight": "model-00006-of-00016.safetensors",
    "model.norm.weight": "model-00016-of-00016.safetensors"
  }
}
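After downloading, this index can be used to verify the shards. A minimal sketch (not part of the repo) that checks every shard referenced in `weight_map` is present; on-disk sizes slightly exceed `metadata.total_size` because each shard also carries a small safetensors header:

```python
import json
import os

index = json.load(open("model.safetensors.index.json"))
shards = sorted(set(index["weight_map"].values()))
missing = [s for s in shards if not os.path.exists(s)]

print(f"{len(index['weight_map'])} tensors across {len(shards)} shards")
print(f"missing shards: {missing or 'none'}")
if not missing:
    on_disk = sum(os.path.getsize(s) for s in shards)
    print(f"bytes on disk: {on_disk} vs tensor bytes in index: {index['metadata']['total_size']}")
```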
pytorch_model.bin.index.json ADDED
@@ -0,0 +1,346 @@
{
  "metadata": {
    "total_size": 15701109760
  },
  "weight_map": {
    "lm_head.weight": "pytorch_model-00004-of-00004.bin",
    "model.embed_tokens.weight": "pytorch_model-00001-of-00004.bin",
    "model.layers.0.input_layernorm.weight": "pytorch_model-00001-of-00004.bin",
    "model.layers.0.mlp.down_proj.weight": "pytorch_model-00001-of-00004.bin",
    "model.layers.0.mlp.gate_proj.weight": "pytorch_model-00001-of-00004.bin",
    "model.layers.0.mlp.up_proj.weight": "pytorch_model-00001-of-00004.bin",
    "model.layers.0.post_attention_layernorm.weight": "pytorch_model-00001-of-00004.bin",
    "model.layers.0.self_attn.k_proj.bias": "pytorch_model-00001-of-00004.bin",
    "model.layers.0.self_attn.k_proj.weight": "pytorch_model-00001-of-00004.bin",
    "model.layers.0.self_attn.o_proj.weight": "pytorch_model-00001-of-00004.bin",
    "model.layers.0.self_attn.q_proj.bias": "pytorch_model-00001-of-00004.bin",
    "model.layers.0.self_attn.q_proj.weight": "pytorch_model-00001-of-00004.bin",
    "model.layers.0.self_attn.v_proj.bias": "pytorch_model-00001-of-00004.bin",
    "model.layers.0.self_attn.v_proj.weight": "pytorch_model-00001-of-00004.bin",
    "model.layers.1.input_layernorm.weight": "pytorch_model-00001-of-00004.bin",
    "model.layers.1.mlp.down_proj.weight": "pytorch_model-00001-of-00004.bin",
    "model.layers.1.mlp.gate_proj.weight": "pytorch_model-00001-of-00004.bin",
    "model.layers.1.mlp.up_proj.weight": "pytorch_model-00001-of-00004.bin",
    "model.layers.1.post_attention_layernorm.weight": "pytorch_model-00001-of-00004.bin",
    "model.layers.1.self_attn.k_proj.bias": "pytorch_model-00001-of-00004.bin",
    "model.layers.1.self_attn.k_proj.weight": "pytorch_model-00001-of-00004.bin",
    "model.layers.1.self_attn.o_proj.weight": "pytorch_model-00001-of-00004.bin",
    "model.layers.1.self_attn.q_proj.bias": "pytorch_model-00001-of-00004.bin",
    "model.layers.1.self_attn.q_proj.weight": "pytorch_model-00001-of-00004.bin",
    "model.layers.1.self_attn.v_proj.bias": "pytorch_model-00001-of-00004.bin",
    "model.layers.1.self_attn.v_proj.weight": "pytorch_model-00001-of-00004.bin",
    "model.layers.10.input_layernorm.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.10.mlp.down_proj.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.10.mlp.gate_proj.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.10.mlp.up_proj.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.10.post_attention_layernorm.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.10.self_attn.k_proj.bias": "pytorch_model-00002-of-00004.bin",
    "model.layers.10.self_attn.k_proj.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.10.self_attn.o_proj.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.10.self_attn.q_proj.bias": "pytorch_model-00002-of-00004.bin",
    "model.layers.10.self_attn.q_proj.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.10.self_attn.v_proj.bias": "pytorch_model-00002-of-00004.bin",
    "model.layers.10.self_attn.v_proj.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.11.input_layernorm.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.11.mlp.down_proj.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.11.mlp.gate_proj.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.11.mlp.up_proj.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.11.post_attention_layernorm.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.11.self_attn.k_proj.bias": "pytorch_model-00002-of-00004.bin",
    "model.layers.11.self_attn.k_proj.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.11.self_attn.o_proj.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.11.self_attn.q_proj.bias": "pytorch_model-00002-of-00004.bin",
    "model.layers.11.self_attn.q_proj.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.11.self_attn.v_proj.bias": "pytorch_model-00002-of-00004.bin",
    "model.layers.11.self_attn.v_proj.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.12.input_layernorm.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.12.mlp.down_proj.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.12.mlp.gate_proj.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.12.mlp.up_proj.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.12.post_attention_layernorm.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.12.self_attn.k_proj.bias": "pytorch_model-00002-of-00004.bin",
    "model.layers.12.self_attn.k_proj.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.12.self_attn.o_proj.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.12.self_attn.q_proj.bias": "pytorch_model-00002-of-00004.bin",
    "model.layers.12.self_attn.q_proj.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.12.self_attn.v_proj.bias": "pytorch_model-00002-of-00004.bin",
    "model.layers.12.self_attn.v_proj.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.13.input_layernorm.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.13.mlp.down_proj.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.13.mlp.gate_proj.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.13.mlp.up_proj.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.13.post_attention_layernorm.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.13.self_attn.k_proj.bias": "pytorch_model-00002-of-00004.bin",
    "model.layers.13.self_attn.k_proj.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.13.self_attn.o_proj.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.13.self_attn.q_proj.bias": "pytorch_model-00002-of-00004.bin",
    "model.layers.13.self_attn.q_proj.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.13.self_attn.v_proj.bias": "pytorch_model-00002-of-00004.bin",
    "model.layers.13.self_attn.v_proj.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.14.input_layernorm.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.14.mlp.down_proj.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.14.mlp.gate_proj.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.14.mlp.up_proj.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.14.post_attention_layernorm.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.14.self_attn.k_proj.bias": "pytorch_model-00002-of-00004.bin",
    "model.layers.14.self_attn.k_proj.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.14.self_attn.o_proj.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.14.self_attn.q_proj.bias": "pytorch_model-00002-of-00004.bin",
    "model.layers.14.self_attn.q_proj.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.14.self_attn.v_proj.bias": "pytorch_model-00002-of-00004.bin",
    "model.layers.14.self_attn.v_proj.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.15.input_layernorm.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.15.mlp.down_proj.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.15.mlp.gate_proj.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.15.mlp.up_proj.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.15.post_attention_layernorm.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.15.self_attn.k_proj.bias": "pytorch_model-00002-of-00004.bin",
    "model.layers.15.self_attn.k_proj.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.15.self_attn.o_proj.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.15.self_attn.q_proj.bias": "pytorch_model-00002-of-00004.bin",
    "model.layers.15.self_attn.q_proj.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.15.self_attn.v_proj.bias": "pytorch_model-00002-of-00004.bin",
    "model.layers.15.self_attn.v_proj.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.16.input_layernorm.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.16.mlp.down_proj.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.16.mlp.gate_proj.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.16.mlp.up_proj.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.16.post_attention_layernorm.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.16.self_attn.k_proj.bias": "pytorch_model-00002-of-00004.bin",
    "model.layers.16.self_attn.k_proj.weight": "pytorch_model-00002-of-00004.bin",
    "model.layers.16.self_attn.o_proj.weight": "pytorch_model-00002-of-00004.bin",
112
+ "model.layers.16.self_attn.q_proj.bias": "pytorch_model-00002-of-00004.bin",
113
+ "model.layers.16.self_attn.q_proj.weight": "pytorch_model-00002-of-00004.bin",
114
+ "model.layers.16.self_attn.v_proj.bias": "pytorch_model-00002-of-00004.bin",
115
+ "model.layers.16.self_attn.v_proj.weight": "pytorch_model-00002-of-00004.bin",
116
+ "model.layers.17.input_layernorm.weight": "pytorch_model-00002-of-00004.bin",
117
+ "model.layers.17.mlp.down_proj.weight": "pytorch_model-00002-of-00004.bin",
118
+ "model.layers.17.mlp.gate_proj.weight": "pytorch_model-00002-of-00004.bin",
119
+ "model.layers.17.mlp.up_proj.weight": "pytorch_model-00002-of-00004.bin",
120
+ "model.layers.17.post_attention_layernorm.weight": "pytorch_model-00002-of-00004.bin",
121
+ "model.layers.17.self_attn.k_proj.bias": "pytorch_model-00002-of-00004.bin",
122
+ "model.layers.17.self_attn.k_proj.weight": "pytorch_model-00002-of-00004.bin",
123
+ "model.layers.17.self_attn.o_proj.weight": "pytorch_model-00002-of-00004.bin",
124
+ "model.layers.17.self_attn.q_proj.bias": "pytorch_model-00002-of-00004.bin",
125
+ "model.layers.17.self_attn.q_proj.weight": "pytorch_model-00002-of-00004.bin",
126
+ "model.layers.17.self_attn.v_proj.bias": "pytorch_model-00002-of-00004.bin",
127
+ "model.layers.17.self_attn.v_proj.weight": "pytorch_model-00002-of-00004.bin",
128
+ "model.layers.18.input_layernorm.weight": "pytorch_model-00003-of-00004.bin",
129
+ "model.layers.18.mlp.down_proj.weight": "pytorch_model-00003-of-00004.bin",
130
+ "model.layers.18.mlp.gate_proj.weight": "pytorch_model-00002-of-00004.bin",
131
+ "model.layers.18.mlp.up_proj.weight": "pytorch_model-00003-of-00004.bin",
132
+ "model.layers.18.post_attention_layernorm.weight": "pytorch_model-00003-of-00004.bin",
133
+ "model.layers.18.self_attn.k_proj.bias": "pytorch_model-00002-of-00004.bin",
134
+ "model.layers.18.self_attn.k_proj.weight": "pytorch_model-00002-of-00004.bin",
135
+ "model.layers.18.self_attn.o_proj.weight": "pytorch_model-00002-of-00004.bin",
136
+ "model.layers.18.self_attn.q_proj.bias": "pytorch_model-00002-of-00004.bin",
137
+ "model.layers.18.self_attn.q_proj.weight": "pytorch_model-00002-of-00004.bin",
138
+ "model.layers.18.self_attn.v_proj.bias": "pytorch_model-00002-of-00004.bin",
139
+ "model.layers.18.self_attn.v_proj.weight": "pytorch_model-00002-of-00004.bin",
140
+ "model.layers.19.input_layernorm.weight": "pytorch_model-00003-of-00004.bin",
141
+ "model.layers.19.mlp.down_proj.weight": "pytorch_model-00003-of-00004.bin",
142
+ "model.layers.19.mlp.gate_proj.weight": "pytorch_model-00003-of-00004.bin",
143
+ "model.layers.19.mlp.up_proj.weight": "pytorch_model-00003-of-00004.bin",
144
+ "model.layers.19.post_attention_layernorm.weight": "pytorch_model-00003-of-00004.bin",
145
+ "model.layers.19.self_attn.k_proj.bias": "pytorch_model-00003-of-00004.bin",
146
+ "model.layers.19.self_attn.k_proj.weight": "pytorch_model-00003-of-00004.bin",
147
+ "model.layers.19.self_attn.o_proj.weight": "pytorch_model-00003-of-00004.bin",
148
+ "model.layers.19.self_attn.q_proj.bias": "pytorch_model-00003-of-00004.bin",
149
+ "model.layers.19.self_attn.q_proj.weight": "pytorch_model-00003-of-00004.bin",
150
+ "model.layers.19.self_attn.v_proj.bias": "pytorch_model-00003-of-00004.bin",
151
+ "model.layers.19.self_attn.v_proj.weight": "pytorch_model-00003-of-00004.bin",
152
+ "model.layers.2.input_layernorm.weight": "pytorch_model-00001-of-00004.bin",
153
+ "model.layers.2.mlp.down_proj.weight": "pytorch_model-00001-of-00004.bin",
154
+ "model.layers.2.mlp.gate_proj.weight": "pytorch_model-00001-of-00004.bin",
155
+ "model.layers.2.mlp.up_proj.weight": "pytorch_model-00001-of-00004.bin",
156
+ "model.layers.2.post_attention_layernorm.weight": "pytorch_model-00001-of-00004.bin",
157
+ "model.layers.2.self_attn.k_proj.bias": "pytorch_model-00001-of-00004.bin",
158
+ "model.layers.2.self_attn.k_proj.weight": "pytorch_model-00001-of-00004.bin",
159
+ "model.layers.2.self_attn.o_proj.weight": "pytorch_model-00001-of-00004.bin",
160
+ "model.layers.2.self_attn.q_proj.bias": "pytorch_model-00001-of-00004.bin",
161
+ "model.layers.2.self_attn.q_proj.weight": "pytorch_model-00001-of-00004.bin",
162
+ "model.layers.2.self_attn.v_proj.bias": "pytorch_model-00001-of-00004.bin",
163
+ "model.layers.2.self_attn.v_proj.weight": "pytorch_model-00001-of-00004.bin",
164
+ "model.layers.20.input_layernorm.weight": "pytorch_model-00003-of-00004.bin",
165
+ "model.layers.20.mlp.down_proj.weight": "pytorch_model-00003-of-00004.bin",
166
+ "model.layers.20.mlp.gate_proj.weight": "pytorch_model-00003-of-00004.bin",
167
+ "model.layers.20.mlp.up_proj.weight": "pytorch_model-00003-of-00004.bin",
168
+ "model.layers.20.post_attention_layernorm.weight": "pytorch_model-00003-of-00004.bin",
169
+ "model.layers.20.self_attn.k_proj.bias": "pytorch_model-00003-of-00004.bin",
170
+ "model.layers.20.self_attn.k_proj.weight": "pytorch_model-00003-of-00004.bin",
171
+ "model.layers.20.self_attn.o_proj.weight": "pytorch_model-00003-of-00004.bin",
172
+ "model.layers.20.self_attn.q_proj.bias": "pytorch_model-00003-of-00004.bin",
173
+ "model.layers.20.self_attn.q_proj.weight": "pytorch_model-00003-of-00004.bin",
174
+ "model.layers.20.self_attn.v_proj.bias": "pytorch_model-00003-of-00004.bin",
175
+ "model.layers.20.self_attn.v_proj.weight": "pytorch_model-00003-of-00004.bin",
176
+ "model.layers.21.input_layernorm.weight": "pytorch_model-00003-of-00004.bin",
177
+ "model.layers.21.mlp.down_proj.weight": "pytorch_model-00003-of-00004.bin",
178
+ "model.layers.21.mlp.gate_proj.weight": "pytorch_model-00003-of-00004.bin",
179
+ "model.layers.21.mlp.up_proj.weight": "pytorch_model-00003-of-00004.bin",
180
+ "model.layers.21.post_attention_layernorm.weight": "pytorch_model-00003-of-00004.bin",
181
+ "model.layers.21.self_attn.k_proj.bias": "pytorch_model-00003-of-00004.bin",
182
+ "model.layers.21.self_attn.k_proj.weight": "pytorch_model-00003-of-00004.bin",
183
+ "model.layers.21.self_attn.o_proj.weight": "pytorch_model-00003-of-00004.bin",
184
+ "model.layers.21.self_attn.q_proj.bias": "pytorch_model-00003-of-00004.bin",
185
+ "model.layers.21.self_attn.q_proj.weight": "pytorch_model-00003-of-00004.bin",
186
+ "model.layers.21.self_attn.v_proj.bias": "pytorch_model-00003-of-00004.bin",
187
+ "model.layers.21.self_attn.v_proj.weight": "pytorch_model-00003-of-00004.bin",
188
+ "model.layers.22.input_layernorm.weight": "pytorch_model-00003-of-00004.bin",
189
+ "model.layers.22.mlp.down_proj.weight": "pytorch_model-00003-of-00004.bin",
190
+ "model.layers.22.mlp.gate_proj.weight": "pytorch_model-00003-of-00004.bin",
191
+ "model.layers.22.mlp.up_proj.weight": "pytorch_model-00003-of-00004.bin",
192
+ "model.layers.22.post_attention_layernorm.weight": "pytorch_model-00003-of-00004.bin",
193
+ "model.layers.22.self_attn.k_proj.bias": "pytorch_model-00003-of-00004.bin",
194
+ "model.layers.22.self_attn.k_proj.weight": "pytorch_model-00003-of-00004.bin",
195
+ "model.layers.22.self_attn.o_proj.weight": "pytorch_model-00003-of-00004.bin",
196
+ "model.layers.22.self_attn.q_proj.bias": "pytorch_model-00003-of-00004.bin",
197
+ "model.layers.22.self_attn.q_proj.weight": "pytorch_model-00003-of-00004.bin",
198
+ "model.layers.22.self_attn.v_proj.bias": "pytorch_model-00003-of-00004.bin",
199
+ "model.layers.22.self_attn.v_proj.weight": "pytorch_model-00003-of-00004.bin",
200
+ "model.layers.23.input_layernorm.weight": "pytorch_model-00003-of-00004.bin",
201
+ "model.layers.23.mlp.down_proj.weight": "pytorch_model-00003-of-00004.bin",
202
+ "model.layers.23.mlp.gate_proj.weight": "pytorch_model-00003-of-00004.bin",
203
+ "model.layers.23.mlp.up_proj.weight": "pytorch_model-00003-of-00004.bin",
204
+ "model.layers.23.post_attention_layernorm.weight": "pytorch_model-00003-of-00004.bin",
205
+ "model.layers.23.self_attn.k_proj.bias": "pytorch_model-00003-of-00004.bin",
206
+ "model.layers.23.self_attn.k_proj.weight": "pytorch_model-00003-of-00004.bin",
207
+ "model.layers.23.self_attn.o_proj.weight": "pytorch_model-00003-of-00004.bin",
208
+ "model.layers.23.self_attn.q_proj.bias": "pytorch_model-00003-of-00004.bin",
209
+ "model.layers.23.self_attn.q_proj.weight": "pytorch_model-00003-of-00004.bin",
210
+ "model.layers.23.self_attn.v_proj.bias": "pytorch_model-00003-of-00004.bin",
211
+ "model.layers.23.self_attn.v_proj.weight": "pytorch_model-00003-of-00004.bin",
212
+ "model.layers.24.input_layernorm.weight": "pytorch_model-00003-of-00004.bin",
213
+ "model.layers.24.mlp.down_proj.weight": "pytorch_model-00003-of-00004.bin",
214
+ "model.layers.24.mlp.gate_proj.weight": "pytorch_model-00003-of-00004.bin",
215
+ "model.layers.24.mlp.up_proj.weight": "pytorch_model-00003-of-00004.bin",
216
+ "model.layers.24.post_attention_layernorm.weight": "pytorch_model-00003-of-00004.bin",
217
+ "model.layers.24.self_attn.k_proj.bias": "pytorch_model-00003-of-00004.bin",
218
+ "model.layers.24.self_attn.k_proj.weight": "pytorch_model-00003-of-00004.bin",
219
+ "model.layers.24.self_attn.o_proj.weight": "pytorch_model-00003-of-00004.bin",
220
+ "model.layers.24.self_attn.q_proj.bias": "pytorch_model-00003-of-00004.bin",
221
+ "model.layers.24.self_attn.q_proj.weight": "pytorch_model-00003-of-00004.bin",
222
+ "model.layers.24.self_attn.v_proj.bias": "pytorch_model-00003-of-00004.bin",
223
+ "model.layers.24.self_attn.v_proj.weight": "pytorch_model-00003-of-00004.bin",
224
+ "model.layers.25.input_layernorm.weight": "pytorch_model-00003-of-00004.bin",
225
+ "model.layers.25.mlp.down_proj.weight": "pytorch_model-00003-of-00004.bin",
226
+ "model.layers.25.mlp.gate_proj.weight": "pytorch_model-00003-of-00004.bin",
227
+ "model.layers.25.mlp.up_proj.weight": "pytorch_model-00003-of-00004.bin",
228
+ "model.layers.25.post_attention_layernorm.weight": "pytorch_model-00003-of-00004.bin",
229
+ "model.layers.25.self_attn.k_proj.bias": "pytorch_model-00003-of-00004.bin",
230
+ "model.layers.25.self_attn.k_proj.weight": "pytorch_model-00003-of-00004.bin",
231
+ "model.layers.25.self_attn.o_proj.weight": "pytorch_model-00003-of-00004.bin",
232
+ "model.layers.25.self_attn.q_proj.bias": "pytorch_model-00003-of-00004.bin",
233
+ "model.layers.25.self_attn.q_proj.weight": "pytorch_model-00003-of-00004.bin",
234
+ "model.layers.25.self_attn.v_proj.bias": "pytorch_model-00003-of-00004.bin",
235
+ "model.layers.25.self_attn.v_proj.weight": "pytorch_model-00003-of-00004.bin",
236
+ "model.layers.26.input_layernorm.weight": "pytorch_model-00003-of-00004.bin",
237
+ "model.layers.26.mlp.down_proj.weight": "pytorch_model-00003-of-00004.bin",
238
+ "model.layers.26.mlp.gate_proj.weight": "pytorch_model-00003-of-00004.bin",
239
+ "model.layers.26.mlp.up_proj.weight": "pytorch_model-00003-of-00004.bin",
240
+ "model.layers.26.post_attention_layernorm.weight": "pytorch_model-00003-of-00004.bin",
241
+ "model.layers.26.self_attn.k_proj.bias": "pytorch_model-00003-of-00004.bin",
242
+ "model.layers.26.self_attn.k_proj.weight": "pytorch_model-00003-of-00004.bin",
243
+ "model.layers.26.self_attn.o_proj.weight": "pytorch_model-00003-of-00004.bin",
244
+ "model.layers.26.self_attn.q_proj.bias": "pytorch_model-00003-of-00004.bin",
245
+ "model.layers.26.self_attn.q_proj.weight": "pytorch_model-00003-of-00004.bin",
246
+ "model.layers.26.self_attn.v_proj.bias": "pytorch_model-00003-of-00004.bin",
247
+ "model.layers.26.self_attn.v_proj.weight": "pytorch_model-00003-of-00004.bin",
248
+ "model.layers.27.input_layernorm.weight": "pytorch_model-00003-of-00004.bin",
249
+ "model.layers.27.mlp.down_proj.weight": "pytorch_model-00003-of-00004.bin",
250
+ "model.layers.27.mlp.gate_proj.weight": "pytorch_model-00003-of-00004.bin",
251
+ "model.layers.27.mlp.up_proj.weight": "pytorch_model-00003-of-00004.bin",
252
+ "model.layers.27.post_attention_layernorm.weight": "pytorch_model-00003-of-00004.bin",
253
+ "model.layers.27.self_attn.k_proj.bias": "pytorch_model-00003-of-00004.bin",
254
+ "model.layers.27.self_attn.k_proj.weight": "pytorch_model-00003-of-00004.bin",
255
+ "model.layers.27.self_attn.o_proj.weight": "pytorch_model-00003-of-00004.bin",
256
+ "model.layers.27.self_attn.q_proj.bias": "pytorch_model-00003-of-00004.bin",
257
+ "model.layers.27.self_attn.q_proj.weight": "pytorch_model-00003-of-00004.bin",
258
+ "model.layers.27.self_attn.v_proj.bias": "pytorch_model-00003-of-00004.bin",
259
+ "model.layers.27.self_attn.v_proj.weight": "pytorch_model-00003-of-00004.bin",
260
+ "model.layers.3.input_layernorm.weight": "pytorch_model-00001-of-00004.bin",
261
+ "model.layers.3.mlp.down_proj.weight": "pytorch_model-00001-of-00004.bin",
262
+ "model.layers.3.mlp.gate_proj.weight": "pytorch_model-00001-of-00004.bin",
263
+ "model.layers.3.mlp.up_proj.weight": "pytorch_model-00001-of-00004.bin",
264
+ "model.layers.3.post_attention_layernorm.weight": "pytorch_model-00001-of-00004.bin",
265
+ "model.layers.3.self_attn.k_proj.bias": "pytorch_model-00001-of-00004.bin",
266
+ "model.layers.3.self_attn.k_proj.weight": "pytorch_model-00001-of-00004.bin",
267
+ "model.layers.3.self_attn.o_proj.weight": "pytorch_model-00001-of-00004.bin",
268
+ "model.layers.3.self_attn.q_proj.bias": "pytorch_model-00001-of-00004.bin",
269
+ "model.layers.3.self_attn.q_proj.weight": "pytorch_model-00001-of-00004.bin",
270
+ "model.layers.3.self_attn.v_proj.bias": "pytorch_model-00001-of-00004.bin",
271
+ "model.layers.3.self_attn.v_proj.weight": "pytorch_model-00001-of-00004.bin",
272
+ "model.layers.4.input_layernorm.weight": "pytorch_model-00001-of-00004.bin",
273
+ "model.layers.4.mlp.down_proj.weight": "pytorch_model-00001-of-00004.bin",
274
+ "model.layers.4.mlp.gate_proj.weight": "pytorch_model-00001-of-00004.bin",
275
+ "model.layers.4.mlp.up_proj.weight": "pytorch_model-00001-of-00004.bin",
276
+ "model.layers.4.post_attention_layernorm.weight": "pytorch_model-00001-of-00004.bin",
277
+ "model.layers.4.self_attn.k_proj.bias": "pytorch_model-00001-of-00004.bin",
278
+ "model.layers.4.self_attn.k_proj.weight": "pytorch_model-00001-of-00004.bin",
279
+ "model.layers.4.self_attn.o_proj.weight": "pytorch_model-00001-of-00004.bin",
280
+ "model.layers.4.self_attn.q_proj.bias": "pytorch_model-00001-of-00004.bin",
281
+ "model.layers.4.self_attn.q_proj.weight": "pytorch_model-00001-of-00004.bin",
282
+ "model.layers.4.self_attn.v_proj.bias": "pytorch_model-00001-of-00004.bin",
283
+ "model.layers.4.self_attn.v_proj.weight": "pytorch_model-00001-of-00004.bin",
284
+ "model.layers.5.input_layernorm.weight": "pytorch_model-00001-of-00004.bin",
285
+ "model.layers.5.mlp.down_proj.weight": "pytorch_model-00001-of-00004.bin",
286
+ "model.layers.5.mlp.gate_proj.weight": "pytorch_model-00001-of-00004.bin",
287
+ "model.layers.5.mlp.up_proj.weight": "pytorch_model-00001-of-00004.bin",
288
+ "model.layers.5.post_attention_layernorm.weight": "pytorch_model-00001-of-00004.bin",
289
+ "model.layers.5.self_attn.k_proj.bias": "pytorch_model-00001-of-00004.bin",
290
+ "model.layers.5.self_attn.k_proj.weight": "pytorch_model-00001-of-00004.bin",
291
+ "model.layers.5.self_attn.o_proj.weight": "pytorch_model-00001-of-00004.bin",
292
+ "model.layers.5.self_attn.q_proj.bias": "pytorch_model-00001-of-00004.bin",
293
+ "model.layers.5.self_attn.q_proj.weight": "pytorch_model-00001-of-00004.bin",
294
+ "model.layers.5.self_attn.v_proj.bias": "pytorch_model-00001-of-00004.bin",
295
+ "model.layers.5.self_attn.v_proj.weight": "pytorch_model-00001-of-00004.bin",
296
+ "model.layers.6.input_layernorm.weight": "pytorch_model-00001-of-00004.bin",
297
+ "model.layers.6.mlp.down_proj.weight": "pytorch_model-00001-of-00004.bin",
298
+ "model.layers.6.mlp.gate_proj.weight": "pytorch_model-00001-of-00004.bin",
299
+ "model.layers.6.mlp.up_proj.weight": "pytorch_model-00001-of-00004.bin",
300
+ "model.layers.6.post_attention_layernorm.weight": "pytorch_model-00001-of-00004.bin",
301
+ "model.layers.6.self_attn.k_proj.bias": "pytorch_model-00001-of-00004.bin",
302
+ "model.layers.6.self_attn.k_proj.weight": "pytorch_model-00001-of-00004.bin",
303
+ "model.layers.6.self_attn.o_proj.weight": "pytorch_model-00001-of-00004.bin",
304
+ "model.layers.6.self_attn.q_proj.bias": "pytorch_model-00001-of-00004.bin",
305
+ "model.layers.6.self_attn.q_proj.weight": "pytorch_model-00001-of-00004.bin",
306
+ "model.layers.6.self_attn.v_proj.bias": "pytorch_model-00001-of-00004.bin",
307
+ "model.layers.6.self_attn.v_proj.weight": "pytorch_model-00001-of-00004.bin",
308
+ "model.layers.7.input_layernorm.weight": "pytorch_model-00002-of-00004.bin",
309
+ "model.layers.7.mlp.down_proj.weight": "pytorch_model-00002-of-00004.bin",
310
+ "model.layers.7.mlp.gate_proj.weight": "pytorch_model-00001-of-00004.bin",
311
+ "model.layers.7.mlp.up_proj.weight": "pytorch_model-00001-of-00004.bin",
312
+ "model.layers.7.post_attention_layernorm.weight": "pytorch_model-00002-of-00004.bin",
313
+ "model.layers.7.self_attn.k_proj.bias": "pytorch_model-00001-of-00004.bin",
314
+ "model.layers.7.self_attn.k_proj.weight": "pytorch_model-00001-of-00004.bin",
315
+ "model.layers.7.self_attn.o_proj.weight": "pytorch_model-00001-of-00004.bin",
316
+ "model.layers.7.self_attn.q_proj.bias": "pytorch_model-00001-of-00004.bin",
317
+ "model.layers.7.self_attn.q_proj.weight": "pytorch_model-00001-of-00004.bin",
318
+ "model.layers.7.self_attn.v_proj.bias": "pytorch_model-00001-of-00004.bin",
319
+ "model.layers.7.self_attn.v_proj.weight": "pytorch_model-00001-of-00004.bin",
320
+ "model.layers.8.input_layernorm.weight": "pytorch_model-00002-of-00004.bin",
321
+ "model.layers.8.mlp.down_proj.weight": "pytorch_model-00002-of-00004.bin",
322
+ "model.layers.8.mlp.gate_proj.weight": "pytorch_model-00002-of-00004.bin",
323
+ "model.layers.8.mlp.up_proj.weight": "pytorch_model-00002-of-00004.bin",
324
+ "model.layers.8.post_attention_layernorm.weight": "pytorch_model-00002-of-00004.bin",
325
+ "model.layers.8.self_attn.k_proj.bias": "pytorch_model-00002-of-00004.bin",
326
+ "model.layers.8.self_attn.k_proj.weight": "pytorch_model-00002-of-00004.bin",
327
+ "model.layers.8.self_attn.o_proj.weight": "pytorch_model-00002-of-00004.bin",
328
+ "model.layers.8.self_attn.q_proj.bias": "pytorch_model-00002-of-00004.bin",
329
+ "model.layers.8.self_attn.q_proj.weight": "pytorch_model-00002-of-00004.bin",
330
+ "model.layers.8.self_attn.v_proj.bias": "pytorch_model-00002-of-00004.bin",
331
+ "model.layers.8.self_attn.v_proj.weight": "pytorch_model-00002-of-00004.bin",
332
+ "model.layers.9.input_layernorm.weight": "pytorch_model-00002-of-00004.bin",
333
+ "model.layers.9.mlp.down_proj.weight": "pytorch_model-00002-of-00004.bin",
334
+ "model.layers.9.mlp.gate_proj.weight": "pytorch_model-00002-of-00004.bin",
335
+ "model.layers.9.mlp.up_proj.weight": "pytorch_model-00002-of-00004.bin",
336
+ "model.layers.9.post_attention_layernorm.weight": "pytorch_model-00002-of-00004.bin",
337
+ "model.layers.9.self_attn.k_proj.bias": "pytorch_model-00002-of-00004.bin",
338
+ "model.layers.9.self_attn.k_proj.weight": "pytorch_model-00002-of-00004.bin",
339
+ "model.layers.9.self_attn.o_proj.weight": "pytorch_model-00002-of-00004.bin",
340
+ "model.layers.9.self_attn.q_proj.bias": "pytorch_model-00002-of-00004.bin",
341
+ "model.layers.9.self_attn.q_proj.weight": "pytorch_model-00002-of-00004.bin",
342
+ "model.layers.9.self_attn.v_proj.bias": "pytorch_model-00002-of-00004.bin",
343
+ "model.layers.9.self_attn.v_proj.weight": "pytorch_model-00002-of-00004.bin",
344
+ "model.norm.weight": "pytorch_model-00003-of-00004.bin"
345
+ }
346
+ }
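The `weight_map` above is the index of a sharded PyTorch checkpoint: each parameter name is mapped to the shard file (`pytorch_model-0000X-of-00004.bin`) that stores it, and a layer can straddle a shard boundary (e.g. `model.layers.18.mlp.gate_proj.weight` sits in shard 2 while the rest of layer 18 sits in shard 3). `from_pretrained` in `transformers` consumes this index automatically; the snippet below is only a minimal hand-rolled sketch of the same merge logic, with the checkpoint directory as a placeholder path:

```python
import json
import os

import torch

def load_sharded_state_dict(checkpoint_dir: str) -> dict:
    """Merge a sharded checkpoint by following pytorch_model.bin.index.json.

    `checkpoint_dir` is a placeholder for a local copy of this repo;
    transformers' from_pretrained performs the same steps internally.
    """
    index_path = os.path.join(checkpoint_dir, "pytorch_model.bin.index.json")
    with open(index_path) as f:
        index = json.load(f)

    state_dict = {}
    # Each shard file appears many times in weight_map; load it only once.
    for shard_file in sorted(set(index["weight_map"].values())):
        shard = torch.load(os.path.join(checkpoint_dir, shard_file),
                           map_location="cpu")
        state_dict.update(shard)
    return state_dict

# Hypothetical usage with the Qwen2.5-7B-based UniTTS backbone:
# sd = load_sharded_state_dict("./UniTTS")
# model.load_state_dict(sd)
```

Loading one shard at a time keeps peak memory near a single shard plus the tensors accumulated so far, which is the point of splitting a roughly 15 GB 7B-parameter checkpoint into four files.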
tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:acabcaa15a4db2a61dcd09647ef99bdf3f9c2e11410648cad5392bf4b2fbcebe
+ size 17790582
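`tokenizer.json` itself is not stored in Git: the three lines above are the complete Git LFS pointer that replaces it, giving the spec version, the SHA-256 digest of the real file, and its size (17790582 bytes, about 17 MB). The Hub resolves such pointers to the actual blobs on download, so `AutoTokenizer.from_pretrained` users never see this form. A small sketch of reading a pointer from a clone where `git lfs pull` has not been run:

```python
# Parse a Git LFS pointer file (the "key value" lines shown above).
def parse_lfs_pointer(path: str) -> dict:
    fields = {}
    with open(path) as f:
        for line in f:
            key, _, value = line.strip().partition(" ")
            fields[key] = value
    return fields

# Example against this repo's tokenizer.json pointer (hypothetical local path):
# info = parse_lfs_pointer("tokenizer.json")
# info["oid"]   # "sha256:acabcaa15a4db2a61dcd09647ef99bdf3f9c2e11410648cad5392bf4b2fbcebe"
# info["size"]  # "17790582"
```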
tokenizer_config.json ADDED
The diff for this file is too large to render.
vocab.json ADDED
The diff for this file is too large to render.