diff --git a/README.md b/README.md
index c25d6b1c6e911c4a232a0cd8e7f66d42c1576dd7..9b073bdf9eb16abb53c86b8b9ed347fd62ceb4d0 100644
--- a/README.md
+++ b/README.md
@@ -1,168 +1,29 @@
 ---
+base_model:
+- openai/gpt-oss-120b
 license: apache-2.0
 pipeline_tag: text-generation
 library_name: transformers
 tags:
-- vllm
+- openai
 ---

- gpt-oss-120b + FriendliAI Logo

+ -

- Try gpt-oss · - Guides · - Model card · - OpenAI blog -

- -
- -Welcome to the gpt-oss series, [OpenAI’s open-weight models](https://openai.com/open-models) designed for powerful reasoning, agentic tasks, and versatile developer use cases. - -We’re releasing two flavors of these open models: -- `gpt-oss-120b` — for production, general purpose, high reasoning use cases that fit into a single 80GB GPU (like NVIDIA H100 or AMD MI300X) (117B parameters with 5.1B active parameters) -- `gpt-oss-20b` — for lower latency, and local or specialized use cases (21B parameters with 3.6B active parameters) - -Both models were trained on our [harmony response format](https://github.com/openai/harmony) and should only be used with the harmony format as it will not work correctly otherwise. - - -> [!NOTE] -> This model card is dedicated to the larger `gpt-oss-120b` model. Check out [`gpt-oss-20b`](https://huggingface.co/openai/gpt-oss-20b) for the smaller model. - -# Highlights - -* **Permissive Apache 2.0 license:** Build freely without copyleft restrictions or patent risk—ideal for experimentation, customization, and commercial deployment. -* **Configurable reasoning effort:** Easily adjust the reasoning effort (low, medium, high) based on your specific use case and latency needs. -* **Full chain-of-thought:** Gain complete access to the model’s reasoning process, facilitating easier debugging and increased trust in outputs. It’s not intended to be shown to end users. -* **Fine-tunable:** Fully customize models to your specific use case through parameter fine-tuning. -* **Agentic capabilities:** Use the models’ native capabilities for function calling, [web browsing](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#browser), [Python code execution](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#python), and Structured Outputs. 
-* **MXFP4 quantization:** The models were post-trained with MXFP4 quantization of the MoE weights, making `gpt-oss-120b` run on a single 80GB GPU (like NVIDIA H100 or AMD MI300X) and the `gpt-oss-20b` model run within 16GB of memory. All evals were performed with the same MXFP4 quantization. - ---- - -# Inference examples - -## Transformers - -You can use `gpt-oss-120b` and `gpt-oss-20b` with Transformers. If you use the Transformers chat template, it will automatically apply the [harmony response format](https://github.com/openai/harmony). If you use `model.generate` directly, you need to apply the harmony format manually using the chat template or use our [openai-harmony](https://github.com/openai/harmony) package. - -To get started, install the necessary dependencies to setup your environment: - -``` -pip install -U transformers kernels torch -``` - -Once, setup you can proceed to run the model by running the snippet below: - -```py -from transformers import pipeline -import torch - -model_id = "openai/gpt-oss-120b" - -pipe = pipeline( - "text-generation", - model=model_id, - torch_dtype="auto", - device_map="auto", -) - -messages = [ - {"role": "user", "content": "Explain quantum mechanics clearly and concisely."}, -] - -outputs = pipe( - messages, - max_new_tokens=256, -) -print(outputs[0]["generated_text"][-1]) -``` - -Alternatively, you can run the model via [`Transformers Serve`](https://huggingface.co/docs/transformers/main/serving) to spin up a OpenAI-compatible webserver: - -``` -transformers serve -transformers chat localhost:8000 --model-name-or-path openai/gpt-oss-120b -``` - -[Learn more about how to use gpt-oss with Transformers.](https://cookbook.openai.com/articles/gpt-oss/run-transformers) - -## vLLM - -vLLM recommends using [uv](https://docs.astral.sh/uv/) for Python dependency management. You can use vLLM to spin up an OpenAI-compatible webserver. The following command will automatically download the model and start the server. 
- -```bash -uv pip install --pre vllm==0.10.1+gptoss \ - --extra-index-url https://wheels.vllm.ai/gpt-oss/ \ - --extra-index-url https://download.pytorch.org/whl/nightly/cu128 \ - --index-strategy unsafe-best-match - -vllm serve openai/gpt-oss-120b -``` - -[Learn more about how to use gpt-oss with vLLM.](https://cookbook.openai.com/articles/gpt-oss/run-vllm) - -## PyTorch / Triton - -To learn about how to use this model with PyTorch and Triton, check out our [reference implementations in the gpt-oss repository](https://github.com/openai/gpt-oss?tab=readme-ov-file#reference-pytorch-implementation). - -## Ollama - -If you are trying to run gpt-oss on consumer hardware, you can use Ollama by running the following commands after [installing Ollama](https://ollama.com/download). - -```bash -# gpt-oss-120b -ollama pull gpt-oss:120b -ollama run gpt-oss:120b -``` - -[Learn more about how to use gpt-oss with Ollama.](https://cookbook.openai.com/articles/gpt-oss/run-locally-ollama) - -#### LM Studio - -If you are using [LM Studio](https://lmstudio.ai/) you can use the following commands to download. - -```bash -# gpt-oss-120b -lms get openai/gpt-oss-120b -``` - -Check out our [awesome list](https://github.com/openai/gpt-oss/blob/main/awesome-gpt-oss.md) for a broader collection of gpt-oss resources and inference partners. - ---- - -# Download the model - -You can download the model weights from the [Hugging Face Hub](https://huggingface.co/collections/openai/gpt-oss-68911959590a1634ba11c7a4) directly from Hugging Face CLI: - -```shell -# gpt-oss-120b -huggingface-cli download openai/gpt-oss-120b --include "original/*" --local-dir gpt-oss-120b/ -pip install gpt-oss -python -m gpt_oss.chat model/ -``` - -# Reasoning levels - -You can adjust the reasoning level that suits your task across three levels: - -* **Low:** Fast responses for general dialogue. -* **Medium:** Balanced speed and detail. -* **High:** Deep and detailed analysis. 
-The reasoning level can be set in the system prompts, e.g., "Reasoning: high". +# openai/gpt-oss-120b -# Tool use +* Model creator: [openai](https://huggingface.co/openai) +* Original model: [gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b) -The gpt-oss models are excellent for: -* Web browsing (using built-in browsing tools) -* Function calling with defined schemas -* Agentic operations like browser tasks +## Differences -# Fine-tuning +* Model weights converted to bfloat16. -Both gpt-oss models can be fine-tuned for a variety of specialized use cases. +## License -This larger model `gpt-oss-120b` can be fine-tuned on a single H100 node, whereas the smaller [`gpt-oss-20b`](https://huggingface.co/openai/gpt-oss-20b) can even be fine-tuned on consumer hardware. +Refer to the license of the original model card. \ No newline at end of file diff --git a/config.json b/config.json index 42300b8993060d365b1d138a35998eb70c745285..10c36a4601cbbd02a51ca8a81635cdb9d093b86f 100644 --- a/config.json +++ b/config.json @@ -59,15 +59,6 @@ "num_local_experts": 128, "output_router_logits": false, "pad_token_id": 199999, - "quantization_config": { - "modules_to_not_convert": [ - "model.layers.*.self_attn", - "model.layers.*.mlp.router", - "model.embed_tokens", - "lm_head" - ], - "quant_method": "mxfp4" - }, "rms_norm_eps": 1e-05, "rope_scaling": { "beta_fast": 32.0, @@ -82,7 +73,8 @@ "sliding_window": 128, "swiglu_limit": 7.0, "tie_word_embeddings": false, - "transformers_version": "4.55.0.dev0", + "torch_dtype": "bfloat16", + "transformers_version": "4.56.0.dev0", "use_cache": true, "vocab_size": 201088 } diff --git a/generation_config.json b/generation_config.json index 86f91466555bd40e3de0b1edee3d5d82f4ccdbfe..78317cfc44866df4ad74d80917d2de9ec763d414 100644 --- a/generation_config.json +++ b/generation_config.json @@ -3,9 +3,8 @@ "do_sample": true, "eos_token_id": [ 200002, - 199999, - 200012 + 199999 ], "pad_token_id": 199999, - "transformers_version": "4.55.0.dev0" + 
"transformers_version": "4.56.0.dev0" } diff --git a/metal/model.bin b/metal/model.bin deleted file mode 100644 index c89c5068e11011c31b3f128e21055a15bfa76df6..0000000000000000000000000000000000000000 --- a/metal/model.bin +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:0f3d5b8a213f146ec29296dad5c3370844d95ba9d21e8dc49f7e61e6e9ee9042 -size 65238253568 diff --git a/model-00000-of-00014.safetensors b/model-00000-of-00014.safetensors deleted file mode 100644 index 8615e64cd507449d0ff278269f6f30f6d7046047..0000000000000000000000000000000000000000 --- a/model-00000-of-00014.safetensors +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:695218884684c611fe08a74751ee443f971e9bd9bc062edba822da3fe45969b7 -size 4625017896 diff --git a/model-00001-of-00014.safetensors b/model-00001-of-00014.safetensors deleted file mode 100644 index f446c6f97f7d27a0efbfc027362a808b24a9e2c5..0000000000000000000000000000000000000000 --- a/model-00001-of-00014.safetensors +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:a881aa5f561b26a22b14a8262aa61849ace349ffd73d74769e030ac90a1fcf8a -size 4115586736 diff --git a/model-00001-of-00073.safetensors b/model-00001-of-00073.safetensors new file mode 100644 index 0000000000000000000000000000000000000000..ddd8a9ed06fd52c69aa47296fd1baa1d837bb822 --- /dev/null +++ b/model-00001-of-00073.safetensors @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:51b5b69e7feecbc961896afbe7be9a02fce772d236eb5d14d98bf9f93565e872 +size 1212106064 diff --git a/model-00002-of-00014.safetensors b/model-00002-of-00014.safetensors deleted file mode 100644 index 5cdecd0ac50e5a996b93321274c91789dd91b3cb..0000000000000000000000000000000000000000 --- a/model-00002-of-00014.safetensors +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:022478dd04398c5bdb545a5be0a6437ecc2eb53d1dbd29edafcfff4b3ddf0a41 -size 4625017888 
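The hunks above and below replace the 14-shard MXFP4 checkpoint with a 73-shard bfloat16 one, and the size growth follows directly from per-parameter storage cost: MXFP4 packs the MoE weights into roughly 4.25 bits each (4-bit values plus block scales), while bfloat16 spends 16 bits on every parameter. A rough sanity check, assuming the 117B total parameter count from the original model card and treating the per-parameter costs as uniform (an approximation, since attention and router weights were never quantized):

```python
# Rough storage estimate for gpt-oss-120b under two weight formats.
# Assumptions: 117e9 total parameters (from the original model card);
# ~4.25 bits/param for MXFP4 is an approximation that ignores the
# unquantized attention/router tensors.

TOTAL_PARAMS = 117e9

def checkpoint_bytes(bits_per_param: float, params: float = TOTAL_PARAMS) -> float:
    """Approximate checkpoint size in bytes for a uniform per-parameter cost."""
    return params * bits_per_param / 8

bf16_bytes = checkpoint_bytes(16)     # bfloat16: 2 bytes per parameter
mxfp4_bytes = checkpoint_bytes(4.25)  # MXFP4: ~4.25 bits per parameter

print(f"bf16  ~ {bf16_bytes / 1e9:.0f} GB")
print(f"mxfp4 ~ {mxfp4_bytes / 1e9:.0f} GB")
```

The bfloat16 estimate lands around 234 GB, which is consistent with the 73 new shards (alternating ~4.2 GB and ~2.2 GB in the hunks below), while the MXFP4 estimate of ~62 GB is close to the deleted 65 GB `metal/model.bin`.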
diff --git a/model-00002-of-00073.safetensors b/model-00002-of-00073.safetensors new file mode 100644 index 0000000000000000000000000000000000000000..ffc0cdeabd7728fd42611ee0398fa4b6378779da --- /dev/null +++ b/model-00002-of-00073.safetensors @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:66f8d893fc4fbd27a07d2454114d5cc84821fa18d7575b625aa184b31ef0a572 +size 4248207640 diff --git a/model-00003-of-00014.safetensors b/model-00003-of-00014.safetensors deleted file mode 100644 index b6c1b637b64b5e1a4e84895543a2a5e9cc4567b7..0000000000000000000000000000000000000000 --- a/model-00003-of-00014.safetensors +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:47aee9e7b9d5bedb215042c01ccededd9bd9c30b0dddea862dc2506b9d6c74de -size 4115586752 diff --git a/model-00003-of-00073.safetensors b/model-00003-of-00073.safetensors new file mode 100644 index 0000000000000000000000000000000000000000..26d291d507c85ca8471d240dd86d0e672cb8c397 --- /dev/null +++ b/model-00003-of-00073.safetensors @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:26fb7705dba6ce529158ecea8863e20dc2c197790c159f3ec15b865b8474c97a +size 2177954736 diff --git a/model-00004-of-00014.safetensors b/model-00004-of-00014.safetensors deleted file mode 100644 index afc330d317da75954840a29baede067bbbbc20f4..0000000000000000000000000000000000000000 --- a/model-00004-of-00014.safetensors +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:f6c2752acda607b1d5ca52df9e75c1b9b2761e6875ff10c9bd6ddac473c0262e -size 4625017896 diff --git a/model-00004-of-00073.safetensors b/model-00004-of-00073.safetensors new file mode 100644 index 0000000000000000000000000000000000000000..83f6cf600c5f0c8b103e33df6643999950836fb8 --- /dev/null +++ b/model-00004-of-00073.safetensors @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dda6a71a050e68fe98ddfc12c0e3e86234b29d2ee5a6ba9083cabcc9e173563f +size 4248207640 
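Every blob in these hunks is a Git LFS pointer rather than the weights themselves: a three-line text stub (`version`, `oid`, `size`) that Git stores in place of the large binary. A minimal parser for that format; the pointer text in the example is copied from the `model-00001-of-00073.safetensors` hunk above:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        # Each pointer line is "<key> <value>", e.g. "size 1212106064".
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# Pointer contents taken verbatim from the model-00001-of-00073.safetensors hunk.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:51b5b69e7feecbc961896afbe7be9a02fce772d236eb5d14d98bf9f93565e872
size 1212106064
"""

info = parse_lfs_pointer(pointer)
print(info["oid"])                    # content hash of the real shard
print(int(info["size"]) / 1e9, "GB")  # ~1.2 GB for the first bf16 shard
```

This is why the diff shows only three `-`/`+` lines per multi-gigabyte file: the repository tracks the pointers, and the `oid`/`size` pair is what actually changed when the shards were re-exported in bfloat16.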
diff --git a/model-00005-of-00014.safetensors b/model-00005-of-00014.safetensors deleted file mode 100644 index f798bf13d63288c04d9ae46104d2b161baa4d298..0000000000000000000000000000000000000000 --- a/model-00005-of-00014.safetensors +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:0c8dd401544c31cb93b8459eee7da20ea2a07626a59455d7d92b85257df9b46c -size 4115586696 diff --git a/model-00005-of-00073.safetensors b/model-00005-of-00073.safetensors new file mode 100644 index 0000000000000000000000000000000000000000..cecacd34bd0264855ff18dbe2c90a5d7bf028f40 --- /dev/null +++ b/model-00005-of-00073.safetensors @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7a37774982420254a3c7141b981aa37878713d6bc6f78a756295cd9e41c1ab88 +size 2177954736 diff --git a/model-00006-of-00014.safetensors b/model-00006-of-00014.safetensors deleted file mode 100644 index 3dbd730581a2514b111231c9c09df5c2aac375c6..0000000000000000000000000000000000000000 --- a/model-00006-of-00014.safetensors +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:28d839f2e027985a8b14e45f2323798862eddb7770ee9800ea6b7c803abee489 -size 4625017856 diff --git a/model-00006-of-00073.safetensors b/model-00006-of-00073.safetensors new file mode 100644 index 0000000000000000000000000000000000000000..4826abbdd222efc4dd73a42a68a3767db186b184 --- /dev/null +++ b/model-00006-of-00073.safetensors @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:01b81738668bce7b58ec8cef5e8c25b1e2b7afafcd1af7913da2788838263bd0 +size 4248207640 diff --git a/model-00007-of-00014.safetensors b/model-00007-of-00014.safetensors deleted file mode 100644 index b112bc3b69d95159bfad56dbcc1c152ce3482b10..0000000000000000000000000000000000000000 --- a/model-00007-of-00014.safetensors +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:c8958c5f183c04f6ea959cfd90562b5128124154b2bbf979b8a22b9405b30ed8 -size 
4060267176 diff --git a/model-00007-of-00073.safetensors b/model-00007-of-00073.safetensors new file mode 100644 index 0000000000000000000000000000000000000000..4c7b7984e500cd312528d731e1a75da66ae9e5d0 --- /dev/null +++ b/model-00007-of-00073.safetensors @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:42c25b0c1f0d5f64594529628151ace3ae838869c744c361cd5421165442e26e +size 2177954736 diff --git a/model-00008-of-00014.safetensors b/model-00008-of-00014.safetensors deleted file mode 100644 index 265749f8e92661a91a55affc3ce6f2b6a3a56a99..0000000000000000000000000000000000000000 --- a/model-00008-of-00014.safetensors +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:bf1f2a88868ffc37d520dcf77d26f0e823710b5e682d473ff10f6974fa3b7517 -size 4625017896 diff --git a/model-00008-of-00073.safetensors b/model-00008-of-00073.safetensors new file mode 100644 index 0000000000000000000000000000000000000000..2420db70e716e40cceab702078683d15ae2a9c77 --- /dev/null +++ b/model-00008-of-00073.safetensors @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4b43d83c403e92882c79cdfbeb8b80e30d0d7fe58329ab8d3a8a13db0c1d7c5d +size 4248207640 diff --git a/model-00009-of-00014.safetensors b/model-00009-of-00014.safetensors deleted file mode 100644 index 51713b9ccebf39dc86d49fa4f8fcf54eedaf8464..0000000000000000000000000000000000000000 --- a/model-00009-of-00014.safetensors +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:f72d34a4004241b45c332b61f8ffa124e9a913bc1ab442b66e717d3e94e741ce -size 4170906304 diff --git a/model-00009-of-00073.safetensors b/model-00009-of-00073.safetensors new file mode 100644 index 0000000000000000000000000000000000000000..da1c422b997c24cbff193005405cbfa8b6b6afbe --- /dev/null +++ b/model-00009-of-00073.safetensors @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:58fc61132427d0002e559feca0a040ddcc2be59557bc9b713c5011702c25e5b6 +size 
2177954736 diff --git a/model-00010-of-00014.safetensors b/model-00010-of-00014.safetensors deleted file mode 100644 index b990a8080b2660b011d2e505556163b90c80dd33..0000000000000000000000000000000000000000 --- a/model-00010-of-00014.safetensors +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:f48c867c2cb0a44bfc2f8768cb98e4aec9a350946fceacfebdcad5d32ad4a471 -size 4625017896 diff --git a/model-00010-of-00073.safetensors b/model-00010-of-00073.safetensors new file mode 100644 index 0000000000000000000000000000000000000000..e60d1990f00f619edf4f6cd8d8c70146e442bea3 --- /dev/null +++ b/model-00010-of-00073.safetensors @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3dc516fd1e058767eb5d566b81f2619fc9319aed76409628d94b9fda5e4994a0 +size 4248207640 diff --git a/model-00011-of-00014.safetensors b/model-00011-of-00014.safetensors deleted file mode 100644 index d81ede9c1c10d560516d500500977997ce238a43..0000000000000000000000000000000000000000 --- a/model-00011-of-00014.safetensors +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:a06851b2cfd35f48722f823bc1ab8f7bcb4a878a5b8e975f4d3544f230454eeb -size 4115586752 diff --git a/model-00011-of-00073.safetensors b/model-00011-of-00073.safetensors new file mode 100644 index 0000000000000000000000000000000000000000..0f2ed9bd773f69fecb6fc117062ff3229f50a589 --- /dev/null +++ b/model-00011-of-00073.safetensors @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:153eef134702ea1df876220d0e3873e47b124b7764dabc8c64f25a8247761fec +size 2177954736 diff --git a/model-00012-of-00014.safetensors b/model-00012-of-00014.safetensors deleted file mode 100644 index 9263e23d9dbe806b366ff16ae6526d78a74b4b73..0000000000000000000000000000000000000000 --- a/model-00012-of-00014.safetensors +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:3af33667c307e20ae2a7648ea52653de46dd0171601ec5c696e47a2f5d5bf1e4 
-size 4064660808 diff --git a/model-00012-of-00073.safetensors b/model-00012-of-00073.safetensors new file mode 100644 index 0000000000000000000000000000000000000000..972054b5252c2a5dfab20c4eac58bd0309ac6975 --- /dev/null +++ b/model-00012-of-00073.safetensors @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:39b2a45ac61f5b5d44e5a2df4b88643f8c7d57545b06c60121d954a9598599c1 +size 4248207640 diff --git a/model-00013-of-00014.safetensors b/model-00013-of-00014.safetensors deleted file mode 100644 index 42e0ce5ee76776d93ae36ae8bdc35040d7822f9e..0000000000000000000000000000000000000000 --- a/model-00013-of-00014.safetensors +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:bcbcb74b043e071d1e05471d500d74dcf661175e00878ed302ccdf1801a75aef -size 4625017896 diff --git a/model-00013-of-00073.safetensors b/model-00013-of-00073.safetensors new file mode 100644 index 0000000000000000000000000000000000000000..681fdac1d55250a1b510443b8d9ed07ddae77074 --- /dev/null +++ b/model-00013-of-00073.safetensors @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:11ceba841dfdd14f0e2b313e676c48c4d66b5824a2fdfa25aa327fabccb227f2 +size 2177954736 diff --git a/model-00014-of-00014.safetensors b/model-00014-of-00014.safetensors deleted file mode 100644 index 0cc0d7a6c234cd77a726881f98ddb0615ae95caa..0000000000000000000000000000000000000000 --- a/model-00014-of-00014.safetensors +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:54b1be1609696c307cc5ca117b1fa54feaddebffa04e9c2db117652a01964230 -size 4115586736 diff --git a/model-00014-of-00073.safetensors b/model-00014-of-00073.safetensors new file mode 100644 index 0000000000000000000000000000000000000000..b8fb665708d218c8c30bf3b87cbf7e0911c014c5 --- /dev/null +++ b/model-00014-of-00073.safetensors @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0a4ba78d4b340e08bcdb3fbfb48c26cc57abd581046b6a2a6620b9c63cca14aa 
+size 4248207640 diff --git a/model-00015-of-00073.safetensors b/model-00015-of-00073.safetensors new file mode 100644 index 0000000000000000000000000000000000000000..230f87e0b27b968cee20aafe3c7f1a1696abad94 --- /dev/null +++ b/model-00015-of-00073.safetensors @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c0c5e76d8bd303eba203bb00f9ab3dcb4cffa22efe3c400c63c4eb146dd1d744 +size 2177954736 diff --git a/model-00016-of-00073.safetensors b/model-00016-of-00073.safetensors new file mode 100644 index 0000000000000000000000000000000000000000..b231d9ea3cbea1a55578d11a9586d7536ac7c0f3 --- /dev/null +++ b/model-00016-of-00073.safetensors @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:925cd470b083f8ce05484d991c989419873eecf562080947d4c64649ab56677d +size 4248207640 diff --git a/model-00017-of-00073.safetensors b/model-00017-of-00073.safetensors new file mode 100644 index 0000000000000000000000000000000000000000..beba6665519353ef3a9627510a3410188e6d0133 --- /dev/null +++ b/model-00017-of-00073.safetensors @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f0ccd83bcd726ee1d758e4eda1d181fc9725e19dfd7ed6326e60b2403f394010 +size 2177954736 diff --git a/model-00018-of-00073.safetensors b/model-00018-of-00073.safetensors new file mode 100644 index 0000000000000000000000000000000000000000..284f05e627e9326343342712706cf4d4af78b817 --- /dev/null +++ b/model-00018-of-00073.safetensors @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f52b5fe061cefc784b2fb5a09ce6f30fe1fa6641103a141a5b8722df2a48aff8 +size 4248207640 diff --git a/model-00019-of-00073.safetensors b/model-00019-of-00073.safetensors new file mode 100644 index 0000000000000000000000000000000000000000..177ee272ae674afeef8d27ed88093bd22e7be298 --- /dev/null +++ b/model-00019-of-00073.safetensors @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:491cc447b9ac3010ddd8a5a1a2debfbf089a5fd1bececa2bca9589e23280b512 +size 
2177954736 diff --git a/model-00020-of-00073.safetensors b/model-00020-of-00073.safetensors new file mode 100644 index 0000000000000000000000000000000000000000..0ca9839d90ea7c40c1c76454e1ede34b8f6b8480 --- /dev/null +++ b/model-00020-of-00073.safetensors @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3484899ce81bacff6cf71f3e225152d61b3f688906bdaca527c87c5eb35d7c1f +size 4248207640 diff --git a/model-00021-of-00073.safetensors b/model-00021-of-00073.safetensors new file mode 100644 index 0000000000000000000000000000000000000000..a8cfa2b32b8593d592a768bfaa2d5f66afcfae73 --- /dev/null +++ b/model-00021-of-00073.safetensors @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fd51c755fc6bf56bbe9eff6fe4045ec3b6e36880b88a385304ce9b8e8e1d348f +size 2177954688 diff --git a/model-00022-of-00073.safetensors b/model-00022-of-00073.safetensors new file mode 100644 index 0000000000000000000000000000000000000000..1046e178507ebf5cc9a623a26d45ffb1b6254d5f --- /dev/null +++ b/model-00022-of-00073.safetensors @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7d18647e59b8982622412fa33d9c8defbac9441670fedeb94bf216382f904e48 +size 4248207640 diff --git a/model-00023-of-00073.safetensors b/model-00023-of-00073.safetensors new file mode 100644 index 0000000000000000000000000000000000000000..b889ffe3d699ddc4e0c108d914ba37256c9064e9 --- /dev/null +++ b/model-00023-of-00073.safetensors @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a80156129505db89d74e0f54e3f2f4ce43f9355f92d19483179d66eaaab4db20 +size 2177954752 diff --git a/model-00024-of-00073.safetensors b/model-00024-of-00073.safetensors new file mode 100644 index 0000000000000000000000000000000000000000..cf61bb785574a6ebca04b516a646503ebc931f8c --- /dev/null +++ b/model-00024-of-00073.safetensors @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f75eeb2761915a28fd12ae35d2fa13b6b984d9c6829184dc6a81bb778d1811c2 +size 
4248207640 diff --git a/model-00025-of-00073.safetensors b/model-00025-of-00073.safetensors new file mode 100644 index 0000000000000000000000000000000000000000..163e6f71eba6eb1ad7e4b544a52457e55d48162c --- /dev/null +++ b/model-00025-of-00073.safetensors @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8d06dffd73f797cb37e5490618d6b65e047525f6aa520220a26a26de6b831d31 +size 2177954752 diff --git a/model-00026-of-00073.safetensors b/model-00026-of-00073.safetensors new file mode 100644 index 0000000000000000000000000000000000000000..c0878898b2bbee01fece50654f41f0c4ce2ebba0 --- /dev/null +++ b/model-00026-of-00073.safetensors @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:78d69f06e29de3eeb3e8df6353d418cfca5a6abb432f90b95c750252e5d3b7e1 +size 4248207640 diff --git a/model-00027-of-00073.safetensors b/model-00027-of-00073.safetensors new file mode 100644 index 0000000000000000000000000000000000000000..1d358f0ba36c8c920c81b34f06d45298a333c77b --- /dev/null +++ b/model-00027-of-00073.safetensors @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f5661406e1ac416fc79107b63e0b73d86a7620e88da97910d98ad8e76af6e43a +size 2177954752 diff --git a/model-00028-of-00073.safetensors b/model-00028-of-00073.safetensors new file mode 100644 index 0000000000000000000000000000000000000000..15e444843fc91d96ca903cb006caf5d99846de73 --- /dev/null +++ b/model-00028-of-00073.safetensors @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2301da5402b74812725f99d892af657a421d7d3ff1fedf3309c28aa0fa0e4353 +size 4248207640 diff --git a/model-00029-of-00073.safetensors b/model-00029-of-00073.safetensors new file mode 100644 index 0000000000000000000000000000000000000000..4e66c574f07748e7d263bf381f0c811a17d447fa --- /dev/null +++ b/model-00029-of-00073.safetensors @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4a1abacc9e3da8fa1e9228657af1df9bb963ad5b943021dc2062b35de801f8da +size 
2177954752 diff --git a/model-00030-of-00073.safetensors b/model-00030-of-00073.safetensors new file mode 100644 index 0000000000000000000000000000000000000000..934dcb5bfa155c571fe3ab8cdbabffce45ca1824 --- /dev/null +++ b/model-00030-of-00073.safetensors @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ad8c1f5337c4b372da5d49cfeca7996e85cf3f8af0323359abfc5a38539db4d1 +size 4248207640 diff --git a/model-00031-of-00073.safetensors b/model-00031-of-00073.safetensors new file mode 100644 index 0000000000000000000000000000000000000000..d3b6a40658663c3beaa7b640d1a212a395a49704 --- /dev/null +++ b/model-00031-of-00073.safetensors @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:706223001eb940b0d020019616a37d5701474474fed8488ae048103b68ef2f6e +size 2177954752 diff --git a/model-00032-of-00073.safetensors b/model-00032-of-00073.safetensors new file mode 100644 index 0000000000000000000000000000000000000000..3cc323a0b5cf83aeb072e6aeee414ddbb742a8e3 --- /dev/null +++ b/model-00032-of-00073.safetensors @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1a9dc8df6da9507d6f99547ed7b6ab17793cbcd5a81c0c2f3439ff8f6c1d10c9 +size 4248207640 diff --git a/model-00033-of-00073.safetensors b/model-00033-of-00073.safetensors new file mode 100644 index 0000000000000000000000000000000000000000..58701af7a5fe4c444949a98d8c207009717250e1 --- /dev/null +++ b/model-00033-of-00073.safetensors @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b2f451697d42df95e85c5b2126324d663589b3d8f0afdb050f9c1ec445b95c01 +size 2177954752 diff --git a/model-00034-of-00073.safetensors b/model-00034-of-00073.safetensors new file mode 100644 index 0000000000000000000000000000000000000000..088d89c3ce46312b5fb168b42c9b138975af0fdd --- /dev/null +++ b/model-00034-of-00073.safetensors @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2278f08494e6ef08a3e4b5794fc650970b01bb6652502c3bf861d9ee39024d52 +size 
[Binary weight shards elided: the diff adds model-00035-of-00073.safetensors through model-00073-of-00073.safetensors as new Git LFS pointer files (mode 100644). Each pointer records a `version https://git-lfs.github.com/spec/v1` header, a sha256 `oid`, and a `size`; the shards alternate between 2,177,954,752 and 4,248,207,640 bytes, with the final shard at 3,282,388,536 bytes.

model.safetensors.index.json is rewritten from a single line to 623 lines. Its metadata records a total_size of 65,248,815,744 bytes, and its weight_map assigns every tensor (per-layer input and post-attention layernorms, q/k/v/o projection weights and biases, attention sinks, MoE router weights and biases, and expert gate_up_proj/down_proj blocks, scales, and biases) to one of 14 shard files, model-00000-of-00014.safetensors onward.]
"model.layers.9.mlp.experts.gate_up_proj_scales": "model-00006-of-00014.safetensors", "model.layers.9.mlp.experts.down_proj_bias": "model-00006-of-00014.safetensors", "model.layers.9.mlp.experts.down_proj_blocks": "model-00007-of-00014.safetensors", "model.layers.9.mlp.experts.down_proj_scales": "model-00007-of-00014.safetensors", "model.layers.9.post_attention_layernorm.weight": "model-00007-of-00014.safetensors", "model.layers.14.mlp.experts.gate_up_proj_blocks": "model-00007-of-00014.safetensors", "model.layers.14.mlp.experts.gate_up_proj_scales": "model-00007-of-00014.safetensors", "model.layers.14.mlp.experts.down_proj_bias": "model-00007-of-00014.safetensors", "model.layers.14.mlp.experts.down_proj_blocks": "model-00007-of-00014.safetensors", "model.layers.14.mlp.experts.down_proj_scales": "model-00007-of-00014.safetensors", "model.layers.14.post_attention_layernorm.weight": "model-00007-of-00014.safetensors", "model.layers.15.input_layernorm.weight": "model-00007-of-00014.safetensors", "model.layers.15.self_attn.o_proj.bias": "model-00007-of-00014.safetensors", "model.layers.15.self_attn.o_proj.weight": "model-00007-of-00014.safetensors", "model.layers.15.self_attn.q_proj.bias": "model-00007-of-00014.safetensors", "model.layers.15.self_attn.k_proj.bias": "model-00007-of-00014.safetensors", "model.layers.15.self_attn.v_proj.bias": "model-00007-of-00014.safetensors", "model.layers.15.self_attn.q_proj.weight": "model-00007-of-00014.safetensors", "model.layers.15.self_attn.k_proj.weight": "model-00007-of-00014.safetensors", "model.layers.15.self_attn.v_proj.weight": "model-00007-of-00014.safetensors", "model.layers.15.self_attn.sinks": "model-00007-of-00014.safetensors", "model.layers.15.mlp.router.bias": "model-00007-of-00014.safetensors", "model.layers.15.mlp.router.weight": "model-00007-of-00014.safetensors", "model.layers.15.mlp.experts.gate_up_proj_bias": "model-00007-of-00014.safetensors", "model.layers.15.mlp.experts.gate_up_proj_blocks": 
"model-00007-of-00014.safetensors", "model.layers.15.mlp.experts.gate_up_proj_scales": "model-00007-of-00014.safetensors", "model.layers.15.mlp.experts.down_proj_bias": "model-00007-of-00014.safetensors", "model.layers.15.mlp.experts.down_proj_blocks": "model-00007-of-00014.safetensors", "model.layers.15.mlp.experts.down_proj_scales": "model-00007-of-00014.safetensors", "model.layers.15.post_attention_layernorm.weight": "model-00007-of-00014.safetensors", "model.layers.16.input_layernorm.weight": "model-00007-of-00014.safetensors", "model.layers.16.self_attn.o_proj.bias": "model-00007-of-00014.safetensors", "model.layers.16.self_attn.o_proj.weight": "model-00007-of-00014.safetensors", "model.layers.16.self_attn.q_proj.bias": "model-00007-of-00014.safetensors", "model.layers.16.self_attn.k_proj.bias": "model-00007-of-00014.safetensors", "model.layers.16.self_attn.v_proj.bias": "model-00007-of-00014.safetensors", "model.layers.16.self_attn.q_proj.weight": "model-00007-of-00014.safetensors", "model.layers.16.self_attn.k_proj.weight": "model-00007-of-00014.safetensors", "model.layers.16.self_attn.v_proj.weight": "model-00007-of-00014.safetensors", "model.layers.16.self_attn.sinks": "model-00007-of-00014.safetensors", "model.layers.16.mlp.router.bias": "model-00007-of-00014.safetensors", "model.layers.16.mlp.router.weight": "model-00007-of-00014.safetensors", "model.layers.16.mlp.experts.gate_up_proj_bias": "model-00007-of-00014.safetensors", "model.layers.16.mlp.experts.gate_up_proj_blocks": "model-00008-of-00014.safetensors", "model.layers.16.mlp.experts.gate_up_proj_scales": "model-00008-of-00014.safetensors", "model.layers.16.mlp.experts.down_proj_bias": "model-00008-of-00014.safetensors", "model.layers.16.mlp.experts.down_proj_blocks": "model-00008-of-00014.safetensors", "model.layers.16.mlp.experts.down_proj_scales": "model-00008-of-00014.safetensors", "model.layers.16.post_attention_layernorm.weight": "model-00008-of-00014.safetensors", 
"model.layers.17.input_layernorm.weight": "model-00008-of-00014.safetensors", "model.layers.17.self_attn.o_proj.bias": "model-00008-of-00014.safetensors", "model.layers.17.self_attn.o_proj.weight": "model-00008-of-00014.safetensors", "model.layers.17.self_attn.q_proj.bias": "model-00008-of-00014.safetensors", "model.layers.17.self_attn.k_proj.bias": "model-00008-of-00014.safetensors", "model.layers.17.self_attn.v_proj.bias": "model-00008-of-00014.safetensors", "model.layers.17.self_attn.q_proj.weight": "model-00008-of-00014.safetensors", "model.layers.17.self_attn.k_proj.weight": "model-00008-of-00014.safetensors", "model.layers.17.self_attn.v_proj.weight": "model-00008-of-00014.safetensors", "model.layers.17.self_attn.sinks": "model-00008-of-00014.safetensors", "model.layers.17.mlp.router.bias": "model-00008-of-00014.safetensors", "model.layers.17.mlp.router.weight": "model-00008-of-00014.safetensors", "model.layers.17.mlp.experts.gate_up_proj_bias": "model-00008-of-00014.safetensors", "model.layers.17.mlp.experts.gate_up_proj_blocks": "model-00008-of-00014.safetensors", "model.layers.17.mlp.experts.gate_up_proj_scales": "model-00008-of-00014.safetensors", "model.layers.17.mlp.experts.down_proj_bias": "model-00008-of-00014.safetensors", "model.layers.17.mlp.experts.down_proj_blocks": "model-00008-of-00014.safetensors", "model.layers.17.mlp.experts.down_proj_scales": "model-00008-of-00014.safetensors", "model.layers.17.post_attention_layernorm.weight": "model-00008-of-00014.safetensors", "model.layers.18.input_layernorm.weight": "model-00008-of-00014.safetensors", "model.layers.18.self_attn.o_proj.bias": "model-00008-of-00014.safetensors", "model.layers.18.self_attn.o_proj.weight": "model-00008-of-00014.safetensors", "model.layers.18.self_attn.q_proj.bias": "model-00008-of-00014.safetensors", "model.layers.18.self_attn.k_proj.bias": "model-00008-of-00014.safetensors", "model.layers.18.self_attn.v_proj.bias": "model-00008-of-00014.safetensors", 
"model.layers.18.self_attn.q_proj.weight": "model-00008-of-00014.safetensors", "model.layers.18.self_attn.k_proj.weight": "model-00008-of-00014.safetensors", "model.layers.18.self_attn.v_proj.weight": "model-00008-of-00014.safetensors", "model.layers.18.self_attn.sinks": "model-00008-of-00014.safetensors", "model.layers.18.mlp.router.bias": "model-00008-of-00014.safetensors", "model.layers.18.mlp.router.weight": "model-00008-of-00014.safetensors", "model.layers.18.mlp.experts.gate_up_proj_bias": "model-00008-of-00014.safetensors", "model.layers.18.mlp.experts.gate_up_proj_blocks": "model-00008-of-00014.safetensors", "model.layers.18.mlp.experts.gate_up_proj_scales": "model-00008-of-00014.safetensors", "model.layers.18.mlp.experts.down_proj_bias": "model-00008-of-00014.safetensors", "model.layers.18.mlp.experts.down_proj_blocks": "model-00009-of-00014.safetensors", "model.layers.18.mlp.experts.down_proj_scales": "model-00009-of-00014.safetensors", "model.layers.18.post_attention_layernorm.weight": "model-00009-of-00014.safetensors", "model.layers.19.input_layernorm.weight": "model-00009-of-00014.safetensors", "model.layers.19.self_attn.o_proj.bias": "model-00009-of-00014.safetensors", "model.layers.19.self_attn.o_proj.weight": "model-00009-of-00014.safetensors", "model.layers.19.self_attn.q_proj.bias": "model-00009-of-00014.safetensors", "model.layers.19.self_attn.k_proj.bias": "model-00009-of-00014.safetensors", "model.layers.19.self_attn.v_proj.bias": "model-00009-of-00014.safetensors", "model.layers.19.self_attn.q_proj.weight": "model-00009-of-00014.safetensors", "model.layers.19.self_attn.k_proj.weight": "model-00009-of-00014.safetensors", "model.layers.19.self_attn.v_proj.weight": "model-00009-of-00014.safetensors", "model.layers.19.self_attn.sinks": "model-00009-of-00014.safetensors", "model.layers.19.mlp.router.bias": "model-00009-of-00014.safetensors", "model.layers.19.mlp.router.weight": "model-00009-of-00014.safetensors", 
"model.layers.19.mlp.experts.gate_up_proj_bias": "model-00009-of-00014.safetensors", "model.layers.19.mlp.experts.gate_up_proj_blocks": "model-00009-of-00014.safetensors", "model.layers.19.mlp.experts.gate_up_proj_scales": "model-00009-of-00014.safetensors", "model.layers.19.mlp.experts.down_proj_bias": "model-00009-of-00014.safetensors", "model.layers.19.mlp.experts.down_proj_blocks": "model-00009-of-00014.safetensors", "model.layers.19.mlp.experts.down_proj_scales": "model-00009-of-00014.safetensors", "model.layers.19.post_attention_layernorm.weight": "model-00009-of-00014.safetensors", "model.layers.2.input_layernorm.weight": "model-00009-of-00014.safetensors", "model.layers.2.self_attn.o_proj.bias": "model-00009-of-00014.safetensors", "model.layers.2.self_attn.o_proj.weight": "model-00009-of-00014.safetensors", "model.layers.2.self_attn.q_proj.bias": "model-00009-of-00014.safetensors", "model.layers.2.self_attn.k_proj.bias": "model-00009-of-00014.safetensors", "model.layers.2.self_attn.v_proj.bias": "model-00009-of-00014.safetensors", "model.layers.2.self_attn.q_proj.weight": "model-00009-of-00014.safetensors", "model.layers.2.self_attn.k_proj.weight": "model-00009-of-00014.safetensors", "model.layers.2.self_attn.v_proj.weight": "model-00009-of-00014.safetensors", "model.layers.2.self_attn.sinks": "model-00009-of-00014.safetensors", "model.layers.2.mlp.router.bias": "model-00009-of-00014.safetensors", "model.layers.2.mlp.router.weight": "model-00009-of-00014.safetensors", "model.layers.2.mlp.experts.gate_up_proj_bias": "model-00009-of-00014.safetensors", "model.layers.0.input_layernorm.weight": "model-00009-of-00014.safetensors", "model.layers.0.self_attn.o_proj.bias": "model-00009-of-00014.safetensors", "model.layers.0.self_attn.o_proj.weight": "model-00009-of-00014.safetensors", "model.layers.0.self_attn.q_proj.bias": "model-00009-of-00014.safetensors", "model.layers.0.self_attn.k_proj.bias": "model-00009-of-00014.safetensors", 
"model.layers.0.self_attn.v_proj.bias": "model-00009-of-00014.safetensors", "model.layers.0.self_attn.q_proj.weight": "model-00009-of-00014.safetensors", "model.layers.0.self_attn.k_proj.weight": "model-00009-of-00014.safetensors", "model.layers.0.self_attn.v_proj.weight": "model-00009-of-00014.safetensors", "model.layers.0.self_attn.sinks": "model-00009-of-00014.safetensors", "model.layers.0.mlp.router.bias": "model-00009-of-00014.safetensors", "model.layers.0.mlp.router.weight": "model-00009-of-00014.safetensors", "model.layers.0.mlp.experts.gate_up_proj_bias": "model-00009-of-00014.safetensors", "model.layers.0.mlp.experts.gate_up_proj_blocks": "model-00009-of-00014.safetensors", "model.layers.0.mlp.experts.gate_up_proj_scales": "model-00009-of-00014.safetensors", "model.layers.0.mlp.experts.down_proj_bias": "model-00009-of-00014.safetensors", "model.layers.0.mlp.experts.down_proj_blocks": "model-00009-of-00014.safetensors", "model.layers.0.mlp.experts.down_proj_scales": "model-00009-of-00014.safetensors", "model.layers.0.post_attention_layernorm.weight": "model-00009-of-00014.safetensors", "model.layers.1.input_layernorm.weight": "model-00009-of-00014.safetensors", "model.layers.1.self_attn.o_proj.bias": "model-00009-of-00014.safetensors", "model.layers.1.self_attn.o_proj.weight": "model-00009-of-00014.safetensors", "model.layers.1.self_attn.q_proj.bias": "model-00009-of-00014.safetensors", "model.layers.1.self_attn.k_proj.bias": "model-00009-of-00014.safetensors", "model.layers.1.self_attn.v_proj.bias": "model-00009-of-00014.safetensors", "model.layers.1.self_attn.q_proj.weight": "model-00009-of-00014.safetensors", "model.layers.1.self_attn.k_proj.weight": "model-00009-of-00014.safetensors", "model.layers.1.self_attn.v_proj.weight": "model-00009-of-00014.safetensors", "model.layers.1.self_attn.sinks": "model-00009-of-00014.safetensors", "model.layers.1.mlp.router.bias": "model-00009-of-00014.safetensors", "model.layers.1.mlp.router.weight": 
"model-00009-of-00014.safetensors", "model.layers.1.mlp.experts.gate_up_proj_bias": "model-00009-of-00014.safetensors", "model.layers.1.mlp.experts.gate_up_proj_blocks": "model-00010-of-00014.safetensors", "model.layers.1.mlp.experts.gate_up_proj_scales": "model-00010-of-00014.safetensors", "model.layers.1.mlp.experts.down_proj_bias": "model-00010-of-00014.safetensors", "model.layers.1.mlp.experts.down_proj_blocks": "model-00010-of-00014.safetensors", "model.layers.1.mlp.experts.down_proj_scales": "model-00010-of-00014.safetensors", "model.layers.1.post_attention_layernorm.weight": "model-00010-of-00014.safetensors", "model.layers.10.input_layernorm.weight": "model-00010-of-00014.safetensors", "model.layers.10.self_attn.o_proj.bias": "model-00010-of-00014.safetensors", "model.layers.10.self_attn.o_proj.weight": "model-00010-of-00014.safetensors", "model.layers.10.self_attn.q_proj.bias": "model-00010-of-00014.safetensors", "model.layers.10.self_attn.k_proj.bias": "model-00010-of-00014.safetensors", "model.layers.10.self_attn.v_proj.bias": "model-00010-of-00014.safetensors", "model.layers.10.self_attn.q_proj.weight": "model-00010-of-00014.safetensors", "model.layers.10.self_attn.k_proj.weight": "model-00010-of-00014.safetensors", "model.layers.10.self_attn.v_proj.weight": "model-00010-of-00014.safetensors", "model.layers.10.self_attn.sinks": "model-00010-of-00014.safetensors", "model.layers.10.mlp.router.bias": "model-00010-of-00014.safetensors", "model.layers.10.mlp.router.weight": "model-00010-of-00014.safetensors", "model.layers.10.mlp.experts.gate_up_proj_bias": "model-00010-of-00014.safetensors", "model.layers.10.mlp.experts.gate_up_proj_blocks": "model-00010-of-00014.safetensors", "model.layers.10.mlp.experts.gate_up_proj_scales": "model-00010-of-00014.safetensors", "model.layers.10.mlp.experts.down_proj_bias": "model-00010-of-00014.safetensors", "model.layers.10.mlp.experts.down_proj_blocks": "model-00010-of-00014.safetensors", 
"model.layers.10.mlp.experts.down_proj_scales": "model-00010-of-00014.safetensors", "model.layers.10.post_attention_layernorm.weight": "model-00010-of-00014.safetensors", "model.layers.11.input_layernorm.weight": "model-00010-of-00014.safetensors", "model.layers.11.self_attn.o_proj.bias": "model-00010-of-00014.safetensors", "model.layers.11.self_attn.o_proj.weight": "model-00010-of-00014.safetensors", "model.layers.11.self_attn.q_proj.bias": "model-00010-of-00014.safetensors", "model.layers.11.self_attn.k_proj.bias": "model-00010-of-00014.safetensors", "model.layers.11.self_attn.v_proj.bias": "model-00010-of-00014.safetensors", "model.layers.11.self_attn.q_proj.weight": "model-00010-of-00014.safetensors", "model.layers.11.self_attn.k_proj.weight": "model-00010-of-00014.safetensors", "model.layers.11.self_attn.v_proj.weight": "model-00010-of-00014.safetensors", "model.layers.11.self_attn.sinks": "model-00010-of-00014.safetensors", "model.layers.11.mlp.router.bias": "model-00010-of-00014.safetensors", "model.layers.11.mlp.router.weight": "model-00010-of-00014.safetensors", "model.layers.11.mlp.experts.gate_up_proj_bias": "model-00010-of-00014.safetensors", "model.layers.11.mlp.experts.gate_up_proj_blocks": "model-00010-of-00014.safetensors", "model.layers.11.mlp.experts.gate_up_proj_scales": "model-00010-of-00014.safetensors", "model.layers.11.mlp.experts.down_proj_bias": "model-00010-of-00014.safetensors", "model.layers.11.mlp.experts.down_proj_blocks": "model-00011-of-00014.safetensors", "model.layers.11.mlp.experts.down_proj_scales": "model-00011-of-00014.safetensors", "model.layers.11.post_attention_layernorm.weight": "model-00011-of-00014.safetensors", "model.layers.12.input_layernorm.weight": "model-00011-of-00014.safetensors", "model.layers.12.self_attn.o_proj.bias": "model-00011-of-00014.safetensors", "model.layers.12.self_attn.o_proj.weight": "model-00011-of-00014.safetensors", "model.layers.12.self_attn.q_proj.bias": "model-00011-of-00014.safetensors", 
"model.layers.12.self_attn.k_proj.bias": "model-00011-of-00014.safetensors", "model.layers.12.self_attn.v_proj.bias": "model-00011-of-00014.safetensors", "model.layers.12.self_attn.q_proj.weight": "model-00011-of-00014.safetensors", "model.layers.12.self_attn.k_proj.weight": "model-00011-of-00014.safetensors", "model.layers.12.self_attn.v_proj.weight": "model-00011-of-00014.safetensors", "model.layers.12.self_attn.sinks": "model-00011-of-00014.safetensors", "model.layers.12.mlp.router.bias": "model-00011-of-00014.safetensors", "model.layers.12.mlp.router.weight": "model-00011-of-00014.safetensors", "model.layers.12.mlp.experts.gate_up_proj_bias": "model-00011-of-00014.safetensors", "model.layers.12.mlp.experts.gate_up_proj_blocks": "model-00011-of-00014.safetensors", "model.layers.12.mlp.experts.gate_up_proj_scales": "model-00011-of-00014.safetensors", "model.layers.12.mlp.experts.down_proj_bias": "model-00011-of-00014.safetensors", "model.layers.12.mlp.experts.down_proj_blocks": "model-00011-of-00014.safetensors", "model.layers.12.mlp.experts.down_proj_scales": "model-00011-of-00014.safetensors", "model.layers.12.post_attention_layernorm.weight": "model-00011-of-00014.safetensors", "model.layers.13.input_layernorm.weight": "model-00011-of-00014.safetensors", "model.layers.13.self_attn.o_proj.bias": "model-00011-of-00014.safetensors", "model.layers.13.self_attn.o_proj.weight": "model-00011-of-00014.safetensors", "model.layers.13.self_attn.q_proj.bias": "model-00011-of-00014.safetensors", "model.layers.13.self_attn.k_proj.bias": "model-00011-of-00014.safetensors", "model.layers.13.self_attn.v_proj.bias": "model-00011-of-00014.safetensors", "model.layers.13.self_attn.q_proj.weight": "model-00011-of-00014.safetensors", "model.layers.13.self_attn.k_proj.weight": "model-00011-of-00014.safetensors", "model.layers.13.self_attn.v_proj.weight": "model-00011-of-00014.safetensors", "model.layers.13.self_attn.sinks": "model-00011-of-00014.safetensors", 
"model.layers.13.mlp.router.bias": "model-00011-of-00014.safetensors", "model.layers.13.mlp.router.weight": "model-00011-of-00014.safetensors", "model.layers.13.mlp.experts.gate_up_proj_bias": "model-00011-of-00014.safetensors", "model.layers.13.mlp.experts.gate_up_proj_blocks": "model-00011-of-00014.safetensors", "model.layers.13.mlp.experts.gate_up_proj_scales": "model-00011-of-00014.safetensors", "model.layers.13.mlp.experts.down_proj_bias": "model-00011-of-00014.safetensors", "model.layers.13.mlp.experts.down_proj_blocks": "model-00011-of-00014.safetensors", "model.layers.13.mlp.experts.down_proj_scales": "model-00011-of-00014.safetensors", "model.layers.13.post_attention_layernorm.weight": "model-00011-of-00014.safetensors", "model.layers.14.input_layernorm.weight": "model-00011-of-00014.safetensors", "model.layers.14.self_attn.o_proj.bias": "model-00011-of-00014.safetensors", "model.layers.14.self_attn.o_proj.weight": "model-00011-of-00014.safetensors", "model.layers.14.self_attn.q_proj.bias": "model-00011-of-00014.safetensors", "model.layers.14.self_attn.k_proj.bias": "model-00011-of-00014.safetensors", "model.layers.14.self_attn.v_proj.bias": "model-00011-of-00014.safetensors", "model.layers.14.self_attn.q_proj.weight": "model-00011-of-00014.safetensors", "model.layers.14.self_attn.k_proj.weight": "model-00011-of-00014.safetensors", "model.layers.14.self_attn.v_proj.weight": "model-00011-of-00014.safetensors", "model.layers.14.self_attn.sinks": "model-00011-of-00014.safetensors", "model.layers.14.mlp.router.bias": "model-00011-of-00014.safetensors", "model.layers.14.mlp.router.weight": "model-00011-of-00014.safetensors", "model.layers.14.mlp.experts.gate_up_proj_bias": "model-00011-of-00014.safetensors", "model.embed_tokens.weight": "model-00012-of-00014.safetensors", "model.norm.weight": "model-00012-of-00014.safetensors", "lm_head.weight": "model-00012-of-00014.safetensors", "model.layers.30.mlp.experts.gate_up_proj_blocks": 
"model-00012-of-00014.safetensors", "model.layers.30.mlp.experts.gate_up_proj_scales": "model-00012-of-00014.safetensors", "model.layers.30.mlp.experts.down_proj_bias": "model-00012-of-00014.safetensors", "model.layers.30.mlp.experts.down_proj_blocks": "model-00012-of-00014.safetensors", "model.layers.30.mlp.experts.down_proj_scales": "model-00012-of-00014.safetensors", "model.layers.30.post_attention_layernorm.weight": "model-00012-of-00014.safetensors", "model.layers.31.input_layernorm.weight": "model-00012-of-00014.safetensors", "model.layers.31.self_attn.o_proj.bias": "model-00012-of-00014.safetensors", "model.layers.31.self_attn.o_proj.weight": "model-00012-of-00014.safetensors", "model.layers.31.self_attn.q_proj.bias": "model-00012-of-00014.safetensors", "model.layers.31.self_attn.k_proj.bias": "model-00012-of-00014.safetensors", "model.layers.31.self_attn.v_proj.bias": "model-00012-of-00014.safetensors", "model.layers.31.self_attn.q_proj.weight": "model-00012-of-00014.safetensors", "model.layers.31.self_attn.k_proj.weight": "model-00012-of-00014.safetensors", "model.layers.31.self_attn.v_proj.weight": "model-00012-of-00014.safetensors", "model.layers.31.self_attn.sinks": "model-00012-of-00014.safetensors", "model.layers.31.mlp.router.bias": "model-00012-of-00014.safetensors", "model.layers.31.mlp.router.weight": "model-00012-of-00014.safetensors", "model.layers.31.mlp.experts.gate_up_proj_bias": "model-00012-of-00014.safetensors", "model.layers.31.mlp.experts.gate_up_proj_blocks": "model-00013-of-00014.safetensors", "model.layers.31.mlp.experts.gate_up_proj_scales": "model-00013-of-00014.safetensors", "model.layers.31.mlp.experts.down_proj_bias": "model-00013-of-00014.safetensors", "model.layers.31.mlp.experts.down_proj_blocks": "model-00013-of-00014.safetensors", "model.layers.31.mlp.experts.down_proj_scales": "model-00013-of-00014.safetensors", "model.layers.31.post_attention_layernorm.weight": "model-00013-of-00014.safetensors", 
"model.layers.32.input_layernorm.weight": "model-00013-of-00014.safetensors", "model.layers.32.self_attn.o_proj.bias": "model-00013-of-00014.safetensors", "model.layers.32.self_attn.o_proj.weight": "model-00013-of-00014.safetensors", "model.layers.32.self_attn.q_proj.bias": "model-00013-of-00014.safetensors", "model.layers.32.self_attn.k_proj.bias": "model-00013-of-00014.safetensors", "model.layers.32.self_attn.v_proj.bias": "model-00013-of-00014.safetensors", "model.layers.32.self_attn.q_proj.weight": "model-00013-of-00014.safetensors", "model.layers.32.self_attn.k_proj.weight": "model-00013-of-00014.safetensors", "model.layers.32.self_attn.v_proj.weight": "model-00013-of-00014.safetensors", "model.layers.32.self_attn.sinks": "model-00013-of-00014.safetensors", "model.layers.32.mlp.router.bias": "model-00013-of-00014.safetensors", "model.layers.32.mlp.router.weight": "model-00013-of-00014.safetensors", "model.layers.32.mlp.experts.gate_up_proj_bias": "model-00013-of-00014.safetensors", "model.layers.32.mlp.experts.gate_up_proj_blocks": "model-00013-of-00014.safetensors", "model.layers.32.mlp.experts.gate_up_proj_scales": "model-00013-of-00014.safetensors", "model.layers.32.mlp.experts.down_proj_bias": "model-00013-of-00014.safetensors", "model.layers.32.mlp.experts.down_proj_blocks": "model-00013-of-00014.safetensors", "model.layers.32.mlp.experts.down_proj_scales": "model-00013-of-00014.safetensors", "model.layers.32.post_attention_layernorm.weight": "model-00013-of-00014.safetensors", "model.layers.33.input_layernorm.weight": "model-00013-of-00014.safetensors", "model.layers.33.self_attn.o_proj.bias": "model-00013-of-00014.safetensors", "model.layers.33.self_attn.o_proj.weight": "model-00013-of-00014.safetensors", "model.layers.33.self_attn.q_proj.bias": "model-00013-of-00014.safetensors", "model.layers.33.self_attn.k_proj.bias": "model-00013-of-00014.safetensors", "model.layers.33.self_attn.v_proj.bias": "model-00013-of-00014.safetensors", 
"model.layers.33.self_attn.q_proj.weight": "model-00013-of-00014.safetensors", "model.layers.33.self_attn.k_proj.weight": "model-00013-of-00014.safetensors", "model.layers.33.self_attn.v_proj.weight": "model-00013-of-00014.safetensors", "model.layers.33.self_attn.sinks": "model-00013-of-00014.safetensors", "model.layers.33.mlp.router.bias": "model-00013-of-00014.safetensors", "model.layers.33.mlp.router.weight": "model-00013-of-00014.safetensors", "model.layers.33.mlp.experts.gate_up_proj_bias": "model-00013-of-00014.safetensors", "model.layers.33.mlp.experts.gate_up_proj_blocks": "model-00013-of-00014.safetensors", "model.layers.33.mlp.experts.gate_up_proj_scales": "model-00013-of-00014.safetensors", "model.layers.33.mlp.experts.down_proj_bias": "model-00013-of-00014.safetensors", "model.layers.33.mlp.experts.down_proj_blocks": "model-00014-of-00014.safetensors", "model.layers.33.mlp.experts.down_proj_scales": "model-00014-of-00014.safetensors", "model.layers.33.post_attention_layernorm.weight": "model-00014-of-00014.safetensors", "model.layers.34.input_layernorm.weight": "model-00014-of-00014.safetensors", "model.layers.34.self_attn.o_proj.bias": "model-00014-of-00014.safetensors", "model.layers.34.self_attn.o_proj.weight": "model-00014-of-00014.safetensors", "model.layers.34.self_attn.q_proj.bias": "model-00014-of-00014.safetensors", "model.layers.34.self_attn.k_proj.bias": "model-00014-of-00014.safetensors", "model.layers.34.self_attn.v_proj.bias": "model-00014-of-00014.safetensors", "model.layers.34.self_attn.q_proj.weight": "model-00014-of-00014.safetensors", "model.layers.34.self_attn.k_proj.weight": "model-00014-of-00014.safetensors", "model.layers.34.self_attn.v_proj.weight": "model-00014-of-00014.safetensors", "model.layers.34.self_attn.sinks": "model-00014-of-00014.safetensors", "model.layers.34.mlp.router.bias": "model-00014-of-00014.safetensors", "model.layers.34.mlp.router.weight": "model-00014-of-00014.safetensors", 
"model.layers.34.mlp.experts.gate_up_proj_bias": "model-00014-of-00014.safetensors", "model.layers.34.mlp.experts.gate_up_proj_blocks": "model-00014-of-00014.safetensors", "model.layers.34.mlp.experts.gate_up_proj_scales": "model-00014-of-00014.safetensors", "model.layers.34.mlp.experts.down_proj_bias": "model-00014-of-00014.safetensors", "model.layers.34.mlp.experts.down_proj_blocks": "model-00014-of-00014.safetensors", "model.layers.34.mlp.experts.down_proj_scales": "model-00014-of-00014.safetensors", "model.layers.34.post_attention_layernorm.weight": "model-00014-of-00014.safetensors", "model.layers.35.input_layernorm.weight": "model-00014-of-00014.safetensors", "model.layers.35.self_attn.o_proj.bias": "model-00014-of-00014.safetensors", "model.layers.35.self_attn.o_proj.weight": "model-00014-of-00014.safetensors", "model.layers.35.self_attn.q_proj.bias": "model-00014-of-00014.safetensors", "model.layers.35.self_attn.k_proj.bias": "model-00014-of-00014.safetensors", "model.layers.35.self_attn.v_proj.bias": "model-00014-of-00014.safetensors", "model.layers.35.self_attn.q_proj.weight": "model-00014-of-00014.safetensors", "model.layers.35.self_attn.k_proj.weight": "model-00014-of-00014.safetensors", "model.layers.35.self_attn.v_proj.weight": "model-00014-of-00014.safetensors", "model.layers.35.self_attn.sinks": "model-00014-of-00014.safetensors", "model.layers.35.mlp.router.bias": "model-00014-of-00014.safetensors", "model.layers.35.mlp.router.weight": "model-00014-of-00014.safetensors", "model.layers.35.mlp.experts.gate_up_proj_bias": "model-00014-of-00014.safetensors", "model.layers.35.mlp.experts.gate_up_proj_blocks": "model-00014-of-00014.safetensors", "model.layers.35.mlp.experts.gate_up_proj_scales": "model-00014-of-00014.safetensors", "model.layers.35.mlp.experts.down_proj_bias": "model-00014-of-00014.safetensors", "model.layers.35.mlp.experts.down_proj_blocks": "model-00014-of-00014.safetensors", "model.layers.35.mlp.experts.down_proj_scales": "model-00014-of-00014.safetensors", "model.layers.35.post_attention_layernorm.weight": "model-00014-of-00014.safetensors", "model.layers.4.input_layernorm.weight": "model-00014-of-00014.safetensors", "model.layers.4.self_attn.o_proj.bias": "model-00014-of-00014.safetensors", "model.layers.4.self_attn.o_proj.weight": "model-00014-of-00014.safetensors", "model.layers.4.self_attn.q_proj.bias": "model-00014-of-00014.safetensors", "model.layers.4.self_attn.k_proj.bias": "model-00014-of-00014.safetensors", "model.layers.4.self_attn.v_proj.bias": "model-00014-of-00014.safetensors", "model.layers.4.self_attn.q_proj.weight": "model-00014-of-00014.safetensors", "model.layers.4.self_attn.k_proj.weight": "model-00014-of-00014.safetensors", "model.layers.4.self_attn.v_proj.weight": "model-00014-of-00014.safetensors", "model.layers.4.self_attn.sinks": "model-00014-of-00014.safetensors", "model.layers.4.mlp.router.bias": "model-00014-of-00014.safetensors", "model.layers.4.mlp.router.weight": "model-00014-of-00014.safetensors", "model.layers.4.mlp.experts.gate_up_proj_bias": "model-00014-of-00014.safetensors"}}
\ No newline at end of file
+{
+  "metadata": {
+    "total_parameters": 116829156672,
+    "total_size": 233658313344
+  },
+  "weight_map": {
+    "lm_head.weight": "model-00073-of-00073.safetensors",
+    "model.embed_tokens.weight": "model-00001-of-00073.safetensors",
+    "model.layers.0.input_layernorm.weight": "model-00003-of-00073.safetensors",
+    "model.layers.0.mlp.experts.down_proj": "model-00003-of-00073.safetensors",
+    "model.layers.0.mlp.experts.down_proj_bias": "model-00003-of-00073.safetensors",
+    "model.layers.0.mlp.experts.gate_up_proj": "model-00002-of-00073.safetensors",
+    "model.layers.0.mlp.experts.gate_up_proj_bias": "model-00002-of-00073.safetensors",
+    "model.layers.0.mlp.router.bias": "model-00001-of-00073.safetensors",
+    "model.layers.0.mlp.router.weight": "model-00001-of-00073.safetensors",
+    "model.layers.0.post_attention_layernorm.weight": "model-00003-of-00073.safetensors",
+    "model.layers.0.self_attn.k_proj.bias": "model-00001-of-00073.safetensors",
+    "model.layers.0.self_attn.k_proj.weight": "model-00001-of-00073.safetensors",
+    "model.layers.0.self_attn.o_proj.bias": "model-00001-of-00073.safetensors",
+    "model.layers.0.self_attn.o_proj.weight": "model-00001-of-00073.safetensors",
+    "model.layers.0.self_attn.q_proj.bias": "model-00001-of-00073.safetensors",
+    "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00073.safetensors",
+    "model.layers.0.self_attn.sinks": "model-00001-of-00073.safetensors",
+    "model.layers.0.self_attn.v_proj.bias": "model-00001-of-00073.safetensors",
+    "model.layers.0.self_attn.v_proj.weight": "model-00001-of-00073.safetensors",
+    "model.layers.1.input_layernorm.weight": "model-00005-of-00073.safetensors",
+    "model.layers.1.mlp.experts.down_proj": "model-00005-of-00073.safetensors",
+    "model.layers.1.mlp.experts.down_proj_bias": "model-00005-of-00073.safetensors",
+    "model.layers.1.mlp.experts.gate_up_proj": "model-00004-of-00073.safetensors",
+    "model.layers.1.mlp.experts.gate_up_proj_bias": "model-00004-of-00073.safetensors",
+    "model.layers.1.mlp.router.bias": "model-00003-of-00073.safetensors",
+    "model.layers.1.mlp.router.weight": "model-00003-of-00073.safetensors",
+    "model.layers.1.post_attention_layernorm.weight": "model-00005-of-00073.safetensors",
+    "model.layers.1.self_attn.k_proj.bias": "model-00003-of-00073.safetensors",
+    "model.layers.1.self_attn.k_proj.weight": "model-00003-of-00073.safetensors",
+    "model.layers.1.self_attn.o_proj.bias": "model-00003-of-00073.safetensors",
+    "model.layers.1.self_attn.o_proj.weight": "model-00003-of-00073.safetensors",
+    "model.layers.1.self_attn.q_proj.bias": "model-00003-of-00073.safetensors",
+    "model.layers.1.self_attn.q_proj.weight": "model-00003-of-00073.safetensors",
+    "model.layers.1.self_attn.sinks": "model-00003-of-00073.safetensors",
+    "model.layers.1.self_attn.v_proj.bias": "model-00003-of-00073.safetensors",
+    "model.layers.1.self_attn.v_proj.weight": "model-00003-of-00073.safetensors",
+    "model.layers.10.input_layernorm.weight": "model-00023-of-00073.safetensors",
+    "model.layers.10.mlp.experts.down_proj": "model-00023-of-00073.safetensors",
+    "model.layers.10.mlp.experts.down_proj_bias": "model-00023-of-00073.safetensors",
+    "model.layers.10.mlp.experts.gate_up_proj": "model-00022-of-00073.safetensors",
+    "model.layers.10.mlp.experts.gate_up_proj_bias": "model-00022-of-00073.safetensors",
+    "model.layers.10.mlp.router.bias": "model-00021-of-00073.safetensors",
+    "model.layers.10.mlp.router.weight": "model-00021-of-00073.safetensors",
+    "model.layers.10.post_attention_layernorm.weight": "model-00023-of-00073.safetensors",
+    "model.layers.10.self_attn.k_proj.bias": "model-00021-of-00073.safetensors",
+    "model.layers.10.self_attn.k_proj.weight": "model-00021-of-00073.safetensors",
+    "model.layers.10.self_attn.o_proj.bias": "model-00021-of-00073.safetensors",
+    "model.layers.10.self_attn.o_proj.weight": "model-00021-of-00073.safetensors",
+    "model.layers.10.self_attn.q_proj.bias": "model-00021-of-00073.safetensors",
+    "model.layers.10.self_attn.q_proj.weight": "model-00021-of-00073.safetensors",
+    "model.layers.10.self_attn.sinks": "model-00021-of-00073.safetensors",
+    "model.layers.10.self_attn.v_proj.bias": "model-00021-of-00073.safetensors",
+    "model.layers.10.self_attn.v_proj.weight": "model-00021-of-00073.safetensors",
+    "model.layers.11.input_layernorm.weight": "model-00025-of-00073.safetensors",
+    "model.layers.11.mlp.experts.down_proj": "model-00025-of-00073.safetensors",
+    "model.layers.11.mlp.experts.down_proj_bias": "model-00025-of-00073.safetensors",
+    "model.layers.11.mlp.experts.gate_up_proj": "model-00024-of-00073.safetensors",
+    "model.layers.11.mlp.experts.gate_up_proj_bias": "model-00024-of-00073.safetensors",
+    "model.layers.11.mlp.router.bias": "model-00023-of-00073.safetensors",
+    "model.layers.11.mlp.router.weight": "model-00023-of-00073.safetensors",
+    "model.layers.11.post_attention_layernorm.weight": "model-00025-of-00073.safetensors",
+    "model.layers.11.self_attn.k_proj.bias": "model-00023-of-00073.safetensors",
+    "model.layers.11.self_attn.k_proj.weight": "model-00023-of-00073.safetensors",
+    "model.layers.11.self_attn.o_proj.bias": "model-00023-of-00073.safetensors",
+    "model.layers.11.self_attn.o_proj.weight": "model-00023-of-00073.safetensors",
+    "model.layers.11.self_attn.q_proj.bias": "model-00023-of-00073.safetensors",
+    "model.layers.11.self_attn.q_proj.weight": "model-00023-of-00073.safetensors",
+    "model.layers.11.self_attn.sinks": "model-00023-of-00073.safetensors",
+    "model.layers.11.self_attn.v_proj.bias": "model-00023-of-00073.safetensors",
+    "model.layers.11.self_attn.v_proj.weight": "model-00023-of-00073.safetensors",
+    "model.layers.12.input_layernorm.weight": "model-00027-of-00073.safetensors",
+    "model.layers.12.mlp.experts.down_proj": "model-00027-of-00073.safetensors",
+    "model.layers.12.mlp.experts.down_proj_bias": "model-00027-of-00073.safetensors",
+    "model.layers.12.mlp.experts.gate_up_proj": "model-00026-of-00073.safetensors",
+    "model.layers.12.mlp.experts.gate_up_proj_bias": "model-00026-of-00073.safetensors",
+    "model.layers.12.mlp.router.bias": "model-00025-of-00073.safetensors",
+    "model.layers.12.mlp.router.weight": "model-00025-of-00073.safetensors",
+    "model.layers.12.post_attention_layernorm.weight": "model-00027-of-00073.safetensors",
+    "model.layers.12.self_attn.k_proj.bias": "model-00025-of-00073.safetensors",
+    "model.layers.12.self_attn.k_proj.weight": "model-00025-of-00073.safetensors",
+    "model.layers.12.self_attn.o_proj.bias": "model-00025-of-00073.safetensors",
+    "model.layers.12.self_attn.o_proj.weight": "model-00025-of-00073.safetensors",
+    "model.layers.12.self_attn.q_proj.bias": "model-00025-of-00073.safetensors",
+    "model.layers.12.self_attn.q_proj.weight": "model-00025-of-00073.safetensors",
+    "model.layers.12.self_attn.sinks": "model-00025-of-00073.safetensors",
+    "model.layers.12.self_attn.v_proj.bias": "model-00025-of-00073.safetensors",
+    "model.layers.12.self_attn.v_proj.weight": "model-00025-of-00073.safetensors",
+    "model.layers.13.input_layernorm.weight": "model-00029-of-00073.safetensors",
+    "model.layers.13.mlp.experts.down_proj": "model-00029-of-00073.safetensors",
+    "model.layers.13.mlp.experts.down_proj_bias": "model-00029-of-00073.safetensors",
+    "model.layers.13.mlp.experts.gate_up_proj": "model-00028-of-00073.safetensors",
+    "model.layers.13.mlp.experts.gate_up_proj_bias": "model-00028-of-00073.safetensors",
+    "model.layers.13.mlp.router.bias": "model-00027-of-00073.safetensors",
+    "model.layers.13.mlp.router.weight": "model-00027-of-00073.safetensors",
+    "model.layers.13.post_attention_layernorm.weight": "model-00029-of-00073.safetensors",
+    "model.layers.13.self_attn.k_proj.bias": "model-00027-of-00073.safetensors",
+    "model.layers.13.self_attn.k_proj.weight": "model-00027-of-00073.safetensors",
+    "model.layers.13.self_attn.o_proj.bias": "model-00027-of-00073.safetensors",
+    "model.layers.13.self_attn.o_proj.weight": "model-00027-of-00073.safetensors",
+    "model.layers.13.self_attn.q_proj.bias": "model-00027-of-00073.safetensors",
+    "model.layers.13.self_attn.q_proj.weight": "model-00027-of-00073.safetensors",
+    "model.layers.13.self_attn.sinks": "model-00027-of-00073.safetensors",
+    "model.layers.13.self_attn.v_proj.bias": "model-00027-of-00073.safetensors",
+    "model.layers.13.self_attn.v_proj.weight": "model-00027-of-00073.safetensors",
+    "model.layers.14.input_layernorm.weight": "model-00031-of-00073.safetensors",
+    "model.layers.14.mlp.experts.down_proj": "model-00031-of-00073.safetensors",
+    "model.layers.14.mlp.experts.down_proj_bias": "model-00031-of-00073.safetensors",
+    "model.layers.14.mlp.experts.gate_up_proj": "model-00030-of-00073.safetensors",
+    "model.layers.14.mlp.experts.gate_up_proj_bias": "model-00030-of-00073.safetensors",
+    "model.layers.14.mlp.router.bias": "model-00029-of-00073.safetensors",
+    "model.layers.14.mlp.router.weight": "model-00029-of-00073.safetensors",
+    "model.layers.14.post_attention_layernorm.weight": "model-00031-of-00073.safetensors",
+    "model.layers.14.self_attn.k_proj.bias": "model-00029-of-00073.safetensors",
+    "model.layers.14.self_attn.k_proj.weight": "model-00029-of-00073.safetensors",
+    "model.layers.14.self_attn.o_proj.bias": "model-00029-of-00073.safetensors",
+    "model.layers.14.self_attn.o_proj.weight": "model-00029-of-00073.safetensors",
+    "model.layers.14.self_attn.q_proj.bias": "model-00029-of-00073.safetensors",
+    "model.layers.14.self_attn.q_proj.weight": "model-00029-of-00073.safetensors",
+    "model.layers.14.self_attn.sinks": "model-00029-of-00073.safetensors",
+    "model.layers.14.self_attn.v_proj.bias": "model-00029-of-00073.safetensors",
+    "model.layers.14.self_attn.v_proj.weight": "model-00029-of-00073.safetensors",
+    "model.layers.15.input_layernorm.weight": "model-00033-of-00073.safetensors",
+    "model.layers.15.mlp.experts.down_proj": "model-00033-of-00073.safetensors",
+    "model.layers.15.mlp.experts.down_proj_bias": "model-00033-of-00073.safetensors",
+    "model.layers.15.mlp.experts.gate_up_proj": "model-00032-of-00073.safetensors",
+    "model.layers.15.mlp.experts.gate_up_proj_bias": "model-00032-of-00073.safetensors",
+    "model.layers.15.mlp.router.bias": "model-00031-of-00073.safetensors",
+    "model.layers.15.mlp.router.weight": "model-00031-of-00073.safetensors",
+    "model.layers.15.post_attention_layernorm.weight": "model-00033-of-00073.safetensors",
+    "model.layers.15.self_attn.k_proj.bias": "model-00031-of-00073.safetensors",
+    "model.layers.15.self_attn.k_proj.weight": "model-00031-of-00073.safetensors",
+    "model.layers.15.self_attn.o_proj.bias": "model-00031-of-00073.safetensors",
+    "model.layers.15.self_attn.o_proj.weight": "model-00031-of-00073.safetensors",
+    "model.layers.15.self_attn.q_proj.bias": "model-00031-of-00073.safetensors",
+    "model.layers.15.self_attn.q_proj.weight": "model-00031-of-00073.safetensors",
+    "model.layers.15.self_attn.sinks": "model-00031-of-00073.safetensors",
+    "model.layers.15.self_attn.v_proj.bias": "model-00031-of-00073.safetensors",
+    "model.layers.15.self_attn.v_proj.weight": "model-00031-of-00073.safetensors",
+    "model.layers.16.input_layernorm.weight": "model-00035-of-00073.safetensors",
+    "model.layers.16.mlp.experts.down_proj": "model-00035-of-00073.safetensors",
+    "model.layers.16.mlp.experts.down_proj_bias": "model-00035-of-00073.safetensors",
+    "model.layers.16.mlp.experts.gate_up_proj": "model-00034-of-00073.safetensors",
+    "model.layers.16.mlp.experts.gate_up_proj_bias": "model-00034-of-00073.safetensors",
+    "model.layers.16.mlp.router.bias": "model-00033-of-00073.safetensors",
+    "model.layers.16.mlp.router.weight": "model-00033-of-00073.safetensors",
+    "model.layers.16.post_attention_layernorm.weight": "model-00035-of-00073.safetensors",
+    "model.layers.16.self_attn.k_proj.bias": "model-00033-of-00073.safetensors",
+    "model.layers.16.self_attn.k_proj.weight": "model-00033-of-00073.safetensors",
+    "model.layers.16.self_attn.o_proj.bias": "model-00033-of-00073.safetensors",
+    "model.layers.16.self_attn.o_proj.weight": "model-00033-of-00073.safetensors",
+    "model.layers.16.self_attn.q_proj.bias": "model-00033-of-00073.safetensors",
+    "model.layers.16.self_attn.q_proj.weight": "model-00033-of-00073.safetensors",
+    "model.layers.16.self_attn.sinks": "model-00033-of-00073.safetensors",
+    "model.layers.16.self_attn.v_proj.bias": "model-00033-of-00073.safetensors",
+    "model.layers.16.self_attn.v_proj.weight": "model-00033-of-00073.safetensors",
+    "model.layers.17.input_layernorm.weight": "model-00037-of-00073.safetensors",
+    "model.layers.17.mlp.experts.down_proj": "model-00037-of-00073.safetensors",
+    "model.layers.17.mlp.experts.down_proj_bias": "model-00037-of-00073.safetensors",
+    "model.layers.17.mlp.experts.gate_up_proj": "model-00036-of-00073.safetensors",
+    "model.layers.17.mlp.experts.gate_up_proj_bias": "model-00036-of-00073.safetensors",
+    "model.layers.17.mlp.router.bias": "model-00035-of-00073.safetensors",
+    "model.layers.17.mlp.router.weight": "model-00035-of-00073.safetensors",
+    "model.layers.17.post_attention_layernorm.weight": "model-00037-of-00073.safetensors",
+    "model.layers.17.self_attn.k_proj.bias": "model-00035-of-00073.safetensors",
+    "model.layers.17.self_attn.k_proj.weight": "model-00035-of-00073.safetensors",
+    "model.layers.17.self_attn.o_proj.bias": "model-00035-of-00073.safetensors",
+    "model.layers.17.self_attn.o_proj.weight": "model-00035-of-00073.safetensors",
+    "model.layers.17.self_attn.q_proj.bias": "model-00035-of-00073.safetensors",
+    "model.layers.17.self_attn.q_proj.weight": "model-00035-of-00073.safetensors",
+    "model.layers.17.self_attn.sinks": "model-00035-of-00073.safetensors",
+    "model.layers.17.self_attn.v_proj.bias": "model-00035-of-00073.safetensors",
+    "model.layers.17.self_attn.v_proj.weight": "model-00035-of-00073.safetensors",
+    "model.layers.18.input_layernorm.weight": "model-00039-of-00073.safetensors",
+    "model.layers.18.mlp.experts.down_proj": "model-00039-of-00073.safetensors",
+    "model.layers.18.mlp.experts.down_proj_bias": "model-00039-of-00073.safetensors",
+    "model.layers.18.mlp.experts.gate_up_proj": "model-00038-of-00073.safetensors",
+    "model.layers.18.mlp.experts.gate_up_proj_bias": "model-00038-of-00073.safetensors",
+    "model.layers.18.mlp.router.bias": "model-00037-of-00073.safetensors",
+    "model.layers.18.mlp.router.weight": "model-00037-of-00073.safetensors",
+    "model.layers.18.post_attention_layernorm.weight": "model-00039-of-00073.safetensors",
+    "model.layers.18.self_attn.k_proj.bias": "model-00037-of-00073.safetensors",
+    "model.layers.18.self_attn.k_proj.weight": "model-00037-of-00073.safetensors",
+    "model.layers.18.self_attn.o_proj.bias": "model-00037-of-00073.safetensors",
+    "model.layers.18.self_attn.o_proj.weight": "model-00037-of-00073.safetensors",
+    "model.layers.18.self_attn.q_proj.bias": "model-00037-of-00073.safetensors",
+    "model.layers.18.self_attn.q_proj.weight": "model-00037-of-00073.safetensors",
+    "model.layers.18.self_attn.sinks": "model-00037-of-00073.safetensors",
+    "model.layers.18.self_attn.v_proj.bias": "model-00037-of-00073.safetensors",
+    "model.layers.18.self_attn.v_proj.weight": "model-00037-of-00073.safetensors",
+    "model.layers.19.input_layernorm.weight": "model-00041-of-00073.safetensors",
+    "model.layers.19.mlp.experts.down_proj": "model-00041-of-00073.safetensors",
+    "model.layers.19.mlp.experts.down_proj_bias": "model-00041-of-00073.safetensors",
+    "model.layers.19.mlp.experts.gate_up_proj": "model-00040-of-00073.safetensors",
+    "model.layers.19.mlp.experts.gate_up_proj_bias": "model-00040-of-00073.safetensors",
+    "model.layers.19.mlp.router.bias": "model-00039-of-00073.safetensors",
+    "model.layers.19.mlp.router.weight": "model-00039-of-00073.safetensors",
+    "model.layers.19.post_attention_layernorm.weight": "model-00041-of-00073.safetensors",
+    "model.layers.19.self_attn.k_proj.bias": "model-00039-of-00073.safetensors",
+    "model.layers.19.self_attn.k_proj.weight": "model-00039-of-00073.safetensors",
+    "model.layers.19.self_attn.o_proj.bias": "model-00039-of-00073.safetensors",
+    "model.layers.19.self_attn.o_proj.weight": "model-00039-of-00073.safetensors",
+    "model.layers.19.self_attn.q_proj.bias": "model-00039-of-00073.safetensors",
+    "model.layers.19.self_attn.q_proj.weight": "model-00039-of-00073.safetensors",
+    "model.layers.19.self_attn.sinks": "model-00039-of-00073.safetensors",
+    "model.layers.19.self_attn.v_proj.bias": "model-00039-of-00073.safetensors",
+    "model.layers.19.self_attn.v_proj.weight": "model-00039-of-00073.safetensors",
+    "model.layers.2.input_layernorm.weight": "model-00007-of-00073.safetensors",
+    "model.layers.2.mlp.experts.down_proj": "model-00007-of-00073.safetensors",
+    "model.layers.2.mlp.experts.down_proj_bias": "model-00007-of-00073.safetensors",
+    "model.layers.2.mlp.experts.gate_up_proj": "model-00006-of-00073.safetensors",
+    "model.layers.2.mlp.experts.gate_up_proj_bias": "model-00006-of-00073.safetensors",
+    "model.layers.2.mlp.router.bias": "model-00005-of-00073.safetensors",
+    "model.layers.2.mlp.router.weight": "model-00005-of-00073.safetensors",
+    "model.layers.2.post_attention_layernorm.weight": "model-00007-of-00073.safetensors",
+    "model.layers.2.self_attn.k_proj.bias": "model-00005-of-00073.safetensors",
+    "model.layers.2.self_attn.k_proj.weight": "model-00005-of-00073.safetensors",
+    "model.layers.2.self_attn.o_proj.bias": "model-00005-of-00073.safetensors",
+    "model.layers.2.self_attn.o_proj.weight": "model-00005-of-00073.safetensors",
+    "model.layers.2.self_attn.q_proj.bias": "model-00005-of-00073.safetensors",
+    "model.layers.2.self_attn.q_proj.weight": "model-00005-of-00073.safetensors",
+    "model.layers.2.self_attn.sinks": "model-00005-of-00073.safetensors",
+    "model.layers.2.self_attn.v_proj.bias": "model-00005-of-00073.safetensors",
+    "model.layers.2.self_attn.v_proj.weight": "model-00005-of-00073.safetensors",
+    "model.layers.20.input_layernorm.weight": "model-00043-of-00073.safetensors",
+    "model.layers.20.mlp.experts.down_proj": "model-00043-of-00073.safetensors",
+    "model.layers.20.mlp.experts.down_proj_bias": "model-00043-of-00073.safetensors",
+    "model.layers.20.mlp.experts.gate_up_proj": "model-00042-of-00073.safetensors",
+    "model.layers.20.mlp.experts.gate_up_proj_bias": "model-00042-of-00073.safetensors",
+    "model.layers.20.mlp.router.bias": "model-00041-of-00073.safetensors",
+    "model.layers.20.mlp.router.weight": "model-00041-of-00073.safetensors",
+    "model.layers.20.post_attention_layernorm.weight": "model-00043-of-00073.safetensors",
+    "model.layers.20.self_attn.k_proj.bias": "model-00041-of-00073.safetensors",
+    "model.layers.20.self_attn.k_proj.weight": "model-00041-of-00073.safetensors",
+    "model.layers.20.self_attn.o_proj.bias": "model-00041-of-00073.safetensors",
+    "model.layers.20.self_attn.o_proj.weight": "model-00041-of-00073.safetensors",
+    "model.layers.20.self_attn.q_proj.bias": "model-00041-of-00073.safetensors",
+    "model.layers.20.self_attn.q_proj.weight": "model-00041-of-00073.safetensors",
+    "model.layers.20.self_attn.sinks": "model-00041-of-00073.safetensors",
+    "model.layers.20.self_attn.v_proj.bias": "model-00041-of-00073.safetensors",
+    "model.layers.20.self_attn.v_proj.weight": "model-00041-of-00073.safetensors",
+    "model.layers.21.input_layernorm.weight": "model-00045-of-00073.safetensors",
+    "model.layers.21.mlp.experts.down_proj": "model-00045-of-00073.safetensors",
+    "model.layers.21.mlp.experts.down_proj_bias": "model-00045-of-00073.safetensors",
+    "model.layers.21.mlp.experts.gate_up_proj": "model-00044-of-00073.safetensors",
+    "model.layers.21.mlp.experts.gate_up_proj_bias": "model-00044-of-00073.safetensors",
+    "model.layers.21.mlp.router.bias": "model-00043-of-00073.safetensors",
+    "model.layers.21.mlp.router.weight": "model-00043-of-00073.safetensors",
+    "model.layers.21.post_attention_layernorm.weight": "model-00045-of-00073.safetensors",
+    "model.layers.21.self_attn.k_proj.bias": "model-00043-of-00073.safetensors",
+    "model.layers.21.self_attn.k_proj.weight": "model-00043-of-00073.safetensors",
+    "model.layers.21.self_attn.o_proj.bias": "model-00043-of-00073.safetensors",
+    "model.layers.21.self_attn.o_proj.weight": "model-00043-of-00073.safetensors",
+    "model.layers.21.self_attn.q_proj.bias": "model-00043-of-00073.safetensors",
+    "model.layers.21.self_attn.q_proj.weight": "model-00043-of-00073.safetensors",
+    "model.layers.21.self_attn.sinks": "model-00043-of-00073.safetensors",
+    "model.layers.21.self_attn.v_proj.bias": "model-00043-of-00073.safetensors",
+    "model.layers.21.self_attn.v_proj.weight": "model-00043-of-00073.safetensors",
+    "model.layers.22.input_layernorm.weight": "model-00047-of-00073.safetensors",
+    "model.layers.22.mlp.experts.down_proj": "model-00047-of-00073.safetensors",
+    "model.layers.22.mlp.experts.down_proj_bias": "model-00047-of-00073.safetensors",
+    "model.layers.22.mlp.experts.gate_up_proj": "model-00046-of-00073.safetensors",
+    "model.layers.22.mlp.experts.gate_up_proj_bias": "model-00046-of-00073.safetensors",
+    "model.layers.22.mlp.router.bias": "model-00045-of-00073.safetensors",
+    "model.layers.22.mlp.router.weight": "model-00045-of-00073.safetensors",
+    "model.layers.22.post_attention_layernorm.weight": "model-00047-of-00073.safetensors",
+    "model.layers.22.self_attn.k_proj.bias": "model-00045-of-00073.safetensors",
+    "model.layers.22.self_attn.k_proj.weight": "model-00045-of-00073.safetensors",
+    "model.layers.22.self_attn.o_proj.bias": "model-00045-of-00073.safetensors",
+    "model.layers.22.self_attn.o_proj.weight": "model-00045-of-00073.safetensors",
+    "model.layers.22.self_attn.q_proj.bias": "model-00045-of-00073.safetensors",
+    "model.layers.22.self_attn.q_proj.weight": "model-00045-of-00073.safetensors",
+    "model.layers.22.self_attn.sinks": "model-00045-of-00073.safetensors",
+    "model.layers.22.self_attn.v_proj.bias": "model-00045-of-00073.safetensors",
+    "model.layers.22.self_attn.v_proj.weight": "model-00045-of-00073.safetensors",
+    "model.layers.23.input_layernorm.weight": "model-00049-of-00073.safetensors",
+    "model.layers.23.mlp.experts.down_proj": "model-00049-of-00073.safetensors",
+    "model.layers.23.mlp.experts.down_proj_bias": "model-00049-of-00073.safetensors",
+    "model.layers.23.mlp.experts.gate_up_proj": "model-00048-of-00073.safetensors",
+    "model.layers.23.mlp.experts.gate_up_proj_bias": "model-00048-of-00073.safetensors",
+    "model.layers.23.mlp.router.bias": "model-00047-of-00073.safetensors",
+    "model.layers.23.mlp.router.weight": "model-00047-of-00073.safetensors",
+    "model.layers.23.post_attention_layernorm.weight": "model-00049-of-00073.safetensors",
+    "model.layers.23.self_attn.k_proj.bias": "model-00047-of-00073.safetensors",
+    "model.layers.23.self_attn.k_proj.weight": "model-00047-of-00073.safetensors",
+    "model.layers.23.self_attn.o_proj.bias": "model-00047-of-00073.safetensors",
+    "model.layers.23.self_attn.o_proj.weight": "model-00047-of-00073.safetensors",
+    "model.layers.23.self_attn.q_proj.bias": "model-00047-of-00073.safetensors",
+    "model.layers.23.self_attn.q_proj.weight": "model-00047-of-00073.safetensors",
+    "model.layers.23.self_attn.sinks": "model-00047-of-00073.safetensors",
+    "model.layers.23.self_attn.v_proj.bias": "model-00047-of-00073.safetensors",
+    "model.layers.23.self_attn.v_proj.weight": "model-00047-of-00073.safetensors",
+    "model.layers.24.input_layernorm.weight": "model-00051-of-00073.safetensors",
+    "model.layers.24.mlp.experts.down_proj": "model-00051-of-00073.safetensors",
+    "model.layers.24.mlp.experts.down_proj_bias": "model-00051-of-00073.safetensors",
+    "model.layers.24.mlp.experts.gate_up_proj": "model-00050-of-00073.safetensors",
+    "model.layers.24.mlp.experts.gate_up_proj_bias": "model-00050-of-00073.safetensors",
+    "model.layers.24.mlp.router.bias": "model-00049-of-00073.safetensors",
+    "model.layers.24.mlp.router.weight": "model-00049-of-00073.safetensors",
+    "model.layers.24.post_attention_layernorm.weight": "model-00051-of-00073.safetensors",
+    "model.layers.24.self_attn.k_proj.bias": "model-00049-of-00073.safetensors",
+    "model.layers.24.self_attn.k_proj.weight": "model-00049-of-00073.safetensors",
+    "model.layers.24.self_attn.o_proj.bias": "model-00049-of-00073.safetensors",
+    "model.layers.24.self_attn.o_proj.weight": "model-00049-of-00073.safetensors",
+    "model.layers.24.self_attn.q_proj.bias": "model-00049-of-00073.safetensors",
+    "model.layers.24.self_attn.q_proj.weight": "model-00049-of-00073.safetensors",
+    "model.layers.24.self_attn.sinks": "model-00049-of-00073.safetensors",
+    "model.layers.24.self_attn.v_proj.bias": "model-00049-of-00073.safetensors",
+    "model.layers.24.self_attn.v_proj.weight": "model-00049-of-00073.safetensors",
+    "model.layers.25.input_layernorm.weight": "model-00053-of-00073.safetensors",
+    "model.layers.25.mlp.experts.down_proj": "model-00053-of-00073.safetensors",
+    "model.layers.25.mlp.experts.down_proj_bias": "model-00053-of-00073.safetensors",
+    "model.layers.25.mlp.experts.gate_up_proj": "model-00052-of-00073.safetensors",
+    "model.layers.25.mlp.experts.gate_up_proj_bias": "model-00052-of-00073.safetensors",
+    "model.layers.25.mlp.router.bias": "model-00051-of-00073.safetensors",
+    "model.layers.25.mlp.router.weight": "model-00051-of-00073.safetensors",
+    "model.layers.25.post_attention_layernorm.weight": "model-00053-of-00073.safetensors",
+    "model.layers.25.self_attn.k_proj.bias": "model-00051-of-00073.safetensors",
+    "model.layers.25.self_attn.k_proj.weight": "model-00051-of-00073.safetensors",
+    "model.layers.25.self_attn.o_proj.bias": "model-00051-of-00073.safetensors",
+    "model.layers.25.self_attn.o_proj.weight": "model-00051-of-00073.safetensors",
+    "model.layers.25.self_attn.q_proj.bias": "model-00051-of-00073.safetensors",
+    "model.layers.25.self_attn.q_proj.weight": "model-00051-of-00073.safetensors",
+    "model.layers.25.self_attn.sinks": "model-00051-of-00073.safetensors",
+    "model.layers.25.self_attn.v_proj.bias": "model-00051-of-00073.safetensors",
+    "model.layers.25.self_attn.v_proj.weight": "model-00051-of-00073.safetensors",
+    "model.layers.26.input_layernorm.weight": "model-00055-of-00073.safetensors",
+    "model.layers.26.mlp.experts.down_proj": "model-00055-of-00073.safetensors",
+    "model.layers.26.mlp.experts.down_proj_bias": "model-00055-of-00073.safetensors",
+    "model.layers.26.mlp.experts.gate_up_proj": "model-00054-of-00073.safetensors",
+    "model.layers.26.mlp.experts.gate_up_proj_bias": "model-00054-of-00073.safetensors",
+    "model.layers.26.mlp.router.bias": "model-00053-of-00073.safetensors",
+    "model.layers.26.mlp.router.weight": "model-00053-of-00073.safetensors",
+    "model.layers.26.post_attention_layernorm.weight": "model-00055-of-00073.safetensors",
+    "model.layers.26.self_attn.k_proj.bias": "model-00053-of-00073.safetensors",
+    "model.layers.26.self_attn.k_proj.weight": "model-00053-of-00073.safetensors",
+    "model.layers.26.self_attn.o_proj.bias": "model-00053-of-00073.safetensors",
+    "model.layers.26.self_attn.o_proj.weight": "model-00053-of-00073.safetensors",
+    "model.layers.26.self_attn.q_proj.bias": "model-00053-of-00073.safetensors",
+    "model.layers.26.self_attn.q_proj.weight": "model-00053-of-00073.safetensors",
+    "model.layers.26.self_attn.sinks": "model-00053-of-00073.safetensors",
+    "model.layers.26.self_attn.v_proj.bias": "model-00053-of-00073.safetensors",
+    "model.layers.26.self_attn.v_proj.weight": "model-00053-of-00073.safetensors",
+    "model.layers.27.input_layernorm.weight": "model-00057-of-00073.safetensors",
+    "model.layers.27.mlp.experts.down_proj": "model-00057-of-00073.safetensors",
+    "model.layers.27.mlp.experts.down_proj_bias": "model-00057-of-00073.safetensors",
+    "model.layers.27.mlp.experts.gate_up_proj": "model-00056-of-00073.safetensors",
+    "model.layers.27.mlp.experts.gate_up_proj_bias": "model-00056-of-00073.safetensors",
+    "model.layers.27.mlp.router.bias": "model-00055-of-00073.safetensors",
+    "model.layers.27.mlp.router.weight": "model-00055-of-00073.safetensors",
+    "model.layers.27.post_attention_layernorm.weight": "model-00057-of-00073.safetensors",
+    "model.layers.27.self_attn.k_proj.bias": "model-00055-of-00073.safetensors",
+    "model.layers.27.self_attn.k_proj.weight": "model-00055-of-00073.safetensors",
+    "model.layers.27.self_attn.o_proj.bias": "model-00055-of-00073.safetensors",
+    "model.layers.27.self_attn.o_proj.weight": "model-00055-of-00073.safetensors",
+    "model.layers.27.self_attn.q_proj.bias": "model-00055-of-00073.safetensors",
+    "model.layers.27.self_attn.q_proj.weight": "model-00055-of-00073.safetensors",
+    "model.layers.27.self_attn.sinks": "model-00055-of-00073.safetensors",
+    "model.layers.27.self_attn.v_proj.bias": "model-00055-of-00073.safetensors",
+    "model.layers.27.self_attn.v_proj.weight": "model-00055-of-00073.safetensors",
+    "model.layers.28.input_layernorm.weight": "model-00059-of-00073.safetensors",
+    "model.layers.28.mlp.experts.down_proj": "model-00059-of-00073.safetensors",
+    "model.layers.28.mlp.experts.down_proj_bias": "model-00059-of-00073.safetensors",
+    "model.layers.28.mlp.experts.gate_up_proj": "model-00058-of-00073.safetensors",
+    "model.layers.28.mlp.experts.gate_up_proj_bias": "model-00058-of-00073.safetensors",
+    "model.layers.28.mlp.router.bias": "model-00057-of-00073.safetensors",
+    "model.layers.28.mlp.router.weight": "model-00057-of-00073.safetensors",
+    "model.layers.28.post_attention_layernorm.weight": "model-00059-of-00073.safetensors",
+    "model.layers.28.self_attn.k_proj.bias": "model-00057-of-00073.safetensors",
+    "model.layers.28.self_attn.k_proj.weight": "model-00057-of-00073.safetensors",
+    "model.layers.28.self_attn.o_proj.bias": "model-00057-of-00073.safetensors",
+    "model.layers.28.self_attn.o_proj.weight": "model-00057-of-00073.safetensors",
+    "model.layers.28.self_attn.q_proj.bias": "model-00057-of-00073.safetensors",
+    "model.layers.28.self_attn.q_proj.weight": "model-00057-of-00073.safetensors",
+    "model.layers.28.self_attn.sinks": "model-00057-of-00073.safetensors",
+    "model.layers.28.self_attn.v_proj.bias": "model-00057-of-00073.safetensors",
+    "model.layers.28.self_attn.v_proj.weight": "model-00057-of-00073.safetensors",
+    "model.layers.29.input_layernorm.weight": "model-00061-of-00073.safetensors",
+    "model.layers.29.mlp.experts.down_proj": "model-00061-of-00073.safetensors",
+    "model.layers.29.mlp.experts.down_proj_bias": "model-00061-of-00073.safetensors",
+    "model.layers.29.mlp.experts.gate_up_proj": "model-00060-of-00073.safetensors",
+    "model.layers.29.mlp.experts.gate_up_proj_bias": "model-00060-of-00073.safetensors",
+    "model.layers.29.mlp.router.bias": "model-00059-of-00073.safetensors",
+    "model.layers.29.mlp.router.weight": "model-00059-of-00073.safetensors",
+    "model.layers.29.post_attention_layernorm.weight": "model-00061-of-00073.safetensors",
+    "model.layers.29.self_attn.k_proj.bias": "model-00059-of-00073.safetensors",
+    "model.layers.29.self_attn.k_proj.weight": "model-00059-of-00073.safetensors",
+    "model.layers.29.self_attn.o_proj.bias": "model-00059-of-00073.safetensors",
+    "model.layers.29.self_attn.o_proj.weight": "model-00059-of-00073.safetensors",
+    "model.layers.29.self_attn.q_proj.bias": "model-00059-of-00073.safetensors",
+    "model.layers.29.self_attn.q_proj.weight": "model-00059-of-00073.safetensors",
+    "model.layers.29.self_attn.sinks": "model-00059-of-00073.safetensors",
+    "model.layers.29.self_attn.v_proj.bias": "model-00059-of-00073.safetensors",
+    "model.layers.29.self_attn.v_proj.weight": "model-00059-of-00073.safetensors",
+    "model.layers.3.input_layernorm.weight": "model-00009-of-00073.safetensors",
+    "model.layers.3.mlp.experts.down_proj": "model-00009-of-00073.safetensors",
+    "model.layers.3.mlp.experts.down_proj_bias": "model-00009-of-00073.safetensors",
+    "model.layers.3.mlp.experts.gate_up_proj": "model-00008-of-00073.safetensors",
+    "model.layers.3.mlp.experts.gate_up_proj_bias": "model-00008-of-00073.safetensors",
+    "model.layers.3.mlp.router.bias": "model-00007-of-00073.safetensors",
+    "model.layers.3.mlp.router.weight": "model-00007-of-00073.safetensors",
+    "model.layers.3.post_attention_layernorm.weight": "model-00009-of-00073.safetensors",
+    "model.layers.3.self_attn.k_proj.bias": "model-00007-of-00073.safetensors",
+    "model.layers.3.self_attn.k_proj.weight": "model-00007-of-00073.safetensors",
+    "model.layers.3.self_attn.o_proj.bias": "model-00007-of-00073.safetensors",
+    "model.layers.3.self_attn.o_proj.weight": "model-00007-of-00073.safetensors",
+    "model.layers.3.self_attn.q_proj.bias": "model-00007-of-00073.safetensors",
+    "model.layers.3.self_attn.q_proj.weight": "model-00007-of-00073.safetensors",
+    "model.layers.3.self_attn.sinks": "model-00007-of-00073.safetensors",
+    "model.layers.3.self_attn.v_proj.bias": "model-00007-of-00073.safetensors",
+    "model.layers.3.self_attn.v_proj.weight": "model-00007-of-00073.safetensors",
+    "model.layers.30.input_layernorm.weight": "model-00063-of-00073.safetensors",
+    "model.layers.30.mlp.experts.down_proj": "model-00063-of-00073.safetensors",
+    "model.layers.30.mlp.experts.down_proj_bias": "model-00063-of-00073.safetensors",
+    "model.layers.30.mlp.experts.gate_up_proj": "model-00062-of-00073.safetensors",
+    "model.layers.30.mlp.experts.gate_up_proj_bias": "model-00062-of-00073.safetensors",
+    "model.layers.30.mlp.router.bias": "model-00061-of-00073.safetensors",
+    "model.layers.30.mlp.router.weight": "model-00061-of-00073.safetensors",
+    "model.layers.30.post_attention_layernorm.weight": "model-00063-of-00073.safetensors",
+    "model.layers.30.self_attn.k_proj.bias": "model-00061-of-00073.safetensors",
+    "model.layers.30.self_attn.k_proj.weight": "model-00061-of-00073.safetensors",
+    "model.layers.30.self_attn.o_proj.bias": "model-00061-of-00073.safetensors",
+    "model.layers.30.self_attn.o_proj.weight": "model-00061-of-00073.safetensors",
+    "model.layers.30.self_attn.q_proj.bias": "model-00061-of-00073.safetensors",
+    "model.layers.30.self_attn.q_proj.weight": "model-00061-of-00073.safetensors",
+    "model.layers.30.self_attn.sinks": "model-00061-of-00073.safetensors",
+    "model.layers.30.self_attn.v_proj.bias": "model-00061-of-00073.safetensors",
+    "model.layers.30.self_attn.v_proj.weight": "model-00061-of-00073.safetensors",
+    "model.layers.31.input_layernorm.weight": "model-00065-of-00073.safetensors",
+    "model.layers.31.mlp.experts.down_proj": "model-00065-of-00073.safetensors",
+    "model.layers.31.mlp.experts.down_proj_bias": "model-00065-of-00073.safetensors",
+    "model.layers.31.mlp.experts.gate_up_proj": "model-00064-of-00073.safetensors",
+    "model.layers.31.mlp.experts.gate_up_proj_bias": "model-00064-of-00073.safetensors",
+    "model.layers.31.mlp.router.bias": "model-00063-of-00073.safetensors",
+    "model.layers.31.mlp.router.weight": "model-00063-of-00073.safetensors",
+    "model.layers.31.post_attention_layernorm.weight": "model-00065-of-00073.safetensors",
+    "model.layers.31.self_attn.k_proj.bias": "model-00063-of-00073.safetensors",
+    "model.layers.31.self_attn.k_proj.weight": "model-00063-of-00073.safetensors",
+    "model.layers.31.self_attn.o_proj.bias": "model-00063-of-00073.safetensors",
+    "model.layers.31.self_attn.o_proj.weight": "model-00063-of-00073.safetensors",
+    "model.layers.31.self_attn.q_proj.bias": "model-00063-of-00073.safetensors",
+    "model.layers.31.self_attn.q_proj.weight": "model-00063-of-00073.safetensors",
+    "model.layers.31.self_attn.sinks": "model-00063-of-00073.safetensors",
+    "model.layers.31.self_attn.v_proj.bias": "model-00063-of-00073.safetensors",
+    "model.layers.31.self_attn.v_proj.weight": "model-00063-of-00073.safetensors",
+    "model.layers.32.input_layernorm.weight": "model-00067-of-00073.safetensors",
+    "model.layers.32.mlp.experts.down_proj": "model-00067-of-00073.safetensors",
+    "model.layers.32.mlp.experts.down_proj_bias": "model-00067-of-00073.safetensors",
+    "model.layers.32.mlp.experts.gate_up_proj": "model-00066-of-00073.safetensors",
+    "model.layers.32.mlp.experts.gate_up_proj_bias": "model-00066-of-00073.safetensors",
+    "model.layers.32.mlp.router.bias": "model-00065-of-00073.safetensors",
+    "model.layers.32.mlp.router.weight": "model-00065-of-00073.safetensors",
+    "model.layers.32.post_attention_layernorm.weight": "model-00067-of-00073.safetensors",
+    "model.layers.32.self_attn.k_proj.bias": "model-00065-of-00073.safetensors",
+    "model.layers.32.self_attn.k_proj.weight": "model-00065-of-00073.safetensors",
+    "model.layers.32.self_attn.o_proj.bias": "model-00065-of-00073.safetensors",
+    "model.layers.32.self_attn.o_proj.weight": "model-00065-of-00073.safetensors",
+    "model.layers.32.self_attn.q_proj.bias": "model-00065-of-00073.safetensors",
+    "model.layers.32.self_attn.q_proj.weight": "model-00065-of-00073.safetensors",
+    "model.layers.32.self_attn.sinks": "model-00065-of-00073.safetensors",
+    "model.layers.32.self_attn.v_proj.bias": "model-00065-of-00073.safetensors",
+    "model.layers.32.self_attn.v_proj.weight": "model-00065-of-00073.safetensors",
+    "model.layers.33.input_layernorm.weight": "model-00069-of-00073.safetensors",
+    "model.layers.33.mlp.experts.down_proj": "model-00069-of-00073.safetensors",
+    "model.layers.33.mlp.experts.down_proj_bias": "model-00069-of-00073.safetensors",
+    "model.layers.33.mlp.experts.gate_up_proj": "model-00068-of-00073.safetensors",
+    "model.layers.33.mlp.experts.gate_up_proj_bias": "model-00068-of-00073.safetensors",
+    "model.layers.33.mlp.router.bias": "model-00067-of-00073.safetensors",
+    "model.layers.33.mlp.router.weight": "model-00067-of-00073.safetensors",
+    "model.layers.33.post_attention_layernorm.weight": "model-00069-of-00073.safetensors",
+    "model.layers.33.self_attn.k_proj.bias": "model-00067-of-00073.safetensors",
+    "model.layers.33.self_attn.k_proj.weight": "model-00067-of-00073.safetensors",
+    "model.layers.33.self_attn.o_proj.bias": "model-00067-of-00073.safetensors",
+    "model.layers.33.self_attn.o_proj.weight": "model-00067-of-00073.safetensors",
+    "model.layers.33.self_attn.q_proj.bias": "model-00067-of-00073.safetensors",
+    "model.layers.33.self_attn.q_proj.weight": "model-00067-of-00073.safetensors",
+    "model.layers.33.self_attn.sinks": "model-00067-of-00073.safetensors",
+    "model.layers.33.self_attn.v_proj.bias": "model-00067-of-00073.safetensors",
+    "model.layers.33.self_attn.v_proj.weight": "model-00067-of-00073.safetensors",
+    "model.layers.34.input_layernorm.weight": "model-00071-of-00073.safetensors",
+    "model.layers.34.mlp.experts.down_proj":
"model-00071-of-00073.safetensors", + "model.layers.34.mlp.experts.down_proj_bias": "model-00071-of-00073.safetensors", + "model.layers.34.mlp.experts.gate_up_proj": "model-00070-of-00073.safetensors", + "model.layers.34.mlp.experts.gate_up_proj_bias": "model-00070-of-00073.safetensors", + "model.layers.34.mlp.router.bias": "model-00069-of-00073.safetensors", + "model.layers.34.mlp.router.weight": "model-00069-of-00073.safetensors", + "model.layers.34.post_attention_layernorm.weight": "model-00071-of-00073.safetensors", + "model.layers.34.self_attn.k_proj.bias": "model-00069-of-00073.safetensors", + "model.layers.34.self_attn.k_proj.weight": "model-00069-of-00073.safetensors", + "model.layers.34.self_attn.o_proj.bias": "model-00069-of-00073.safetensors", + "model.layers.34.self_attn.o_proj.weight": "model-00069-of-00073.safetensors", + "model.layers.34.self_attn.q_proj.bias": "model-00069-of-00073.safetensors", + "model.layers.34.self_attn.q_proj.weight": "model-00069-of-00073.safetensors", + "model.layers.34.self_attn.sinks": "model-00069-of-00073.safetensors", + "model.layers.34.self_attn.v_proj.bias": "model-00069-of-00073.safetensors", + "model.layers.34.self_attn.v_proj.weight": "model-00069-of-00073.safetensors", + "model.layers.35.input_layernorm.weight": "model-00073-of-00073.safetensors", + "model.layers.35.mlp.experts.down_proj": "model-00073-of-00073.safetensors", + "model.layers.35.mlp.experts.down_proj_bias": "model-00073-of-00073.safetensors", + "model.layers.35.mlp.experts.gate_up_proj": "model-00072-of-00073.safetensors", + "model.layers.35.mlp.experts.gate_up_proj_bias": "model-00072-of-00073.safetensors", + "model.layers.35.mlp.router.bias": "model-00071-of-00073.safetensors", + "model.layers.35.mlp.router.weight": "model-00071-of-00073.safetensors", + "model.layers.35.post_attention_layernorm.weight": "model-00073-of-00073.safetensors", + "model.layers.35.self_attn.k_proj.bias": "model-00071-of-00073.safetensors", + 
"model.layers.35.self_attn.k_proj.weight": "model-00071-of-00073.safetensors", + "model.layers.35.self_attn.o_proj.bias": "model-00071-of-00073.safetensors", + "model.layers.35.self_attn.o_proj.weight": "model-00071-of-00073.safetensors", + "model.layers.35.self_attn.q_proj.bias": "model-00071-of-00073.safetensors", + "model.layers.35.self_attn.q_proj.weight": "model-00071-of-00073.safetensors", + "model.layers.35.self_attn.sinks": "model-00071-of-00073.safetensors", + "model.layers.35.self_attn.v_proj.bias": "model-00071-of-00073.safetensors", + "model.layers.35.self_attn.v_proj.weight": "model-00071-of-00073.safetensors", + "model.layers.4.input_layernorm.weight": "model-00011-of-00073.safetensors", + "model.layers.4.mlp.experts.down_proj": "model-00011-of-00073.safetensors", + "model.layers.4.mlp.experts.down_proj_bias": "model-00011-of-00073.safetensors", + "model.layers.4.mlp.experts.gate_up_proj": "model-00010-of-00073.safetensors", + "model.layers.4.mlp.experts.gate_up_proj_bias": "model-00010-of-00073.safetensors", + "model.layers.4.mlp.router.bias": "model-00009-of-00073.safetensors", + "model.layers.4.mlp.router.weight": "model-00009-of-00073.safetensors", + "model.layers.4.post_attention_layernorm.weight": "model-00011-of-00073.safetensors", + "model.layers.4.self_attn.k_proj.bias": "model-00009-of-00073.safetensors", + "model.layers.4.self_attn.k_proj.weight": "model-00009-of-00073.safetensors", + "model.layers.4.self_attn.o_proj.bias": "model-00009-of-00073.safetensors", + "model.layers.4.self_attn.o_proj.weight": "model-00009-of-00073.safetensors", + "model.layers.4.self_attn.q_proj.bias": "model-00009-of-00073.safetensors", + "model.layers.4.self_attn.q_proj.weight": "model-00009-of-00073.safetensors", + "model.layers.4.self_attn.sinks": "model-00009-of-00073.safetensors", + "model.layers.4.self_attn.v_proj.bias": "model-00009-of-00073.safetensors", + "model.layers.4.self_attn.v_proj.weight": "model-00009-of-00073.safetensors", + 
"model.layers.5.input_layernorm.weight": "model-00013-of-00073.safetensors", + "model.layers.5.mlp.experts.down_proj": "model-00013-of-00073.safetensors", + "model.layers.5.mlp.experts.down_proj_bias": "model-00013-of-00073.safetensors", + "model.layers.5.mlp.experts.gate_up_proj": "model-00012-of-00073.safetensors", + "model.layers.5.mlp.experts.gate_up_proj_bias": "model-00012-of-00073.safetensors", + "model.layers.5.mlp.router.bias": "model-00011-of-00073.safetensors", + "model.layers.5.mlp.router.weight": "model-00011-of-00073.safetensors", + "model.layers.5.post_attention_layernorm.weight": "model-00013-of-00073.safetensors", + "model.layers.5.self_attn.k_proj.bias": "model-00011-of-00073.safetensors", + "model.layers.5.self_attn.k_proj.weight": "model-00011-of-00073.safetensors", + "model.layers.5.self_attn.o_proj.bias": "model-00011-of-00073.safetensors", + "model.layers.5.self_attn.o_proj.weight": "model-00011-of-00073.safetensors", + "model.layers.5.self_attn.q_proj.bias": "model-00011-of-00073.safetensors", + "model.layers.5.self_attn.q_proj.weight": "model-00011-of-00073.safetensors", + "model.layers.5.self_attn.sinks": "model-00011-of-00073.safetensors", + "model.layers.5.self_attn.v_proj.bias": "model-00011-of-00073.safetensors", + "model.layers.5.self_attn.v_proj.weight": "model-00011-of-00073.safetensors", + "model.layers.6.input_layernorm.weight": "model-00015-of-00073.safetensors", + "model.layers.6.mlp.experts.down_proj": "model-00015-of-00073.safetensors", + "model.layers.6.mlp.experts.down_proj_bias": "model-00015-of-00073.safetensors", + "model.layers.6.mlp.experts.gate_up_proj": "model-00014-of-00073.safetensors", + "model.layers.6.mlp.experts.gate_up_proj_bias": "model-00014-of-00073.safetensors", + "model.layers.6.mlp.router.bias": "model-00013-of-00073.safetensors", + "model.layers.6.mlp.router.weight": "model-00013-of-00073.safetensors", + "model.layers.6.post_attention_layernorm.weight": "model-00015-of-00073.safetensors", + 
"model.layers.6.self_attn.k_proj.bias": "model-00013-of-00073.safetensors", + "model.layers.6.self_attn.k_proj.weight": "model-00013-of-00073.safetensors", + "model.layers.6.self_attn.o_proj.bias": "model-00013-of-00073.safetensors", + "model.layers.6.self_attn.o_proj.weight": "model-00013-of-00073.safetensors", + "model.layers.6.self_attn.q_proj.bias": "model-00013-of-00073.safetensors", + "model.layers.6.self_attn.q_proj.weight": "model-00013-of-00073.safetensors", + "model.layers.6.self_attn.sinks": "model-00013-of-00073.safetensors", + "model.layers.6.self_attn.v_proj.bias": "model-00013-of-00073.safetensors", + "model.layers.6.self_attn.v_proj.weight": "model-00013-of-00073.safetensors", + "model.layers.7.input_layernorm.weight": "model-00017-of-00073.safetensors", + "model.layers.7.mlp.experts.down_proj": "model-00017-of-00073.safetensors", + "model.layers.7.mlp.experts.down_proj_bias": "model-00017-of-00073.safetensors", + "model.layers.7.mlp.experts.gate_up_proj": "model-00016-of-00073.safetensors", + "model.layers.7.mlp.experts.gate_up_proj_bias": "model-00016-of-00073.safetensors", + "model.layers.7.mlp.router.bias": "model-00015-of-00073.safetensors", + "model.layers.7.mlp.router.weight": "model-00015-of-00073.safetensors", + "model.layers.7.post_attention_layernorm.weight": "model-00017-of-00073.safetensors", + "model.layers.7.self_attn.k_proj.bias": "model-00015-of-00073.safetensors", + "model.layers.7.self_attn.k_proj.weight": "model-00015-of-00073.safetensors", + "model.layers.7.self_attn.o_proj.bias": "model-00015-of-00073.safetensors", + "model.layers.7.self_attn.o_proj.weight": "model-00015-of-00073.safetensors", + "model.layers.7.self_attn.q_proj.bias": "model-00015-of-00073.safetensors", + "model.layers.7.self_attn.q_proj.weight": "model-00015-of-00073.safetensors", + "model.layers.7.self_attn.sinks": "model-00015-of-00073.safetensors", + "model.layers.7.self_attn.v_proj.bias": "model-00015-of-00073.safetensors", + 
"model.layers.7.self_attn.v_proj.weight": "model-00015-of-00073.safetensors", + "model.layers.8.input_layernorm.weight": "model-00019-of-00073.safetensors", + "model.layers.8.mlp.experts.down_proj": "model-00019-of-00073.safetensors", + "model.layers.8.mlp.experts.down_proj_bias": "model-00019-of-00073.safetensors", + "model.layers.8.mlp.experts.gate_up_proj": "model-00018-of-00073.safetensors", + "model.layers.8.mlp.experts.gate_up_proj_bias": "model-00018-of-00073.safetensors", + "model.layers.8.mlp.router.bias": "model-00017-of-00073.safetensors", + "model.layers.8.mlp.router.weight": "model-00017-of-00073.safetensors", + "model.layers.8.post_attention_layernorm.weight": "model-00019-of-00073.safetensors", + "model.layers.8.self_attn.k_proj.bias": "model-00017-of-00073.safetensors", + "model.layers.8.self_attn.k_proj.weight": "model-00017-of-00073.safetensors", + "model.layers.8.self_attn.o_proj.bias": "model-00017-of-00073.safetensors", + "model.layers.8.self_attn.o_proj.weight": "model-00017-of-00073.safetensors", + "model.layers.8.self_attn.q_proj.bias": "model-00017-of-00073.safetensors", + "model.layers.8.self_attn.q_proj.weight": "model-00017-of-00073.safetensors", + "model.layers.8.self_attn.sinks": "model-00017-of-00073.safetensors", + "model.layers.8.self_attn.v_proj.bias": "model-00017-of-00073.safetensors", + "model.layers.8.self_attn.v_proj.weight": "model-00017-of-00073.safetensors", + "model.layers.9.input_layernorm.weight": "model-00021-of-00073.safetensors", + "model.layers.9.mlp.experts.down_proj": "model-00021-of-00073.safetensors", + "model.layers.9.mlp.experts.down_proj_bias": "model-00021-of-00073.safetensors", + "model.layers.9.mlp.experts.gate_up_proj": "model-00020-of-00073.safetensors", + "model.layers.9.mlp.experts.gate_up_proj_bias": "model-00020-of-00073.safetensors", + "model.layers.9.mlp.router.bias": "model-00019-of-00073.safetensors", + "model.layers.9.mlp.router.weight": "model-00019-of-00073.safetensors", + 
"model.layers.9.post_attention_layernorm.weight": "model-00021-of-00073.safetensors", + "model.layers.9.self_attn.k_proj.bias": "model-00019-of-00073.safetensors", + "model.layers.9.self_attn.k_proj.weight": "model-00019-of-00073.safetensors", + "model.layers.9.self_attn.o_proj.bias": "model-00019-of-00073.safetensors", + "model.layers.9.self_attn.o_proj.weight": "model-00019-of-00073.safetensors", + "model.layers.9.self_attn.q_proj.bias": "model-00019-of-00073.safetensors", + "model.layers.9.self_attn.q_proj.weight": "model-00019-of-00073.safetensors", + "model.layers.9.self_attn.sinks": "model-00019-of-00073.safetensors", + "model.layers.9.self_attn.v_proj.bias": "model-00019-of-00073.safetensors", + "model.layers.9.self_attn.v_proj.weight": "model-00019-of-00073.safetensors", + "model.norm.weight": "model-00073-of-00073.safetensors" + } +} diff --git a/original/config.json b/original/config.json deleted file mode 100644 index ece4561b8e64e114d036c672b999ba38e8ae1fbb..0000000000000000000000000000000000000000 --- a/original/config.json +++ /dev/null @@ -1 +0,0 @@ -{"num_hidden_layers": 36, "num_experts": 128, "experts_per_token": 4, "vocab_size": 201088, "hidden_size": 2880, "intermediate_size": 2880, "swiglu_limit": 7.0, "head_dim": 64, "num_attention_heads": 64, "num_key_value_heads": 8, "sliding_window": 128, "initial_context_length": 4096, "rope_theta": 150000, "rope_scaling_factor": 32.0, "rope_ntk_alpha": 1, "rope_ntk_beta": 32} \ No newline at end of file diff --git a/original/dtypes.json b/original/dtypes.json deleted file mode 100644 index cadb8bc1d992e4c5aeecd3beaaae9d6509ff68e1..0000000000000000000000000000000000000000 --- a/original/dtypes.json +++ /dev/null @@ -1 +0,0 @@ -{"embedding.weight": "BF16", "block.0.attn.norm.scale": "BF16", "block.0.attn.qkv.weight": "BF16", "block.0.attn.qkv.bias": "BF16", "block.0.attn.sinks": "BF16", "block.0.attn.out.weight": "BF16", "block.0.attn.out.bias": "BF16", "block.0.mlp.norm.scale": "BF16", 
"block.0.mlp.gate.weight": "BF16", "block.0.mlp.gate.bias": "BF16", "block.0.mlp.mlp1_weight.blocks": "FP4", "block.0.mlp.mlp1_weight.scales": "UE8", "block.0.mlp.mlp1_bias": "BF16", "block.0.mlp.mlp2_weight.blocks": "FP4", "block.0.mlp.mlp2_weight.scales": "UE8", "block.0.mlp.mlp2_bias": "BF16", "block.1.attn.norm.scale": "BF16", "block.1.attn.qkv.weight": "BF16", "block.1.attn.qkv.bias": "BF16", "block.1.attn.sinks": "BF16", "block.1.attn.out.weight": "BF16", "block.1.attn.out.bias": "BF16", "block.1.mlp.norm.scale": "BF16", "block.1.mlp.gate.weight": "BF16", "block.1.mlp.gate.bias": "BF16", "block.1.mlp.mlp1_weight.blocks": "FP4", "block.1.mlp.mlp1_weight.scales": "UE8", "block.1.mlp.mlp1_bias": "BF16", "block.1.mlp.mlp2_weight.blocks": "FP4", "block.1.mlp.mlp2_weight.scales": "UE8", "block.1.mlp.mlp2_bias": "BF16", "block.2.attn.norm.scale": "BF16", "block.2.attn.qkv.weight": "BF16", "block.2.attn.qkv.bias": "BF16", "block.2.attn.sinks": "BF16", "block.2.attn.out.weight": "BF16", "block.2.attn.out.bias": "BF16", "block.2.mlp.norm.scale": "BF16", "block.2.mlp.gate.weight": "BF16", "block.2.mlp.gate.bias": "BF16", "block.2.mlp.mlp1_weight.blocks": "FP4", "block.2.mlp.mlp1_weight.scales": "UE8", "block.2.mlp.mlp1_bias": "BF16", "block.2.mlp.mlp2_weight.blocks": "FP4", "block.2.mlp.mlp2_weight.scales": "UE8", "block.2.mlp.mlp2_bias": "BF16", "block.3.attn.norm.scale": "BF16", "block.3.attn.qkv.weight": "BF16", "block.3.attn.qkv.bias": "BF16", "block.3.attn.sinks": "BF16", "block.3.attn.out.weight": "BF16", "block.3.attn.out.bias": "BF16", "block.3.mlp.norm.scale": "BF16", "block.3.mlp.gate.weight": "BF16", "block.3.mlp.gate.bias": "BF16", "block.3.mlp.mlp1_weight.blocks": "FP4", "block.3.mlp.mlp1_weight.scales": "UE8", "block.3.mlp.mlp1_bias": "BF16", "block.3.mlp.mlp2_weight.blocks": "FP4", "block.3.mlp.mlp2_weight.scales": "UE8", "block.3.mlp.mlp2_bias": "BF16", "block.4.attn.norm.scale": "BF16", "block.4.attn.qkv.weight": "BF16", "block.4.attn.qkv.bias": "BF16", 
"block.4.attn.sinks": "BF16", "block.4.attn.out.weight": "BF16", "block.4.attn.out.bias": "BF16", "block.4.mlp.norm.scale": "BF16", "block.4.mlp.gate.weight": "BF16", "block.4.mlp.gate.bias": "BF16", "block.4.mlp.mlp1_weight.blocks": "FP4", "block.4.mlp.mlp1_weight.scales": "UE8", "block.4.mlp.mlp1_bias": "BF16", "block.4.mlp.mlp2_weight.blocks": "FP4", "block.4.mlp.mlp2_weight.scales": "UE8", "block.4.mlp.mlp2_bias": "BF16", "block.5.attn.norm.scale": "BF16", "block.5.attn.qkv.weight": "BF16", "block.5.attn.qkv.bias": "BF16", "block.5.attn.sinks": "BF16", "block.5.attn.out.weight": "BF16", "block.5.attn.out.bias": "BF16", "block.5.mlp.norm.scale": "BF16", "block.5.mlp.gate.weight": "BF16", "block.5.mlp.gate.bias": "BF16", "block.5.mlp.mlp1_weight.blocks": "FP4", "block.5.mlp.mlp1_weight.scales": "UE8", "block.5.mlp.mlp1_bias": "BF16", "block.5.mlp.mlp2_weight.blocks": "FP4", "block.5.mlp.mlp2_weight.scales": "UE8", "block.5.mlp.mlp2_bias": "BF16", "block.6.attn.norm.scale": "BF16", "block.6.attn.qkv.weight": "BF16", "block.6.attn.qkv.bias": "BF16", "block.6.attn.sinks": "BF16", "block.6.attn.out.weight": "BF16", "block.6.attn.out.bias": "BF16", "block.6.mlp.norm.scale": "BF16", "block.6.mlp.gate.weight": "BF16", "block.6.mlp.gate.bias": "BF16", "block.6.mlp.mlp1_weight.blocks": "FP4", "block.6.mlp.mlp1_weight.scales": "UE8", "block.6.mlp.mlp1_bias": "BF16", "block.6.mlp.mlp2_weight.blocks": "FP4", "block.6.mlp.mlp2_weight.scales": "UE8", "block.6.mlp.mlp2_bias": "BF16", "block.7.attn.norm.scale": "BF16", "block.7.attn.qkv.weight": "BF16", "block.7.attn.qkv.bias": "BF16", "block.7.attn.sinks": "BF16", "block.7.attn.out.weight": "BF16", "block.7.attn.out.bias": "BF16", "block.7.mlp.norm.scale": "BF16", "block.7.mlp.gate.weight": "BF16", "block.7.mlp.gate.bias": "BF16", "block.7.mlp.mlp1_weight.blocks": "FP4", "block.7.mlp.mlp1_weight.scales": "UE8", "block.7.mlp.mlp1_bias": "BF16", "block.7.mlp.mlp2_weight.blocks": "FP4", "block.7.mlp.mlp2_weight.scales": "UE8", 
"block.7.mlp.mlp2_bias": "BF16", "block.8.attn.norm.scale": "BF16", "block.8.attn.qkv.weight": "BF16", "block.8.attn.qkv.bias": "BF16", "block.8.attn.sinks": "BF16", "block.8.attn.out.weight": "BF16", "block.8.attn.out.bias": "BF16", "block.8.mlp.norm.scale": "BF16", "block.8.mlp.gate.weight": "BF16", "block.8.mlp.gate.bias": "BF16", "block.8.mlp.mlp1_weight.blocks": "FP4", "block.8.mlp.mlp1_weight.scales": "UE8", "block.8.mlp.mlp1_bias": "BF16", "block.8.mlp.mlp2_weight.blocks": "FP4", "block.8.mlp.mlp2_weight.scales": "UE8", "block.8.mlp.mlp2_bias": "BF16", "block.9.attn.norm.scale": "BF16", "block.9.attn.qkv.weight": "BF16", "block.9.attn.qkv.bias": "BF16", "block.9.attn.sinks": "BF16", "block.9.attn.out.weight": "BF16", "block.9.attn.out.bias": "BF16", "block.9.mlp.norm.scale": "BF16", "block.9.mlp.gate.weight": "BF16", "block.9.mlp.gate.bias": "BF16", "block.9.mlp.mlp1_weight.blocks": "FP4", "block.9.mlp.mlp1_weight.scales": "UE8", "block.9.mlp.mlp1_bias": "BF16", "block.9.mlp.mlp2_weight.blocks": "FP4", "block.9.mlp.mlp2_weight.scales": "UE8", "block.9.mlp.mlp2_bias": "BF16", "block.10.attn.norm.scale": "BF16", "block.10.attn.qkv.weight": "BF16", "block.10.attn.qkv.bias": "BF16", "block.10.attn.sinks": "BF16", "block.10.attn.out.weight": "BF16", "block.10.attn.out.bias": "BF16", "block.10.mlp.norm.scale": "BF16", "block.10.mlp.gate.weight": "BF16", "block.10.mlp.gate.bias": "BF16", "block.10.mlp.mlp1_weight.blocks": "FP4", "block.10.mlp.mlp1_weight.scales": "UE8", "block.10.mlp.mlp1_bias": "BF16", "block.10.mlp.mlp2_weight.blocks": "FP4", "block.10.mlp.mlp2_weight.scales": "UE8", "block.10.mlp.mlp2_bias": "BF16", "block.11.attn.norm.scale": "BF16", "block.11.attn.qkv.weight": "BF16", "block.11.attn.qkv.bias": "BF16", "block.11.attn.sinks": "BF16", "block.11.attn.out.weight": "BF16", "block.11.attn.out.bias": "BF16", "block.11.mlp.norm.scale": "BF16", "block.11.mlp.gate.weight": "BF16", "block.11.mlp.gate.bias": "BF16", "block.11.mlp.mlp1_weight.blocks": 
"FP4", "block.11.mlp.mlp1_weight.scales": "UE8", "block.11.mlp.mlp1_bias": "BF16", "block.11.mlp.mlp2_weight.blocks": "FP4", "block.11.mlp.mlp2_weight.scales": "UE8", "block.11.mlp.mlp2_bias": "BF16", "block.12.attn.norm.scale": "BF16", "block.12.attn.qkv.weight": "BF16", "block.12.attn.qkv.bias": "BF16", "block.12.attn.sinks": "BF16", "block.12.attn.out.weight": "BF16", "block.12.attn.out.bias": "BF16", "block.12.mlp.norm.scale": "BF16", "block.12.mlp.gate.weight": "BF16", "block.12.mlp.gate.bias": "BF16", "block.12.mlp.mlp1_weight.blocks": "FP4", "block.12.mlp.mlp1_weight.scales": "UE8", "block.12.mlp.mlp1_bias": "BF16", "block.12.mlp.mlp2_weight.blocks": "FP4", "block.12.mlp.mlp2_weight.scales": "UE8", "block.12.mlp.mlp2_bias": "BF16", "block.13.attn.norm.scale": "BF16", "block.13.attn.qkv.weight": "BF16", "block.13.attn.qkv.bias": "BF16", "block.13.attn.sinks": "BF16", "block.13.attn.out.weight": "BF16", "block.13.attn.out.bias": "BF16", "block.13.mlp.norm.scale": "BF16", "block.13.mlp.gate.weight": "BF16", "block.13.mlp.gate.bias": "BF16", "block.13.mlp.mlp1_weight.blocks": "FP4", "block.13.mlp.mlp1_weight.scales": "UE8", "block.13.mlp.mlp1_bias": "BF16", "block.13.mlp.mlp2_weight.blocks": "FP4", "block.13.mlp.mlp2_weight.scales": "UE8", "block.13.mlp.mlp2_bias": "BF16", "block.14.attn.norm.scale": "BF16", "block.14.attn.qkv.weight": "BF16", "block.14.attn.qkv.bias": "BF16", "block.14.attn.sinks": "BF16", "block.14.attn.out.weight": "BF16", "block.14.attn.out.bias": "BF16", "block.14.mlp.norm.scale": "BF16", "block.14.mlp.gate.weight": "BF16", "block.14.mlp.gate.bias": "BF16", "block.14.mlp.mlp1_weight.blocks": "FP4", "block.14.mlp.mlp1_weight.scales": "UE8", "block.14.mlp.mlp1_bias": "BF16", "block.14.mlp.mlp2_weight.blocks": "FP4", "block.14.mlp.mlp2_weight.scales": "UE8", "block.14.mlp.mlp2_bias": "BF16", "block.15.attn.norm.scale": "BF16", "block.15.attn.qkv.weight": "BF16", "block.15.attn.qkv.bias": "BF16", "block.15.attn.sinks": "BF16", 
"block.15.attn.out.weight": "BF16", "block.15.attn.out.bias": "BF16", "block.15.mlp.norm.scale": "BF16", "block.15.mlp.gate.weight": "BF16", "block.15.mlp.gate.bias": "BF16", "block.15.mlp.mlp1_weight.blocks": "FP4", "block.15.mlp.mlp1_weight.scales": "UE8", "block.15.mlp.mlp1_bias": "BF16", "block.15.mlp.mlp2_weight.blocks": "FP4", "block.15.mlp.mlp2_weight.scales": "UE8", "block.15.mlp.mlp2_bias": "BF16", "block.16.attn.norm.scale": "BF16", "block.16.attn.qkv.weight": "BF16", "block.16.attn.qkv.bias": "BF16", "block.16.attn.sinks": "BF16", "block.16.attn.out.weight": "BF16", "block.16.attn.out.bias": "BF16", "block.16.mlp.norm.scale": "BF16", "block.16.mlp.gate.weight": "BF16", "block.16.mlp.gate.bias": "BF16", "block.16.mlp.mlp1_weight.blocks": "FP4", "block.16.mlp.mlp1_weight.scales": "UE8", "block.16.mlp.mlp1_bias": "BF16", "block.16.mlp.mlp2_weight.blocks": "FP4", "block.16.mlp.mlp2_weight.scales": "UE8", "block.16.mlp.mlp2_bias": "BF16", "block.17.attn.norm.scale": "BF16", "block.17.attn.qkv.weight": "BF16", "block.17.attn.qkv.bias": "BF16", "block.17.attn.sinks": "BF16", "block.17.attn.out.weight": "BF16", "block.17.attn.out.bias": "BF16", "block.17.mlp.norm.scale": "BF16", "block.17.mlp.gate.weight": "BF16", "block.17.mlp.gate.bias": "BF16", "block.17.mlp.mlp1_weight.blocks": "FP4", "block.17.mlp.mlp1_weight.scales": "UE8", "block.17.mlp.mlp1_bias": "BF16", "block.17.mlp.mlp2_weight.blocks": "FP4", "block.17.mlp.mlp2_weight.scales": "UE8", "block.17.mlp.mlp2_bias": "BF16", "block.18.attn.norm.scale": "BF16", "block.18.attn.qkv.weight": "BF16", "block.18.attn.qkv.bias": "BF16", "block.18.attn.sinks": "BF16", "block.18.attn.out.weight": "BF16", "block.18.attn.out.bias": "BF16", "block.18.mlp.norm.scale": "BF16", "block.18.mlp.gate.weight": "BF16", "block.18.mlp.gate.bias": "BF16", "block.18.mlp.mlp1_weight.blocks": "FP4", "block.18.mlp.mlp1_weight.scales": "UE8", "block.18.mlp.mlp1_bias": "BF16", "block.18.mlp.mlp2_weight.blocks": "FP4", 
"block.18.mlp.mlp2_weight.scales": "UE8", "block.18.mlp.mlp2_bias": "BF16", "block.19.attn.norm.scale": "BF16", "block.19.attn.qkv.weight": "BF16", "block.19.attn.qkv.bias": "BF16", "block.19.attn.sinks": "BF16", "block.19.attn.out.weight": "BF16", "block.19.attn.out.bias": "BF16", "block.19.mlp.norm.scale": "BF16", "block.19.mlp.gate.weight": "BF16", "block.19.mlp.gate.bias": "BF16", "block.19.mlp.mlp1_weight.blocks": "FP4", "block.19.mlp.mlp1_weight.scales": "UE8", "block.19.mlp.mlp1_bias": "BF16", "block.19.mlp.mlp2_weight.blocks": "FP4", "block.19.mlp.mlp2_weight.scales": "UE8", "block.19.mlp.mlp2_bias": "BF16", "block.20.attn.norm.scale": "BF16", "block.20.attn.qkv.weight": "BF16", "block.20.attn.qkv.bias": "BF16", "block.20.attn.sinks": "BF16", "block.20.attn.out.weight": "BF16", "block.20.attn.out.bias": "BF16", "block.20.mlp.norm.scale": "BF16", "block.20.mlp.gate.weight": "BF16", "block.20.mlp.gate.bias": "BF16", "block.20.mlp.mlp1_weight.blocks": "FP4", "block.20.mlp.mlp1_weight.scales": "UE8", "block.20.mlp.mlp1_bias": "BF16", "block.20.mlp.mlp2_weight.blocks": "FP4", "block.20.mlp.mlp2_weight.scales": "UE8", "block.20.mlp.mlp2_bias": "BF16", "block.21.attn.norm.scale": "BF16", "block.21.attn.qkv.weight": "BF16", "block.21.attn.qkv.bias": "BF16", "block.21.attn.sinks": "BF16", "block.21.attn.out.weight": "BF16", "block.21.attn.out.bias": "BF16", "block.21.mlp.norm.scale": "BF16", "block.21.mlp.gate.weight": "BF16", "block.21.mlp.gate.bias": "BF16", "block.21.mlp.mlp1_weight.blocks": "FP4", "block.21.mlp.mlp1_weight.scales": "UE8", "block.21.mlp.mlp1_bias": "BF16", "block.21.mlp.mlp2_weight.blocks": "FP4", "block.21.mlp.mlp2_weight.scales": "UE8", "block.21.mlp.mlp2_bias": "BF16", "block.22.attn.norm.scale": "BF16", "block.22.attn.qkv.weight": "BF16", "block.22.attn.qkv.bias": "BF16", "block.22.attn.sinks": "BF16", "block.22.attn.out.weight": "BF16", "block.22.attn.out.bias": "BF16", "block.22.mlp.norm.scale": "BF16", "block.22.mlp.gate.weight": "BF16", 
"block.22.mlp.gate.bias": "BF16", "block.22.mlp.mlp1_weight.blocks": "FP4", "block.22.mlp.mlp1_weight.scales": "UE8", "block.22.mlp.mlp1_bias": "BF16", "block.22.mlp.mlp2_weight.blocks": "FP4", "block.22.mlp.mlp2_weight.scales": "UE8", "block.22.mlp.mlp2_bias": "BF16", "block.23.attn.norm.scale": "BF16", "block.23.attn.qkv.weight": "BF16", "block.23.attn.qkv.bias": "BF16", "block.23.attn.sinks": "BF16", "block.23.attn.out.weight": "BF16", "block.23.attn.out.bias": "BF16", "block.23.mlp.norm.scale": "BF16", "block.23.mlp.gate.weight": "BF16", "block.23.mlp.gate.bias": "BF16", "block.23.mlp.mlp1_weight.blocks": "FP4", "block.23.mlp.mlp1_weight.scales": "UE8", "block.23.mlp.mlp1_bias": "BF16", "block.23.mlp.mlp2_weight.blocks": "FP4", "block.23.mlp.mlp2_weight.scales": "UE8", "block.23.mlp.mlp2_bias": "BF16", "block.24.attn.norm.scale": "BF16", "block.24.attn.qkv.weight": "BF16", "block.24.attn.qkv.bias": "BF16", "block.24.attn.sinks": "BF16", "block.24.attn.out.weight": "BF16", "block.24.attn.out.bias": "BF16", "block.24.mlp.norm.scale": "BF16", "block.24.mlp.gate.weight": "BF16", "block.24.mlp.gate.bias": "BF16", "block.24.mlp.mlp1_weight.blocks": "FP4", "block.24.mlp.mlp1_weight.scales": "UE8", "block.24.mlp.mlp1_bias": "BF16", "block.24.mlp.mlp2_weight.blocks": "FP4", "block.24.mlp.mlp2_weight.scales": "UE8", "block.24.mlp.mlp2_bias": "BF16", "block.25.attn.norm.scale": "BF16", "block.25.attn.qkv.weight": "BF16", "block.25.attn.qkv.bias": "BF16", "block.25.attn.sinks": "BF16", "block.25.attn.out.weight": "BF16", "block.25.attn.out.bias": "BF16", "block.25.mlp.norm.scale": "BF16", "block.25.mlp.gate.weight": "BF16", "block.25.mlp.gate.bias": "BF16", "block.25.mlp.mlp1_weight.blocks": "FP4", "block.25.mlp.mlp1_weight.scales": "UE8", "block.25.mlp.mlp1_bias": "BF16", "block.25.mlp.mlp2_weight.blocks": "FP4", "block.25.mlp.mlp2_weight.scales": "UE8", "block.25.mlp.mlp2_bias": "BF16", "block.26.attn.norm.scale": "BF16", "block.26.attn.qkv.weight": "BF16", 
"block.26.attn.qkv.bias": "BF16", "block.26.attn.sinks": "BF16", "block.26.attn.out.weight": "BF16", "block.26.attn.out.bias": "BF16", "block.26.mlp.norm.scale": "BF16", "block.26.mlp.gate.weight": "BF16", "block.26.mlp.gate.bias": "BF16", "block.26.mlp.mlp1_weight.blocks": "FP4", "block.26.mlp.mlp1_weight.scales": "UE8", "block.26.mlp.mlp1_bias": "BF16", "block.26.mlp.mlp2_weight.blocks": "FP4", "block.26.mlp.mlp2_weight.scales": "UE8", "block.26.mlp.mlp2_bias": "BF16", "block.27.attn.norm.scale": "BF16", "block.27.attn.qkv.weight": "BF16", "block.27.attn.qkv.bias": "BF16", "block.27.attn.sinks": "BF16", "block.27.attn.out.weight": "BF16", "block.27.attn.out.bias": "BF16", "block.27.mlp.norm.scale": "BF16", "block.27.mlp.gate.weight": "BF16", "block.27.mlp.gate.bias": "BF16", "block.27.mlp.mlp1_weight.blocks": "FP4", "block.27.mlp.mlp1_weight.scales": "UE8", "block.27.mlp.mlp1_bias": "BF16", "block.27.mlp.mlp2_weight.blocks": "FP4", "block.27.mlp.mlp2_weight.scales": "UE8", "block.27.mlp.mlp2_bias": "BF16", "block.28.attn.norm.scale": "BF16", "block.28.attn.qkv.weight": "BF16", "block.28.attn.qkv.bias": "BF16", "block.28.attn.sinks": "BF16", "block.28.attn.out.weight": "BF16", "block.28.attn.out.bias": "BF16", "block.28.mlp.norm.scale": "BF16", "block.28.mlp.gate.weight": "BF16", "block.28.mlp.gate.bias": "BF16", "block.28.mlp.mlp1_weight.blocks": "FP4", "block.28.mlp.mlp1_weight.scales": "UE8", "block.28.mlp.mlp1_bias": "BF16", "block.28.mlp.mlp2_weight.blocks": "FP4", "block.28.mlp.mlp2_weight.scales": "UE8", "block.28.mlp.mlp2_bias": "BF16", "block.29.attn.norm.scale": "BF16", "block.29.attn.qkv.weight": "BF16", "block.29.attn.qkv.bias": "BF16", "block.29.attn.sinks": "BF16", "block.29.attn.out.weight": "BF16", "block.29.attn.out.bias": "BF16", "block.29.mlp.norm.scale": "BF16", "block.29.mlp.gate.weight": "BF16", "block.29.mlp.gate.bias": "BF16", "block.29.mlp.mlp1_weight.blocks": "FP4", "block.29.mlp.mlp1_weight.scales": "UE8", "block.29.mlp.mlp1_bias": 
"BF16", "block.29.mlp.mlp2_weight.blocks": "FP4", "block.29.mlp.mlp2_weight.scales": "UE8", "block.29.mlp.mlp2_bias": "BF16", "block.30.attn.norm.scale": "BF16", "block.30.attn.qkv.weight": "BF16", "block.30.attn.qkv.bias": "BF16", "block.30.attn.sinks": "BF16", "block.30.attn.out.weight": "BF16", "block.30.attn.out.bias": "BF16", "block.30.mlp.norm.scale": "BF16", "block.30.mlp.gate.weight": "BF16", "block.30.mlp.gate.bias": "BF16", "block.30.mlp.mlp1_weight.blocks": "FP4", "block.30.mlp.mlp1_weight.scales": "UE8", "block.30.mlp.mlp1_bias": "BF16", "block.30.mlp.mlp2_weight.blocks": "FP4", "block.30.mlp.mlp2_weight.scales": "UE8", "block.30.mlp.mlp2_bias": "BF16", "block.31.attn.norm.scale": "BF16", "block.31.attn.qkv.weight": "BF16", "block.31.attn.qkv.bias": "BF16", "block.31.attn.sinks": "BF16", "block.31.attn.out.weight": "BF16", "block.31.attn.out.bias": "BF16", "block.31.mlp.norm.scale": "BF16", "block.31.mlp.gate.weight": "BF16", "block.31.mlp.gate.bias": "BF16", "block.31.mlp.mlp1_weight.blocks": "FP4", "block.31.mlp.mlp1_weight.scales": "UE8", "block.31.mlp.mlp1_bias": "BF16", "block.31.mlp.mlp2_weight.blocks": "FP4", "block.31.mlp.mlp2_weight.scales": "UE8", "block.31.mlp.mlp2_bias": "BF16", "block.32.attn.norm.scale": "BF16", "block.32.attn.qkv.weight": "BF16", "block.32.attn.qkv.bias": "BF16", "block.32.attn.sinks": "BF16", "block.32.attn.out.weight": "BF16", "block.32.attn.out.bias": "BF16", "block.32.mlp.norm.scale": "BF16", "block.32.mlp.gate.weight": "BF16", "block.32.mlp.gate.bias": "BF16", "block.32.mlp.mlp1_weight.blocks": "FP4", "block.32.mlp.mlp1_weight.scales": "UE8", "block.32.mlp.mlp1_bias": "BF16", "block.32.mlp.mlp2_weight.blocks": "FP4", "block.32.mlp.mlp2_weight.scales": "UE8", "block.32.mlp.mlp2_bias": "BF16", "block.33.attn.norm.scale": "BF16", "block.33.attn.qkv.weight": "BF16", "block.33.attn.qkv.bias": "BF16", "block.33.attn.sinks": "BF16", "block.33.attn.out.weight": "BF16", "block.33.attn.out.bias": "BF16", 
"block.33.mlp.norm.scale": "BF16", "block.33.mlp.gate.weight": "BF16", "block.33.mlp.gate.bias": "BF16", "block.33.mlp.mlp1_weight.blocks": "FP4", "block.33.mlp.mlp1_weight.scales": "UE8", "block.33.mlp.mlp1_bias": "BF16", "block.33.mlp.mlp2_weight.blocks": "FP4", "block.33.mlp.mlp2_weight.scales": "UE8", "block.33.mlp.mlp2_bias": "BF16", "block.34.attn.norm.scale": "BF16", "block.34.attn.qkv.weight": "BF16", "block.34.attn.qkv.bias": "BF16", "block.34.attn.sinks": "BF16", "block.34.attn.out.weight": "BF16", "block.34.attn.out.bias": "BF16", "block.34.mlp.norm.scale": "BF16", "block.34.mlp.gate.weight": "BF16", "block.34.mlp.gate.bias": "BF16", "block.34.mlp.mlp1_weight.blocks": "FP4", "block.34.mlp.mlp1_weight.scales": "UE8", "block.34.mlp.mlp1_bias": "BF16", "block.34.mlp.mlp2_weight.blocks": "FP4", "block.34.mlp.mlp2_weight.scales": "UE8", "block.34.mlp.mlp2_bias": "BF16", "block.35.attn.norm.scale": "BF16", "block.35.attn.qkv.weight": "BF16", "block.35.attn.qkv.bias": "BF16", "block.35.attn.sinks": "BF16", "block.35.attn.out.weight": "BF16", "block.35.attn.out.bias": "BF16", "block.35.mlp.norm.scale": "BF16", "block.35.mlp.gate.weight": "BF16", "block.35.mlp.gate.bias": "BF16", "block.35.mlp.mlp1_weight.blocks": "FP4", "block.35.mlp.mlp1_weight.scales": "UE8", "block.35.mlp.mlp1_bias": "BF16", "block.35.mlp.mlp2_weight.blocks": "FP4", "block.35.mlp.mlp2_weight.scales": "UE8", "block.35.mlp.mlp2_bias": "BF16", "norm.scale": "BF16", "unembedding.weight": "BF16"}
\ No newline at end of file
diff --git a/original/model--00001-of-00007.safetensors b/original/model--00001-of-00007.safetensors
deleted file mode 100644
index 80f34bea4f777d0c3762e7d49df8aa31b855d653..0000000000000000000000000000000000000000
--- a/original/model--00001-of-00007.safetensors
+++ /dev/null
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:68a8dc1f8e2e5996cb702f14332a25ddf3463daeab2df68e21ca09ef181203c3
-size 10544040680
diff --git a/original/model--00002-of-00007.safetensors b/original/model--00002-of-00007.safetensors
deleted file mode 100644
index a0165d5415ff9b14c4b67323c76573d3425699f0..0000000000000000000000000000000000000000
--- a/original/model--00002-of-00007.safetensors
+++ /dev/null
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:19b8f0d5c7dc3195c61a711d08384a1f85624f018186da541585c0f97ac61020
-size 10488721680
diff --git a/original/model--00003-of-00007.safetensors b/original/model--00003-of-00007.safetensors
deleted file mode 100644
index 34dd1b9735600e96ae58a7b50dd2f326e61f7c74..0000000000000000000000000000000000000000
--- a/original/model--00003-of-00007.safetensors
+++ /dev/null
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:0dbccd746d50e9543e8016d0a43ab4487c7f86d72349b1ef17abdfec509d0701
-size 10488721688
diff --git a/original/model--00004-of-00007.safetensors b/original/model--00004-of-00007.safetensors
deleted file mode 100644
index a58203e49954ad7d16e1981cffa386867be30972..0000000000000000000000000000000000000000
--- a/original/model--00004-of-00007.safetensors
+++ /dev/null
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:bcc73cf6d18f96a2e62428758463157cc12768f410873152a50d3929a64cd049
-size 10488721672
diff --git a/original/model--00005-of-00007.safetensors b/original/model--00005-of-00007.safetensors
deleted file mode 100644
index 8495f23cf1dd31cbbee1775fe83fc20cf0ae00cf..0000000000000000000000000000000000000000
--- a/original/model--00005-of-00007.safetensors
+++ /dev/null
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:15fd69843e9cc6fdf2db0efe0cf0979b49a6ba84b3a38169b2fabc5479d04a7d
-size 10488721680
diff --git a/original/model--00006-of-00007.safetensors b/original/model--00006-of-00007.safetensors
deleted file mode 100644
index 2763decb72d62523cade9e55d14e812875925e15..0000000000000000000000000000000000000000
--- a/original/model--00006-of-00007.safetensors
+++ /dev/null
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:3aedef2ee0a5a78a003b3f74fd6883033946b80097bf41e4f4715d95066f0588
-size 10433402600
diff --git a/original/model--00007-of-00007.safetensors b/original/model--00007-of-00007.safetensors
deleted file mode 100644
index f705f03a6b04c570c8869e1817aba7ee2b4cdd59..0000000000000000000000000000000000000000
--- a/original/model--00007-of-00007.safetensors
+++ /dev/null
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:20d5dfcad1ed6c50aa3c0da7d3f08828dba72b5f58686a987bf3a8f01659cda6
-size 2316539800
diff --git a/original/model.safetensors.index.json b/original/model.safetensors.index.json
deleted file mode 100644
index a3a283fe7dbbf8de067cf3f71d44f1f840318d98..0000000000000000000000000000000000000000
--- a/original/model.safetensors.index.json
+++ /dev/null
@@ -1,550 +0,0 @@
-{
-  "metadata": {
-    "total_size": 65248815744
-  },
-  "weight_map": {
-    "block.0.attn.norm.scale": "model--00001-of-00007.safetensors",
-    "block.0.attn.out.bias": "model--00001-of-00007.safetensors",
-    "block.0.attn.out.weight": "model--00001-of-00007.safetensors",
-    "block.0.attn.qkv.bias": "model--00001-of-00007.safetensors",
-    "block.0.attn.qkv.weight": "model--00001-of-00007.safetensors",
-    "block.0.attn.sinks": "model--00001-of-00007.safetensors",
-    "block.0.mlp.gate.bias": "model--00001-of-00007.safetensors",
-    "block.0.mlp.gate.weight": "model--00001-of-00007.safetensors",
-    "block.0.mlp.mlp1_bias": "model--00001-of-00007.safetensors",
-    "block.0.mlp.mlp1_weight.blocks": "model--00001-of-00007.safetensors",
-    "block.0.mlp.mlp1_weight.scales": "model--00001-of-00007.safetensors",
-    "block.0.mlp.mlp2_bias": "model--00001-of-00007.safetensors",
-    "block.0.mlp.mlp2_weight.blocks": "model--00001-of-00007.safetensors",
-    "block.0.mlp.mlp2_weight.scales": "model--00001-of-00007.safetensors",
-    "block.0.mlp.norm.scale": "model--00001-of-00007.safetensors",
-    "block.1.attn.norm.scale": "model--00001-of-00007.safetensors",
-    "block.1.attn.out.bias": "model--00001-of-00007.safetensors",
-    "block.1.attn.out.weight": "model--00001-of-00007.safetensors",
-    "block.1.attn.qkv.bias": "model--00001-of-00007.safetensors",
-    "block.1.attn.qkv.weight": "model--00001-of-00007.safetensors",
-    "block.1.attn.sinks": "model--00001-of-00007.safetensors",
-    "block.1.mlp.gate.bias": "model--00001-of-00007.safetensors",
-    "block.1.mlp.gate.weight": "model--00001-of-00007.safetensors",
-    "block.1.mlp.mlp1_bias": "model--00001-of-00007.safetensors",
-    "block.1.mlp.mlp1_weight.blocks": "model--00001-of-00007.safetensors",
-    "block.1.mlp.mlp1_weight.scales": "model--00001-of-00007.safetensors",
-    "block.1.mlp.mlp2_bias": "model--00001-of-00007.safetensors",
-    "block.1.mlp.mlp2_weight.blocks": "model--00001-of-00007.safetensors",
-    "block.1.mlp.mlp2_weight.scales": "model--00001-of-00007.safetensors",
-    "block.1.mlp.norm.scale": "model--00001-of-00007.safetensors",
-    "block.10.attn.norm.scale": "model--00001-of-00007.safetensors",
-    "block.10.attn.out.bias": "model--00001-of-00007.safetensors",
-    "block.10.attn.out.weight": "model--00001-of-00007.safetensors",
-    "block.10.attn.qkv.bias": "model--00001-of-00007.safetensors",
-    "block.10.attn.qkv.weight": "model--00001-of-00007.safetensors",
-    "block.10.attn.sinks": "model--00001-of-00007.safetensors",
-    "block.10.mlp.gate.bias": "model--00001-of-00007.safetensors",
-    "block.10.mlp.gate.weight": "model--00001-of-00007.safetensors",
-    "block.10.mlp.mlp1_bias": "model--00001-of-00007.safetensors",
-    "block.10.mlp.mlp1_weight.blocks": "model--00001-of-00007.safetensors",
-    "block.10.mlp.mlp1_weight.scales": "model--00001-of-00007.safetensors",
-    "block.10.mlp.mlp2_bias": "model--00001-of-00007.safetensors",
-    "block.10.mlp.mlp2_weight.blocks": "model--00001-of-00007.safetensors",
-    "block.10.mlp.mlp2_weight.scales": "model--00001-of-00007.safetensors",
-    "block.10.mlp.norm.scale": "model--00001-of-00007.safetensors",
-    "block.11.attn.norm.scale": "model--00001-of-00007.safetensors",
-    "block.11.attn.out.bias": "model--00001-of-00007.safetensors",
-    "block.11.attn.out.weight": "model--00001-of-00007.safetensors",
-    "block.11.attn.qkv.bias": "model--00001-of-00007.safetensors",
-    "block.11.attn.qkv.weight": "model--00001-of-00007.safetensors",
-    "block.11.attn.sinks": "model--00001-of-00007.safetensors",
-    "block.11.mlp.gate.bias": "model--00001-of-00007.safetensors",
-    "block.11.mlp.gate.weight": "model--00001-of-00007.safetensors",
-    "block.11.mlp.mlp1_bias": "model--00001-of-00007.safetensors",
-    "block.11.mlp.mlp1_weight.blocks": "model--00001-of-00007.safetensors",
-    "block.11.mlp.mlp1_weight.scales": "model--00001-of-00007.safetensors",
-    "block.11.mlp.mlp2_bias": "model--00001-of-00007.safetensors",
-    "block.11.mlp.mlp2_weight.blocks": "model--00001-of-00007.safetensors",
-    "block.11.mlp.mlp2_weight.scales": "model--00001-of-00007.safetensors",
-    "block.11.mlp.norm.scale": "model--00001-of-00007.safetensors",
-    "block.12.attn.norm.scale": "model--00001-of-00007.safetensors",
-    "block.12.attn.out.bias": "model--00001-of-00007.safetensors",
-    "block.12.attn.out.weight": "model--00001-of-00007.safetensors",
-    "block.12.attn.qkv.bias": "model--00001-of-00007.safetensors",
-    "block.12.attn.qkv.weight": "model--00001-of-00007.safetensors",
-    "block.12.attn.sinks": "model--00001-of-00007.safetensors",
-    "block.12.mlp.gate.bias": "model--00001-of-00007.safetensors",
-    "block.12.mlp.gate.weight": "model--00001-of-00007.safetensors",
-    "block.12.mlp.mlp1_bias": "model--00001-of-00007.safetensors",
-    "block.12.mlp.mlp1_weight.blocks": "model--00001-of-00007.safetensors",
-    "block.12.mlp.mlp1_weight.scales": "model--00001-of-00007.safetensors",
-    "block.12.mlp.mlp2_bias": "model--00001-of-00007.safetensors",
-    "block.12.mlp.mlp2_weight.blocks": "model--00001-of-00007.safetensors",
-    "block.12.mlp.mlp2_weight.scales": "model--00001-of-00007.safetensors",
-    "block.12.mlp.norm.scale": "model--00001-of-00007.safetensors",
-    "block.13.attn.norm.scale": "model--00001-of-00007.safetensors",
-    "block.13.attn.out.bias": "model--00001-of-00007.safetensors",
-    "block.13.attn.out.weight": "model--00001-of-00007.safetensors",
-    "block.13.attn.qkv.bias": "model--00001-of-00007.safetensors",
-    "block.13.attn.qkv.weight": "model--00001-of-00007.safetensors",
-    "block.13.attn.sinks": "model--00001-of-00007.safetensors",
-    "block.13.mlp.gate.bias": "model--00001-of-00007.safetensors",
-    "block.13.mlp.gate.weight": "model--00001-of-00007.safetensors",
-    "block.13.mlp.mlp1_bias": "model--00001-of-00007.safetensors",
-    "block.13.mlp.mlp1_weight.blocks": "model--00001-of-00007.safetensors",
-    "block.13.mlp.mlp1_weight.scales": "model--00001-of-00007.safetensors",
-    "block.13.mlp.mlp2_bias": "model--00001-of-00007.safetensors",
-    "block.13.mlp.mlp2_weight.blocks": "model--00001-of-00007.safetensors",
-    "block.13.mlp.mlp2_weight.scales": "model--00001-of-00007.safetensors",
-    "block.13.mlp.norm.scale": "model--00001-of-00007.safetensors",
-    "block.14.attn.norm.scale": "model--00001-of-00007.safetensors",
-    "block.14.attn.out.bias": "model--00001-of-00007.safetensors",
-    "block.14.attn.out.weight": "model--00001-of-00007.safetensors",
-    "block.14.attn.qkv.bias": "model--00001-of-00007.safetensors",
-    "block.14.attn.qkv.weight": "model--00001-of-00007.safetensors",
-    "block.14.attn.sinks": "model--00001-of-00007.safetensors",
-    "block.14.mlp.gate.bias": "model--00001-of-00007.safetensors",
-    "block.14.mlp.gate.weight": "model--00001-of-00007.safetensors",
-    "block.14.mlp.mlp1_bias": "model--00001-of-00007.safetensors",
-    "block.14.mlp.mlp1_weight.blocks": "model--00002-of-00007.safetensors",
-    "block.14.mlp.mlp1_weight.scales": "model--00002-of-00007.safetensors",
-    "block.14.mlp.mlp2_bias": "model--00002-of-00007.safetensors",
-    "block.14.mlp.mlp2_weight.blocks": "model--00002-of-00007.safetensors",
-    "block.14.mlp.mlp2_weight.scales": "model--00002-of-00007.safetensors",
-    "block.14.mlp.norm.scale": "model--00002-of-00007.safetensors",
-    "block.15.attn.norm.scale": "model--00002-of-00007.safetensors",
-    "block.15.attn.out.bias": "model--00002-of-00007.safetensors",
-    "block.15.attn.out.weight": "model--00002-of-00007.safetensors",
-    "block.15.attn.qkv.bias": "model--00002-of-00007.safetensors",
-    "block.15.attn.qkv.weight": "model--00002-of-00007.safetensors",
-    "block.15.attn.sinks": "model--00002-of-00007.safetensors",
-    "block.15.mlp.gate.bias": "model--00002-of-00007.safetensors",
-    "block.15.mlp.gate.weight": "model--00002-of-00007.safetensors",
-    "block.15.mlp.mlp1_bias": "model--00002-of-00007.safetensors",
-    "block.15.mlp.mlp1_weight.blocks": "model--00002-of-00007.safetensors",
-    "block.15.mlp.mlp1_weight.scales": "model--00002-of-00007.safetensors",
-    "block.15.mlp.mlp2_bias": "model--00002-of-00007.safetensors",
-    "block.15.mlp.mlp2_weight.blocks": "model--00002-of-00007.safetensors",
-    "block.15.mlp.mlp2_weight.scales": "model--00002-of-00007.safetensors",
-    "block.15.mlp.norm.scale": "model--00002-of-00007.safetensors",
-    "block.16.attn.norm.scale": "model--00002-of-00007.safetensors",
-    "block.16.attn.out.bias": "model--00002-of-00007.safetensors",
-    "block.16.attn.out.weight": "model--00002-of-00007.safetensors",
-    "block.16.attn.qkv.bias": "model--00002-of-00007.safetensors",
-    "block.16.attn.qkv.weight": "model--00002-of-00007.safetensors",
-    "block.16.attn.sinks": "model--00002-of-00007.safetensors",
-    "block.16.mlp.gate.bias": "model--00002-of-00007.safetensors",
-    "block.16.mlp.gate.weight": "model--00002-of-00007.safetensors",
-    "block.16.mlp.mlp1_bias": "model--00002-of-00007.safetensors",
-    "block.16.mlp.mlp1_weight.blocks": "model--00002-of-00007.safetensors",
-    "block.16.mlp.mlp1_weight.scales": "model--00002-of-00007.safetensors",
-    "block.16.mlp.mlp2_bias": "model--00002-of-00007.safetensors",
-    "block.16.mlp.mlp2_weight.blocks": "model--00002-of-00007.safetensors",
-    "block.16.mlp.mlp2_weight.scales": "model--00002-of-00007.safetensors",
-    "block.16.mlp.norm.scale": "model--00002-of-00007.safetensors",
-    "block.17.attn.norm.scale": "model--00002-of-00007.safetensors",
-    "block.17.attn.out.bias": "model--00002-of-00007.safetensors",
-    "block.17.attn.out.weight": "model--00002-of-00007.safetensors",
-    "block.17.attn.qkv.bias": "model--00002-of-00007.safetensors",
-    "block.17.attn.qkv.weight": "model--00002-of-00007.safetensors",
-    "block.17.attn.sinks": "model--00002-of-00007.safetensors",
-    "block.17.mlp.gate.bias": "model--00002-of-00007.safetensors",
-    "block.17.mlp.gate.weight": "model--00002-of-00007.safetensors",
-    "block.17.mlp.mlp1_bias": "model--00002-of-00007.safetensors",
-    "block.17.mlp.mlp1_weight.blocks": "model--00002-of-00007.safetensors",
-    "block.17.mlp.mlp1_weight.scales": "model--00002-of-00007.safetensors",
-    "block.17.mlp.mlp2_bias": "model--00002-of-00007.safetensors",
-    "block.17.mlp.mlp2_weight.blocks": "model--00002-of-00007.safetensors",
-    "block.17.mlp.mlp2_weight.scales": "model--00002-of-00007.safetensors",
-    "block.17.mlp.norm.scale": "model--00002-of-00007.safetensors",
-    "block.18.attn.norm.scale": "model--00002-of-00007.safetensors",
-    "block.18.attn.out.bias": "model--00002-of-00007.safetensors",
-    "block.18.attn.out.weight": "model--00002-of-00007.safetensors",
-    "block.18.attn.qkv.bias": "model--00002-of-00007.safetensors",
-    "block.18.attn.qkv.weight": "model--00002-of-00007.safetensors",
-    "block.18.attn.sinks": "model--00002-of-00007.safetensors",
-    "block.18.mlp.gate.bias": "model--00002-of-00007.safetensors",
-    "block.18.mlp.gate.weight": "model--00002-of-00007.safetensors",
-    "block.18.mlp.mlp1_bias": "model--00002-of-00007.safetensors",
-    "block.18.mlp.mlp1_weight.blocks": "model--00002-of-00007.safetensors",
-    "block.18.mlp.mlp1_weight.scales": "model--00002-of-00007.safetensors",
-    "block.18.mlp.mlp2_bias": "model--00002-of-00007.safetensors",
-    "block.18.mlp.mlp2_weight.blocks": "model--00002-of-00007.safetensors",
-    "block.18.mlp.mlp2_weight.scales": "model--00002-of-00007.safetensors",
-    "block.18.mlp.norm.scale": "model--00002-of-00007.safetensors",
-    "block.19.attn.norm.scale": "model--00002-of-00007.safetensors",
-    "block.19.attn.out.bias": "model--00002-of-00007.safetensors",
-    "block.19.attn.out.weight": "model--00002-of-00007.safetensors",
-    "block.19.attn.qkv.bias": "model--00002-of-00007.safetensors",
-    "block.19.attn.qkv.weight": "model--00002-of-00007.safetensors",
-    "block.19.attn.sinks": "model--00002-of-00007.safetensors",
-    "block.19.mlp.gate.bias": "model--00002-of-00007.safetensors",
-    "block.19.mlp.gate.weight": "model--00002-of-00007.safetensors",
-    "block.19.mlp.mlp1_bias": "model--00002-of-00007.safetensors",
-    "block.19.mlp.mlp1_weight.blocks": "model--00002-of-00007.safetensors",
-    "block.19.mlp.mlp1_weight.scales": "model--00002-of-00007.safetensors",
-    "block.19.mlp.mlp2_bias": "model--00002-of-00007.safetensors",
-    "block.19.mlp.mlp2_weight.blocks": "model--00002-of-00007.safetensors",
-    "block.19.mlp.mlp2_weight.scales": "model--00002-of-00007.safetensors",
-    "block.19.mlp.norm.scale": "model--00002-of-00007.safetensors",
-    "block.2.attn.norm.scale": "model--00002-of-00007.safetensors",
-    "block.2.attn.out.bias": "model--00002-of-00007.safetensors",
-    "block.2.attn.out.weight": "model--00002-of-00007.safetensors",
-    "block.2.attn.qkv.bias": "model--00002-of-00007.safetensors",
-    "block.2.attn.qkv.weight": "model--00002-of-00007.safetensors",
-    "block.2.attn.sinks": "model--00002-of-00007.safetensors",
-    "block.2.mlp.gate.bias": "model--00002-of-00007.safetensors",
-    "block.2.mlp.gate.weight": "model--00002-of-00007.safetensors",
-    "block.2.mlp.mlp1_bias": "model--00002-of-00007.safetensors",
-    "block.2.mlp.mlp1_weight.blocks": "model--00003-of-00007.safetensors",
-    "block.2.mlp.mlp1_weight.scales": "model--00003-of-00007.safetensors",
-    "block.2.mlp.mlp2_bias": "model--00003-of-00007.safetensors",
-    "block.2.mlp.mlp2_weight.blocks": "model--00003-of-00007.safetensors",
-    "block.2.mlp.mlp2_weight.scales": "model--00003-of-00007.safetensors",
-    "block.2.mlp.norm.scale": "model--00003-of-00007.safetensors",
-    "block.20.attn.norm.scale": "model--00003-of-00007.safetensors",
-    "block.20.attn.out.bias": "model--00003-of-00007.safetensors",
-    "block.20.attn.out.weight": "model--00003-of-00007.safetensors",
-    "block.20.attn.qkv.bias": "model--00003-of-00007.safetensors",
-    "block.20.attn.qkv.weight": "model--00003-of-00007.safetensors",
-    "block.20.attn.sinks": "model--00003-of-00007.safetensors",
-    "block.20.mlp.gate.bias": "model--00003-of-00007.safetensors",
-    "block.20.mlp.gate.weight": "model--00003-of-00007.safetensors",
-    "block.20.mlp.mlp1_bias": "model--00003-of-00007.safetensors",
-    "block.20.mlp.mlp1_weight.blocks": "model--00003-of-00007.safetensors",
-    "block.20.mlp.mlp1_weight.scales": "model--00003-of-00007.safetensors",
-    "block.20.mlp.mlp2_bias": "model--00003-of-00007.safetensors",
-    "block.20.mlp.mlp2_weight.blocks": "model--00003-of-00007.safetensors",
-    "block.20.mlp.mlp2_weight.scales": "model--00003-of-00007.safetensors",
-    "block.20.mlp.norm.scale": "model--00003-of-00007.safetensors",
-    "block.21.attn.norm.scale": "model--00003-of-00007.safetensors",
-    "block.21.attn.out.bias": "model--00003-of-00007.safetensors",
-    "block.21.attn.out.weight": "model--00003-of-00007.safetensors",
-    "block.21.attn.qkv.bias": "model--00003-of-00007.safetensors",
-    "block.21.attn.qkv.weight": "model--00003-of-00007.safetensors",
-    "block.21.attn.sinks": "model--00003-of-00007.safetensors",
-    "block.21.mlp.gate.bias": "model--00003-of-00007.safetensors",
-    "block.21.mlp.gate.weight": "model--00003-of-00007.safetensors",
-    "block.21.mlp.mlp1_bias": "model--00003-of-00007.safetensors",
-    "block.21.mlp.mlp1_weight.blocks": "model--00003-of-00007.safetensors",
-    "block.21.mlp.mlp1_weight.scales": "model--00003-of-00007.safetensors",
-    "block.21.mlp.mlp2_bias": "model--00003-of-00007.safetensors",
-    "block.21.mlp.mlp2_weight.blocks": "model--00003-of-00007.safetensors",
-    "block.21.mlp.mlp2_weight.scales": "model--00003-of-00007.safetensors",
-    "block.21.mlp.norm.scale": "model--00003-of-00007.safetensors",
-    "block.22.attn.norm.scale": "model--00003-of-00007.safetensors",
-    "block.22.attn.out.bias": "model--00003-of-00007.safetensors",
-    "block.22.attn.out.weight": "model--00003-of-00007.safetensors",
-    "block.22.attn.qkv.bias": "model--00003-of-00007.safetensors",
-    "block.22.attn.qkv.weight": "model--00003-of-00007.safetensors",
-    "block.22.attn.sinks": "model--00003-of-00007.safetensors",
-    "block.22.mlp.gate.bias": "model--00003-of-00007.safetensors",
-    "block.22.mlp.gate.weight": "model--00003-of-00007.safetensors",
-    "block.22.mlp.mlp1_bias": "model--00003-of-00007.safetensors",
-    "block.22.mlp.mlp1_weight.blocks": "model--00003-of-00007.safetensors",
-    "block.22.mlp.mlp1_weight.scales": "model--00003-of-00007.safetensors",
-    "block.22.mlp.mlp2_bias": "model--00003-of-00007.safetensors",
-    "block.22.mlp.mlp2_weight.blocks": "model--00003-of-00007.safetensors",
-    "block.22.mlp.mlp2_weight.scales": "model--00003-of-00007.safetensors",
-    "block.22.mlp.norm.scale": "model--00003-of-00007.safetensors",
-    "block.23.attn.norm.scale": "model--00003-of-00007.safetensors",
-    "block.23.attn.out.bias": "model--00003-of-00007.safetensors",
-    "block.23.attn.out.weight": "model--00003-of-00007.safetensors",
-    "block.23.attn.qkv.bias": "model--00003-of-00007.safetensors",
-    "block.23.attn.qkv.weight": "model--00003-of-00007.safetensors",
-    "block.23.attn.sinks": "model--00003-of-00007.safetensors",
-    "block.23.mlp.gate.bias": "model--00003-of-00007.safetensors",
-    "block.23.mlp.gate.weight": "model--00003-of-00007.safetensors",
-    "block.23.mlp.mlp1_bias": "model--00003-of-00007.safetensors",
-    "block.23.mlp.mlp1_weight.blocks": "model--00003-of-00007.safetensors",
-    "block.23.mlp.mlp1_weight.scales": "model--00003-of-00007.safetensors",
-    "block.23.mlp.mlp2_bias": "model--00003-of-00007.safetensors",
-    "block.23.mlp.mlp2_weight.blocks": "model--00003-of-00007.safetensors",
-    "block.23.mlp.mlp2_weight.scales": "model--00003-of-00007.safetensors",
-    "block.23.mlp.norm.scale": "model--00003-of-00007.safetensors",
-    "block.24.attn.norm.scale": "model--00003-of-00007.safetensors",
-    "block.24.attn.out.bias": "model--00003-of-00007.safetensors",
-    "block.24.attn.out.weight": "model--00003-of-00007.safetensors",
-    "block.24.attn.qkv.bias": "model--00003-of-00007.safetensors",
-    "block.24.attn.qkv.weight": "model--00003-of-00007.safetensors",
-    "block.24.attn.sinks": "model--00003-of-00007.safetensors",
-    "block.24.mlp.gate.bias": "model--00003-of-00007.safetensors",
-    "block.24.mlp.gate.weight": "model--00003-of-00007.safetensors",
-    "block.24.mlp.mlp1_bias": "model--00003-of-00007.safetensors",
-    "block.24.mlp.mlp1_weight.blocks": "model--00003-of-00007.safetensors",
-    "block.24.mlp.mlp1_weight.scales": "model--00003-of-00007.safetensors",
-    "block.24.mlp.mlp2_bias": "model--00003-of-00007.safetensors",
-    "block.24.mlp.mlp2_weight.blocks": "model--00003-of-00007.safetensors",
-    "block.24.mlp.mlp2_weight.scales": "model--00003-of-00007.safetensors",
-    "block.24.mlp.norm.scale": "model--00003-of-00007.safetensors",
-    "block.25.attn.norm.scale": "model--00003-of-00007.safetensors",
-    "block.25.attn.out.bias": "model--00003-of-00007.safetensors",
-    "block.25.attn.out.weight": "model--00003-of-00007.safetensors",
-    "block.25.attn.qkv.bias": "model--00003-of-00007.safetensors",
-    "block.25.attn.qkv.weight": "model--00003-of-00007.safetensors",
-    "block.25.attn.sinks": "model--00003-of-00007.safetensors",
-    "block.25.mlp.gate.bias": "model--00003-of-00007.safetensors",
-    "block.25.mlp.gate.weight": "model--00003-of-00007.safetensors",
-    "block.25.mlp.mlp1_bias": "model--00003-of-00007.safetensors",
-    "block.25.mlp.mlp1_weight.blocks": "model--00004-of-00007.safetensors",
-    "block.25.mlp.mlp1_weight.scales": "model--00004-of-00007.safetensors",
-    "block.25.mlp.mlp2_bias": "model--00004-of-00007.safetensors",
-    "block.25.mlp.mlp2_weight.blocks": "model--00004-of-00007.safetensors",
-    "block.25.mlp.mlp2_weight.scales": "model--00004-of-00007.safetensors",
-    "block.25.mlp.norm.scale": "model--00004-of-00007.safetensors",
-    "block.26.attn.norm.scale": "model--00004-of-00007.safetensors",
-    "block.26.attn.out.bias": "model--00004-of-00007.safetensors",
-    "block.26.attn.out.weight": "model--00004-of-00007.safetensors",
-    "block.26.attn.qkv.bias": "model--00004-of-00007.safetensors",
-    "block.26.attn.qkv.weight": "model--00004-of-00007.safetensors",
-    "block.26.attn.sinks": "model--00004-of-00007.safetensors",
-    "block.26.mlp.gate.bias": "model--00004-of-00007.safetensors",
-    "block.26.mlp.gate.weight": "model--00004-of-00007.safetensors",
-    "block.26.mlp.mlp1_bias": "model--00004-of-00007.safetensors",
-    "block.26.mlp.mlp1_weight.blocks": "model--00004-of-00007.safetensors",
-    "block.26.mlp.mlp1_weight.scales": "model--00004-of-00007.safetensors",
-    "block.26.mlp.mlp2_bias": "model--00004-of-00007.safetensors",
-    "block.26.mlp.mlp2_weight.blocks": "model--00004-of-00007.safetensors",
-    "block.26.mlp.mlp2_weight.scales": "model--00004-of-00007.safetensors",
-    "block.26.mlp.norm.scale": "model--00004-of-00007.safetensors",
-    "block.27.attn.norm.scale": "model--00004-of-00007.safetensors",
-    "block.27.attn.out.bias": "model--00004-of-00007.safetensors",
-    "block.27.attn.out.weight": "model--00004-of-00007.safetensors",
-    "block.27.attn.qkv.bias": "model--00004-of-00007.safetensors",
-    "block.27.attn.qkv.weight": "model--00004-of-00007.safetensors",
-    "block.27.attn.sinks": "model--00004-of-00007.safetensors",
-    "block.27.mlp.gate.bias": "model--00004-of-00007.safetensors",
-    "block.27.mlp.gate.weight": "model--00004-of-00007.safetensors",
-    "block.27.mlp.mlp1_bias": "model--00004-of-00007.safetensors",
-    "block.27.mlp.mlp1_weight.blocks": "model--00004-of-00007.safetensors",
-    "block.27.mlp.mlp1_weight.scales": "model--00004-of-00007.safetensors",
-    "block.27.mlp.mlp2_bias": "model--00004-of-00007.safetensors",
-    "block.27.mlp.mlp2_weight.blocks": "model--00004-of-00007.safetensors",
-    "block.27.mlp.mlp2_weight.scales": "model--00004-of-00007.safetensors",
-    "block.27.mlp.norm.scale": "model--00004-of-00007.safetensors",
-    "block.28.attn.norm.scale": "model--00004-of-00007.safetensors",
-    "block.28.attn.out.bias": "model--00004-of-00007.safetensors",
-    "block.28.attn.out.weight": "model--00004-of-00007.safetensors",
-    "block.28.attn.qkv.bias": "model--00004-of-00007.safetensors",
-    "block.28.attn.qkv.weight": "model--00004-of-00007.safetensors",
-    "block.28.attn.sinks": "model--00004-of-00007.safetensors",
-    "block.28.mlp.gate.bias": "model--00004-of-00007.safetensors",
-    "block.28.mlp.gate.weight": "model--00004-of-00007.safetensors",
-    "block.28.mlp.mlp1_bias": "model--00004-of-00007.safetensors",
-    "block.28.mlp.mlp1_weight.blocks": "model--00004-of-00007.safetensors",
-    "block.28.mlp.mlp1_weight.scales": "model--00004-of-00007.safetensors",
-    "block.28.mlp.mlp2_bias": "model--00004-of-00007.safetensors",
-    "block.28.mlp.mlp2_weight.blocks": "model--00004-of-00007.safetensors",
-    "block.28.mlp.mlp2_weight.scales": "model--00004-of-00007.safetensors",
-    "block.28.mlp.norm.scale": "model--00004-of-00007.safetensors",
-    "block.29.attn.norm.scale": "model--00004-of-00007.safetensors",
-    "block.29.attn.out.bias": "model--00004-of-00007.safetensors",
-    "block.29.attn.out.weight": "model--00004-of-00007.safetensors",
-    "block.29.attn.qkv.bias": "model--00004-of-00007.safetensors",
-    "block.29.attn.qkv.weight": "model--00004-of-00007.safetensors",
-    "block.29.attn.sinks": "model--00004-of-00007.safetensors",
-    "block.29.mlp.gate.bias": "model--00004-of-00007.safetensors",
-    "block.29.mlp.gate.weight": "model--00004-of-00007.safetensors",
-    "block.29.mlp.mlp1_bias": "model--00004-of-00007.safetensors",
-    "block.29.mlp.mlp1_weight.blocks": "model--00004-of-00007.safetensors",
-    "block.29.mlp.mlp1_weight.scales": "model--00004-of-00007.safetensors",
-    "block.29.mlp.mlp2_bias": "model--00004-of-00007.safetensors",
-    "block.29.mlp.mlp2_weight.blocks": "model--00004-of-00007.safetensors",
-    "block.29.mlp.mlp2_weight.scales": "model--00004-of-00007.safetensors",
-    "block.29.mlp.norm.scale": "model--00004-of-00007.safetensors",
-    "block.3.attn.norm.scale": "model--00004-of-00007.safetensors",
-    "block.3.attn.out.bias": "model--00004-of-00007.safetensors",
-    "block.3.attn.out.weight": "model--00004-of-00007.safetensors",
-    "block.3.attn.qkv.bias": "model--00004-of-00007.safetensors",
-    "block.3.attn.qkv.weight": "model--00004-of-00007.safetensors",
-    "block.3.attn.sinks": "model--00004-of-00007.safetensors",
-    "block.3.mlp.gate.bias": "model--00004-of-00007.safetensors",
-    "block.3.mlp.gate.weight": "model--00004-of-00007.safetensors",
-    "block.3.mlp.mlp1_bias": "model--00004-of-00007.safetensors",
-    "block.3.mlp.mlp1_weight.blocks": "model--00004-of-00007.safetensors",
-    "block.3.mlp.mlp1_weight.scales": "model--00004-of-00007.safetensors",
-    "block.3.mlp.mlp2_bias": "model--00004-of-00007.safetensors",
-    "block.3.mlp.mlp2_weight.blocks": "model--00004-of-00007.safetensors",
-    "block.3.mlp.mlp2_weight.scales": "model--00004-of-00007.safetensors",
-    "block.3.mlp.norm.scale": "model--00004-of-00007.safetensors",
-    "block.30.attn.norm.scale": "model--00004-of-00007.safetensors",
-    "block.30.attn.out.bias": "model--00004-of-00007.safetensors",
-    "block.30.attn.out.weight": "model--00004-of-00007.safetensors",
-    "block.30.attn.qkv.bias": "model--00004-of-00007.safetensors",
-    "block.30.attn.qkv.weight": "model--00004-of-00007.safetensors",
-    "block.30.attn.sinks": "model--00004-of-00007.safetensors",
-    "block.30.mlp.gate.bias": "model--00004-of-00007.safetensors",
-    "block.30.mlp.gate.weight": "model--00004-of-00007.safetensors",
-    "block.30.mlp.mlp1_bias": "model--00004-of-00007.safetensors",
-    "block.30.mlp.mlp1_weight.blocks": "model--00005-of-00007.safetensors",
-    "block.30.mlp.mlp1_weight.scales": "model--00005-of-00007.safetensors",
-    "block.30.mlp.mlp2_bias": "model--00005-of-00007.safetensors",
-    "block.30.mlp.mlp2_weight.blocks": "model--00005-of-00007.safetensors",
-    "block.30.mlp.mlp2_weight.scales": "model--00005-of-00007.safetensors",
-    "block.30.mlp.norm.scale": "model--00005-of-00007.safetensors",
-    "block.31.attn.norm.scale": "model--00005-of-00007.safetensors",
-    "block.31.attn.out.bias": "model--00005-of-00007.safetensors",
-    "block.31.attn.out.weight": "model--00005-of-00007.safetensors",
-    "block.31.attn.qkv.bias": "model--00005-of-00007.safetensors",
-    "block.31.attn.qkv.weight": "model--00005-of-00007.safetensors",
-    "block.31.attn.sinks": "model--00005-of-00007.safetensors",
-    "block.31.mlp.gate.bias": "model--00005-of-00007.safetensors",
-    "block.31.mlp.gate.weight": "model--00005-of-00007.safetensors",
-    "block.31.mlp.mlp1_bias": "model--00005-of-00007.safetensors",
-    "block.31.mlp.mlp1_weight.blocks": "model--00005-of-00007.safetensors",
-    "block.31.mlp.mlp1_weight.scales": "model--00005-of-00007.safetensors",
-    "block.31.mlp.mlp2_bias": "model--00005-of-00007.safetensors",
-    "block.31.mlp.mlp2_weight.blocks": "model--00005-of-00007.safetensors",
-    "block.31.mlp.mlp2_weight.scales": "model--00005-of-00007.safetensors",
-    "block.31.mlp.norm.scale": "model--00005-of-00007.safetensors",
-    "block.32.attn.norm.scale": "model--00005-of-00007.safetensors",
-    "block.32.attn.out.bias": "model--00005-of-00007.safetensors",
-    "block.32.attn.out.weight": "model--00005-of-00007.safetensors",
-    "block.32.attn.qkv.bias": "model--00005-of-00007.safetensors",
-    "block.32.attn.qkv.weight": "model--00005-of-00007.safetensors",
-    "block.32.attn.sinks": "model--00005-of-00007.safetensors",
-    "block.32.mlp.gate.bias": "model--00005-of-00007.safetensors",
-    "block.32.mlp.gate.weight": "model--00005-of-00007.safetensors",
-    "block.32.mlp.mlp1_bias": "model--00005-of-00007.safetensors",
-    "block.32.mlp.mlp1_weight.blocks": "model--00005-of-00007.safetensors",
-    "block.32.mlp.mlp1_weight.scales": "model--00005-of-00007.safetensors",
-    "block.32.mlp.mlp2_bias": "model--00005-of-00007.safetensors",
-    "block.32.mlp.mlp2_weight.blocks": "model--00005-of-00007.safetensors",
-    "block.32.mlp.mlp2_weight.scales": "model--00005-of-00007.safetensors",
-    "block.32.mlp.norm.scale": "model--00005-of-00007.safetensors",
-    "block.33.attn.norm.scale": "model--00005-of-00007.safetensors",
-    "block.33.attn.out.bias": "model--00005-of-00007.safetensors",
-    "block.33.attn.out.weight": "model--00005-of-00007.safetensors",
-    "block.33.attn.qkv.bias": "model--00005-of-00007.safetensors",
-    "block.33.attn.qkv.weight": "model--00005-of-00007.safetensors",
-    "block.33.attn.sinks": "model--00005-of-00007.safetensors",
-    "block.33.mlp.gate.bias": "model--00005-of-00007.safetensors",
-    "block.33.mlp.gate.weight": "model--00005-of-00007.safetensors",
-    "block.33.mlp.mlp1_bias": "model--00005-of-00007.safetensors",
-    "block.33.mlp.mlp1_weight.blocks": "model--00005-of-00007.safetensors",
-    "block.33.mlp.mlp1_weight.scales": "model--00005-of-00007.safetensors",
-    "block.33.mlp.mlp2_bias": "model--00005-of-00007.safetensors",
-    "block.33.mlp.mlp2_weight.blocks": "model--00005-of-00007.safetensors",
-    "block.33.mlp.mlp2_weight.scales": "model--00005-of-00007.safetensors",
-    "block.33.mlp.norm.scale": "model--00005-of-00007.safetensors",
-    "block.34.attn.norm.scale": "model--00005-of-00007.safetensors",
-    "block.34.attn.out.bias": "model--00005-of-00007.safetensors",
-    "block.34.attn.out.weight": "model--00005-of-00007.safetensors",
-    "block.34.attn.qkv.bias": "model--00005-of-00007.safetensors",
-    "block.34.attn.qkv.weight": "model--00005-of-00007.safetensors",
-    "block.34.attn.sinks": "model--00005-of-00007.safetensors",
-    "block.34.mlp.gate.bias": "model--00005-of-00007.safetensors",
-    "block.34.mlp.gate.weight": "model--00005-of-00007.safetensors",
-    "block.34.mlp.mlp1_bias": "model--00005-of-00007.safetensors",
-    "block.34.mlp.mlp1_weight.blocks": "model--00005-of-00007.safetensors",
-    "block.34.mlp.mlp1_weight.scales": "model--00005-of-00007.safetensors",
-    "block.34.mlp.mlp2_bias": "model--00005-of-00007.safetensors",
-    "block.34.mlp.mlp2_weight.blocks": "model--00005-of-00007.safetensors",
-    "block.34.mlp.mlp2_weight.scales": "model--00005-of-00007.safetensors",
-    "block.34.mlp.norm.scale": "model--00005-of-00007.safetensors",
-    "block.35.attn.norm.scale": "model--00005-of-00007.safetensors",
-    "block.35.attn.out.bias": "model--00005-of-00007.safetensors",
-    "block.35.attn.out.weight": "model--00005-of-00007.safetensors",
-    "block.35.attn.qkv.bias": "model--00005-of-00007.safetensors",
-    "block.35.attn.qkv.weight": "model--00005-of-00007.safetensors",
-    "block.35.attn.sinks": "model--00005-of-00007.safetensors",
-    "block.35.mlp.gate.bias": "model--00005-of-00007.safetensors",
-    "block.35.mlp.gate.weight": "model--00005-of-00007.safetensors",
-    "block.35.mlp.mlp1_bias": "model--00005-of-00007.safetensors",
-    "block.35.mlp.mlp1_weight.blocks": "model--00005-of-00007.safetensors",
-    "block.35.mlp.mlp1_weight.scales": "model--00005-of-00007.safetensors",
-    "block.35.mlp.mlp2_bias": "model--00005-of-00007.safetensors",
-    "block.35.mlp.mlp2_weight.blocks": "model--00005-of-00007.safetensors",
-    "block.35.mlp.mlp2_weight.scales": "model--00005-of-00007.safetensors",
-    "block.35.mlp.norm.scale": "model--00005-of-00007.safetensors",
-    "block.4.attn.norm.scale": "model--00005-of-00007.safetensors",
-    "block.4.attn.out.bias": "model--00005-of-00007.safetensors",
-    "block.4.attn.out.weight": "model--00005-of-00007.safetensors",
-    "block.4.attn.qkv.bias": "model--00005-of-00007.safetensors",
-    "block.4.attn.qkv.weight": "model--00005-of-00007.safetensors",
-    "block.4.attn.sinks": "model--00005-of-00007.safetensors",
- "block.4.mlp.gate.bias": "model--00005-of-00007.safetensors", - "block.4.mlp.gate.weight": "model--00005-of-00007.safetensors", - "block.4.mlp.mlp1_bias": "model--00005-of-00007.safetensors", - "block.4.mlp.mlp1_weight.blocks": "model--00006-of-00007.safetensors", - "block.4.mlp.mlp1_weight.scales": "model--00006-of-00007.safetensors", - "block.4.mlp.mlp2_bias": "model--00006-of-00007.safetensors", - "block.4.mlp.mlp2_weight.blocks": "model--00006-of-00007.safetensors", - "block.4.mlp.mlp2_weight.scales": "model--00006-of-00007.safetensors", - "block.4.mlp.norm.scale": "model--00006-of-00007.safetensors", - "block.5.attn.norm.scale": "model--00006-of-00007.safetensors", - "block.5.attn.out.bias": "model--00006-of-00007.safetensors", - "block.5.attn.out.weight": "model--00006-of-00007.safetensors", - "block.5.attn.qkv.bias": "model--00006-of-00007.safetensors", - "block.5.attn.qkv.weight": "model--00006-of-00007.safetensors", - "block.5.attn.sinks": "model--00006-of-00007.safetensors", - "block.5.mlp.gate.bias": "model--00006-of-00007.safetensors", - "block.5.mlp.gate.weight": "model--00006-of-00007.safetensors", - "block.5.mlp.mlp1_bias": "model--00006-of-00007.safetensors", - "block.5.mlp.mlp1_weight.blocks": "model--00006-of-00007.safetensors", - "block.5.mlp.mlp1_weight.scales": "model--00006-of-00007.safetensors", - "block.5.mlp.mlp2_bias": "model--00006-of-00007.safetensors", - "block.5.mlp.mlp2_weight.blocks": "model--00006-of-00007.safetensors", - "block.5.mlp.mlp2_weight.scales": "model--00006-of-00007.safetensors", - "block.5.mlp.norm.scale": "model--00006-of-00007.safetensors", - "block.6.attn.norm.scale": "model--00006-of-00007.safetensors", - "block.6.attn.out.bias": "model--00006-of-00007.safetensors", - "block.6.attn.out.weight": "model--00006-of-00007.safetensors", - "block.6.attn.qkv.bias": "model--00006-of-00007.safetensors", - "block.6.attn.qkv.weight": "model--00006-of-00007.safetensors", - "block.6.attn.sinks": 
"model--00006-of-00007.safetensors", - "block.6.mlp.gate.bias": "model--00006-of-00007.safetensors", - "block.6.mlp.gate.weight": "model--00006-of-00007.safetensors", - "block.6.mlp.mlp1_bias": "model--00006-of-00007.safetensors", - "block.6.mlp.mlp1_weight.blocks": "model--00006-of-00007.safetensors", - "block.6.mlp.mlp1_weight.scales": "model--00006-of-00007.safetensors", - "block.6.mlp.mlp2_bias": "model--00006-of-00007.safetensors", - "block.6.mlp.mlp2_weight.blocks": "model--00006-of-00007.safetensors", - "block.6.mlp.mlp2_weight.scales": "model--00006-of-00007.safetensors", - "block.6.mlp.norm.scale": "model--00006-of-00007.safetensors", - "block.7.attn.norm.scale": "model--00006-of-00007.safetensors", - "block.7.attn.out.bias": "model--00006-of-00007.safetensors", - "block.7.attn.out.weight": "model--00006-of-00007.safetensors", - "block.7.attn.qkv.bias": "model--00006-of-00007.safetensors", - "block.7.attn.qkv.weight": "model--00006-of-00007.safetensors", - "block.7.attn.sinks": "model--00006-of-00007.safetensors", - "block.7.mlp.gate.bias": "model--00006-of-00007.safetensors", - "block.7.mlp.gate.weight": "model--00006-of-00007.safetensors", - "block.7.mlp.mlp1_bias": "model--00006-of-00007.safetensors", - "block.7.mlp.mlp1_weight.blocks": "model--00006-of-00007.safetensors", - "block.7.mlp.mlp1_weight.scales": "model--00006-of-00007.safetensors", - "block.7.mlp.mlp2_bias": "model--00006-of-00007.safetensors", - "block.7.mlp.mlp2_weight.blocks": "model--00006-of-00007.safetensors", - "block.7.mlp.mlp2_weight.scales": "model--00006-of-00007.safetensors", - "block.7.mlp.norm.scale": "model--00006-of-00007.safetensors", - "block.8.attn.norm.scale": "model--00006-of-00007.safetensors", - "block.8.attn.out.bias": "model--00006-of-00007.safetensors", - "block.8.attn.out.weight": "model--00006-of-00007.safetensors", - "block.8.attn.qkv.bias": "model--00006-of-00007.safetensors", - "block.8.attn.qkv.weight": "model--00006-of-00007.safetensors", - 
"block.8.attn.sinks": "model--00006-of-00007.safetensors", - "block.8.mlp.gate.bias": "model--00006-of-00007.safetensors", - "block.8.mlp.gate.weight": "model--00006-of-00007.safetensors", - "block.8.mlp.mlp1_bias": "model--00006-of-00007.safetensors", - "block.8.mlp.mlp1_weight.blocks": "model--00006-of-00007.safetensors", - "block.8.mlp.mlp1_weight.scales": "model--00006-of-00007.safetensors", - "block.8.mlp.mlp2_bias": "model--00006-of-00007.safetensors", - "block.8.mlp.mlp2_weight.blocks": "model--00006-of-00007.safetensors", - "block.8.mlp.mlp2_weight.scales": "model--00006-of-00007.safetensors", - "block.8.mlp.norm.scale": "model--00006-of-00007.safetensors", - "block.9.attn.norm.scale": "model--00006-of-00007.safetensors", - "block.9.attn.out.bias": "model--00006-of-00007.safetensors", - "block.9.attn.out.weight": "model--00006-of-00007.safetensors", - "block.9.attn.qkv.bias": "model--00006-of-00007.safetensors", - "block.9.attn.qkv.weight": "model--00006-of-00007.safetensors", - "block.9.attn.sinks": "model--00006-of-00007.safetensors", - "block.9.mlp.gate.bias": "model--00006-of-00007.safetensors", - "block.9.mlp.gate.weight": "model--00006-of-00007.safetensors", - "block.9.mlp.mlp1_bias": "model--00006-of-00007.safetensors", - "block.9.mlp.mlp1_weight.blocks": "model--00006-of-00007.safetensors", - "block.9.mlp.mlp1_weight.scales": "model--00006-of-00007.safetensors", - "block.9.mlp.mlp2_bias": "model--00006-of-00007.safetensors", - "block.9.mlp.mlp2_weight.blocks": "model--00006-of-00007.safetensors", - "block.9.mlp.mlp2_weight.scales": "model--00006-of-00007.safetensors", - "block.9.mlp.norm.scale": "model--00006-of-00007.safetensors", - "embedding.weight": "model--00007-of-00007.safetensors", - "norm.scale": "model--00007-of-00007.safetensors", - "unembedding.weight": "model--00007-of-00007.safetensors" - } -} \ No newline at end of file diff --git a/special_tokens_map.json b/special_tokens_map.json index 
73bd12e55e2004cdfff088f85092b39dba2ccdd0..6274cc1bd159aa75de771315558e5cac7dd8bea0 100644 --- a/special_tokens_map.json +++ b/special_tokens_map.json @@ -1,5 +1,23 @@ { - "bos_token": "<|startoftext|>", - "eos_token": "<|return|>", - "pad_token": "<|endoftext|>" + "bos_token": { + "content": "<|startoftext|>", + "lstrip": false, + "normalized": false, + "rstrip": false, + "single_word": false + }, + "eos_token": { + "content": "<|return|>", + "lstrip": false, + "normalized": false, + "rstrip": false, + "single_word": false + }, + "pad_token": { + "content": "<|endoftext|>", + "lstrip": false, + "normalized": false, + "rstrip": false, + "single_word": false + } }
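
The `special_tokens_map.json` change above replaces bare token strings with AddedToken-style objects: the literal token moves into a `content` field, joined by `lstrip`/`normalized`/`rstrip`/`single_word` flags. Tokenizer loaders accept both serialized forms, and both carry the same token content. A minimal sketch of that equivalence, using only the standard library (the `token_content` helper is hypothetical, not part of any tokenizer API):

```python
import json

def token_content(entry):
    """Return the literal token string from either serialized form:
    a bare string, or an AddedToken-style dict with a "content" field."""
    if isinstance(entry, str):
        return entry
    return entry["content"]

# Old form: each special token is a bare string.
old_form = json.loads(
    '{"bos_token": "<|startoftext|>",'
    ' "eos_token": "<|return|>",'
    ' "pad_token": "<|endoftext|>"}'
)

# New form: each special token is a dict carrying the same content
# plus explicit handling flags (all false here).
new_form = json.loads('''{
  "bos_token": {"content": "<|startoftext|>", "lstrip": false,
                "normalized": false, "rstrip": false, "single_word": false},
  "eos_token": {"content": "<|return|>", "lstrip": false,
                "normalized": false, "rstrip": false, "single_word": false},
  "pad_token": {"content": "<|endoftext|>", "lstrip": false,
                "normalized": false, "rstrip": false, "single_word": false}
}''')

# The two serializations describe identical token strings.
for name in old_form:
    assert token_content(old_form[name]) == token_content(new_form[name])
```

The flags default to `false` in the new form, so the diff changes only the serialization, not which strings act as `bos`/`eos`/`pad`.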