---
base_model:
- Delta-Vector/Francois-PE-V2-Huali-12B
- Nitral-AI/Mag-Mell-Reasoner-12B
- Dans-DiscountModels/12b-mn-dans-reasoning-test-5
- CreitinGameplays/Mistral-Nemo-12B-R1-v0.2
- BeaverAI/MN-2407-DSK-QwQify-v0.1-12B
library_name: transformers
tags:
- mergekit
- merge
---
# Mistral-qwq-12b-merge

---

Strange model.

---

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method, with BeaverAI/MN-2407-DSK-QwQify-v0.1-12B as the base model.

### Models Merged

The following models were included in the merge:

* Delta-Vector/Francois-PE-V2-Huali-12B
* Nitral-AI/Mag-Mell-Reasoner-12B
* Dans-DiscountModels/12b-mn-dans-reasoning-test-5
* CreitinGameplays/Mistral-Nemo-12B-R1-v0.2

### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: dare_ties
base_model: BeaverAI/MN-2407-DSK-QwQify-v0.1-12B
models:
  - model: BeaverAI/MN-2407-DSK-QwQify-v0.1-12B
    parameters:
      weight: 0.225
  - model: CreitinGameplays/Mistral-Nemo-12B-R1-v0.2
    parameters:
      weight: 0.225
  - model: Nitral-AI/Mag-Mell-Reasoner-12B
    parameters:
      weight: 0.225
  - model: Dans-DiscountModels/12b-mn-dans-reasoning-test-5
    parameters:
      weight: 0.225
  - model: Delta-Vector/Francois-PE-V2-Huali-12B
    parameters:
      weight: 0.1
parameters:
  density: 0.35
```
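For intuition, DARE's core drop-and-rescale step can be sketched on a toy parameter vector. This is a simplified illustration, not mergekit's actual implementation: `density: 0.35` in the config above means each model's task vector (its delta from the base) keeps roughly 35% of its elements at random, with the survivors rescaled by `1/density` so the expected update is unchanged.

```python
import random

def dare_delta(finetuned, base, density, rnd):
    """Drop-And-REscale a task vector (toy sketch).

    Each element of (finetuned - base) is kept with probability
    `density` and rescaled by 1/density; the rest are zeroed.
    The expected value of each rescaled element equals the
    original delta, so the sparse update is unbiased.
    """
    out = []
    for f, b in zip(finetuned, base):
        delta = f - b
        if rnd.random() < density:
            out.append(delta / density)
        else:
            out.append(0.0)
    return out

rnd = random.Random(0)
n = 100_000
base = [0.0] * n
finetuned = [1.0] * n  # every parameter shifted by +1

sparse = dare_delta(finetuned, base, density=0.35, rnd=rnd)

kept = sum(1 for d in sparse if d != 0.0) / n  # fraction kept, close to 0.35
mean = sum(sparse) / n                         # mean delta, close to 1.0
```

In the full DARE TIES method, the sparsified deltas from each model are then sign-elected and combined TIES-style (weighted by the `weight` values above) before being added back onto the base model.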