---
base_model:
- CultriX/Qwen2.5-14B-ReasoningMerge
- CultriX/Qwen2.5-14B-CoreGeneralist
library_name: transformers
tags:
- mergekit
- merge
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.

### Models Merged

The following models were included in the merge:
* [CultriX/Qwen2.5-14B-ReasoningMerge](https://huggingface.co/CultriX/Qwen2.5-14B-ReasoningMerge)
* [CultriX/Qwen2.5-14B-CoreGeneralist](https://huggingface.co/CultriX/Qwen2.5-14B-CoreGeneralist)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: CultriX/Qwen2.5-14B-CoreGeneralist
merge_method: slerp
dtype: bfloat16
parameters:
  # Uniform interpolation: change the values below to favor one model for certain layer types.
  t:
    - filter: self_attn
      value: 0.5
    - filter: mlp
      value: 0.5
    - value: 0.5
models:
  - model: CultriX/Qwen2.5-14B-CoreGeneralist
  - model: CultriX/Qwen2.5-14B-ReasoningMerge
tokenizer_source: CultriX/Qwen2.5-14B-CoreGeneralist
chat_template: chatml
name: Qwen2.5-14B-CoreReasoning-Slerp
```
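For intuition, SLERP interpolates along the great-circle arc between two weight vectors rather than a straight line, which preserves the norm of the blended weights better than plain averaging. The snippet below is a simplified, self-contained sketch of that idea (it is not mergekit's actual implementation, which operates tensor-by-tensor with its own numerics):

```python
import numpy as np

def slerp(t, a, b, eps=1e-8):
    """Spherical linear interpolation between two weight tensors.

    Simplified sketch: flatten, measure the angle between the two
    weight vectors, and interpolate along the arc. Falls back to
    linear interpolation when the vectors are nearly parallel.
    """
    a_flat, b_flat = a.ravel(), b.ravel()
    a_norm = a_flat / (np.linalg.norm(a_flat) + eps)
    b_norm = b_flat / (np.linalg.norm(b_flat) + eps)
    dot = np.clip(np.dot(a_norm, b_norm), -1.0, 1.0)
    omega = np.arccos(dot)  # angle between the two weight vectors
    if np.abs(np.sin(omega)) < eps:
        # Nearly parallel vectors: linear interpolation is numerically safer.
        out = (1.0 - t) * a_flat + t * b_flat
    else:
        out = (np.sin((1.0 - t) * omega) / np.sin(omega)) * a_flat \
            + (np.sin(t * omega) / np.sin(omega)) * b_flat
    return out.reshape(a.shape)

# With t = 0.5, as in the config below, both models contribute equally
# for self_attn, mlp, and all remaining parameters.
a = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.array([[0.0, 1.0], [1.0, 0.0]])
mid = slerp(0.5, a, b)
```

Setting `t` closer to 0 biases the result toward the base model (CoreGeneralist here); closer to 1 favors the other model, and the per-`filter` entries let you choose different values for attention versus MLP layers.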