Qwen2.5-14B-1M-YOYO-V3
This time, I'm not only releasing the model but also sharing some model-merging insights that may be even more valuable than the model itself.
Let’s start by looking at the initial merge configuration (YAML):
```yaml
merge_method: model_stock
base_model: Qwen/Qwen2.5-14B
models:
  - model: Qwen/Qwen2.5-14B-Instruct
  - model: Qwen/Qwen2.5-14B-Instruct-1M
dtype: bfloat16
```
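As an aside, any of the configurations in this post can be run with mergekit. Below is a minimal sketch using mergekit's Python entry point (assuming a recent mergekit release; the config and output paths are placeholders), and the `mergekit-yaml` CLI accepts the same file:

```python
# Minimal sketch: run a mergekit YAML config programmatically.
# Assumes `mergekit` is installed (pip install mergekit); paths are placeholders.
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG_PATH = "model_stock.yaml"      # the YAML shown above, saved to disk
OUTPUT_PATH = "./merged-qwen2.5-14b"  # where the merged weights will be written

with open(CONFIG_PATH, "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    OUTPUT_PATH,
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use a GPU if one is available
        copy_tokenizer=True,             # copy the tokenizer into the output
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```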
At first glance, this looks fine. In practice, however, the merged model occasionally produces uncontrollable outputs, most likely because of the large divergence between the instruction-tuned models and the base model.
To address this, I first tried directly adding a fine-tuned model with smaller divergence from the base, such as Virtuoso-Small-v2. This produced Qwen2.5-14B-YOYO-latest-V2:
```yaml
merge_method: model_stock
base_model: Qwen/Qwen2.5-14B
models:
  - model: Qwen/Qwen2.5-14B-Instruct
  - model: Qwen/Qwen2.5-14B-Instruct-1M
  - model: arcee-ai/Virtuoso-Small-v2
dtype: bfloat16
name: Qwen2.5-14B-YOYO-latest-V2
```
Although this addressed the uncontrollable output issue, the model still lacked stability.
Through practical experimentation, I found that first merging the "high-divergence" models (those far from the base) into the "low-divergence" models (those close to the base) with the DELLA method, and then applying Model Stock, produces a model that is not only more stable but also performs better.
Key models used:

1. Low-divergence, high-performance models:
   - Virtuoso-Small-v2
   - Blossom-V6-14B
2. High-divergence, instruction-focused models:
   - Qwen2.5-14B-Instruct
   - Qwen2.5-14B-Instruct-1M
DELLA Merge Configurations:
```yaml
models:
  - model: Qwen/Qwen2.5-14B-Instruct
    parameters:
      density: 1
      weight: 1
      lambda: 0.9
merge_method: della
base_model: arcee-ai/Virtuoso-Small-v2
parameters:
  density: 1
  weight: 1
  lambda: 0.9
  normalize: true
  int8_mask: true
dtype: bfloat16
tokenizer_source: base
name: Qwen2.5-14B-YOYO-della1
```
```yaml
models:
  - model: Qwen/Qwen2.5-14B-Instruct-1M
    parameters:
      density: 1
      weight: 1
      lambda: 0.9
merge_method: della
base_model: arcee-ai/Virtuoso-Small-v2
parameters:
  density: 1
  weight: 1
  lambda: 0.9
  normalize: true
  int8_mask: true
dtype: bfloat16
tokenizer_source: base
name: Qwen2.5-14B-YOYO-della2
```
```yaml
models:
  - model: Qwen/Qwen2.5-14B-Instruct
    parameters:
      density: 1
      weight: 1
      lambda: 0.9
merge_method: della
base_model: Azure99/Blossom-V6-14B
parameters:
  density: 1
  weight: 1
  lambda: 0.9
  normalize: true
  int8_mask: true
dtype: bfloat16
tokenizer_source: base
name: Qwen2.5-14B-YOYO-della3
```
```yaml
models:
  - model: Qwen/Qwen2.5-14B-Instruct-1M
    parameters:
      density: 1
      weight: 1
      lambda: 0.9
merge_method: della
base_model: Azure99/Blossom-V6-14B
parameters:
  density: 1
  weight: 1
  lambda: 0.9
  normalize: true
  int8_mask: true
dtype: bfloat16
tokenizer_source: base
name: Qwen2.5-14B-YOYO-della4
```
This approach yielded four variants:
- Qwen2.5-14B-YOYO-della1
- Qwen2.5-14B-YOYO-della2
- Qwen2.5-14B-YOYO-della3
- Qwen2.5-14B-YOYO-della4
Base Model:
To enhance the base model's roleplay and creative-writing capabilities, I applied the same strategy:
```yaml
models:
  - model: EVA-UNIT-01/EVA-Qwen2.5-14B-v0.2
    parameters:
      density: 1
      weight: 1
      lambda: 0.9
merge_method: della
base_model: Qwen/Qwen2.5-14B
parameters:
  density: 1
  weight: 1
  lambda: 0.9
  normalize: true
  int8_mask: true
dtype: bfloat16
tokenizer_source: base
name: EVA-Qwen2.5-14B-base
```
Next, I extended the context length using the SCE method:
```yaml
merge_method: sce
models:
  - model: EVA-Qwen2.5-14B-base
base_model: Qwen/Qwen2.5-14B-Instruct-1M
parameters:
  select_topk: 1
  normalize: true
  int8_mask: true
dtype: bfloat16
tokenizer_source: base
name: Qwen2.5-14B-pro
```
Final Merge Step:
```yaml
merge_method: model_stock
base_model: Qwen2.5-14B-pro
models:
  - model: Qwen2.5-14B-YOYO-della1
  - model: Qwen2.5-14B-YOYO-della2
  - model: Qwen2.5-14B-YOYO-della3
  - model: Qwen2.5-14B-YOYO-della4
parameters:
  int8_mask: true
  normalize: true
dtype: bfloat16
tokenizer_source: base
name: Qwen2.5-14B-1M-YOYO-V3
```
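Once the final merge has been produced, the output can be smoke-tested like any other Qwen2.5 checkpoint. A minimal sketch with transformers (the local path is a placeholder for wherever mergekit wrote the merged weights, or the model's repo id):

```python
# Minimal sketch: smoke-test the merged model with transformers.
# The path below is a placeholder for the mergekit output directory.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./Qwen2.5-14B-1M-YOYO-V3"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype="auto",   # keep the bfloat16 weights produced by the merge
    device_map="auto",
)

messages = [{"role": "user", "content": "Briefly introduce yourself."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```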
I hope this helps!