---
base_model:
- nbeerbower/Yanfei-v2-Qwen3-32B
base_model_relation: quantized
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
datasets:
- nbeerbower/YanfeiMix-DPO
---

## Quantized using the default exllamav3 (0.0.3) quantization process

- Original model: https://huggingface.co/nbeerbower/Yanfei-v2-Qwen3-32B
- exllamav3: https://github.com/turboderp-org/exllamav3

---

# Yanfei-v2-Qwen3-32B

A repair of Yanfei-Qwen3-32B by [TIES](https://arxiv.org/abs/2306.01708) merging huihui-ai/Qwen3-32B-abliterated, Zhiming-Qwen3-32B, and Menghua-Qwen3-32B using [mergekit](https://github.com/cg123/mergekit).
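TIES merging works in three steps per tensor: trim each fine-tuned model's delta from the base (keeping only the largest-magnitude fraction given by `density`), elect a per-parameter sign by total magnitude, then average only the deltas that agree with the elected sign. A minimal single-tensor NumPy sketch under those assumptions (the function name and simplified trimming are illustrative, not mergekit's actual implementation):

```python
import numpy as np

def ties_merge(base, finetuned, density=1.0):
    """Illustrative TIES merge (Yadav et al., 2023) for one weight tensor.

    base: base-model weights; finetuned: list of fine-tuned weight arrays
    of the same shape; density: fraction of largest-magnitude delta
    entries kept per model.
    """
    deltas = [ft - base for ft in finetuned]

    # 1. Trim: zero out all but the top-`density` fraction of each delta.
    trimmed = []
    for d in deltas:
        k = int(round(density * d.size))
        if k < d.size:
            thresh = np.sort(np.abs(d).ravel())[-k] if k > 0 else np.inf
            d = np.where(np.abs(d) >= thresh, d, 0.0)
        trimmed.append(d)

    # 2. Elect sign: per parameter, the sign of the summed deltas wins.
    total = np.sum(trimmed, axis=0)
    sign = np.where(total >= 0, 1.0, -1.0)

    # 3. Disjoint mean: average only deltas agreeing with the elected sign.
    agree = [np.where(np.sign(d) == sign, d, 0.0) for d in trimmed]
    counts = np.sum([np.where(np.sign(d) == sign, 1.0, 0.0)
                     for d in trimmed], axis=0)
    merged_delta = np.sum(agree, axis=0) / np.maximum(counts, 1.0)

    return base + merged_delta
```

With `weight: 1` and `density: 1`, as in the configuration below, no trimming occurs and the merge reduces to the sign-elected disjoint mean of the full deltas.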

## Sponsorship

This model was made possible with compute support from [Nectar AI](https://nectar.ai). Thank you! ❤️

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: ./Zhiming-Qwen3-32B-merged
    parameters:
      weight: 1
      density: 1
  - model: ./Menghua-Qwen3-32B-merged
    parameters:
      weight: 1
      density: 1
  - model: huihui-ai/Qwen3-32B-abliterated
    parameters:
      weight: 1
      density: 1
merge_method: ties
base_model: nbeerbower/Yanfei-Qwen3-32B
parameters:
  weight: 1
  density: 1
  normalize: true
  int8_mask: true
dtype: bfloat16
```