---
license: apache-2.0
library_name: transformers
language:
- en
tags:
- chat
- conversational
base_model:
- maldv/Qwentile2.5-32B-Instruct
- a-m-team/AM-Thinking-v1
- nvidia/OpenCodeReasoning-Nemotron-32B
- maldv/Loqwqtus2.5-32B-Instruct
- trashpanda-org/QwQ-32B-Snowdrop-v0
- ArliAI/QwQ-32B-ArliAI-RpR-v3
pipeline_tag: text-generation
---

[GGUF](https://huggingface.co/mradermacher/QwentileLambda2.5-32B-Instruct-GGUF) [iMat](https://huggingface.co/mradermacher/QwentileLambda2.5-32B-Instruct-i1-GGUF)
# Qwentile Λ 2.5 32B Instruct
Qwentile Λ 2.5 32B Instruct is a *normalized denoised Fourier interpolation* of the following models:
```yaml
output_base_model: "maldv/Qwentile2.5-32B-Instruct"
output_dtype: "bfloat16"
finetune_merge:
- { "model": "a-m-team/AM-Thinking-v1", "base": "Qwen/Qwen2.5-32B", "alpha": 0.9 }
- { "model": "nvidia/OpenCodeReasoning-Nemotron-32B", "base": "Qwen/Qwen2.5-32B", "alpha": 0.8, "is_input": true}
- { "model": "maldv/Loqwqtus2.5-32B-Instruct", "base": "Qwen/Qwen2.5-32B", "alpha": 0.9 }
- { "model": "trashpanda-org/QwQ-32B-Snowdrop-v0", "base": "Qwen/Qwen2.5-32B", "alpha": 0.9 }
- { "model": "ArliAI/QwQ-32B-ArliAI-RpR-v3", "base": "Qwen/Qwen2.5-32B", "alpha": 0.8 }
```
In other words, each finetune's delta against its base is warped into signal space, denoised, and interpolated with the others, and the combined delta is then jammed back on top of the output base model (in this case Qwentile2.5-32B-Instruct), except that the input layer comes from OpenCodeReasoning-Nemotron-32B (the `is_input` flag above). A rough sketch of the idea follows.
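To make the recipe concrete, here is a minimal, hypothetical sketch of what such a merge could look like for a single weight tensor. This is *not* the actual merge script: the function name `fourier_merge`, the quantile-style magnitude threshold used for "denoising", and the alpha normalization are all illustrative assumptions.

```python
import torch

def fourier_merge(base: torch.Tensor,
                  output_base: torch.Tensor,
                  finetunes: list[tuple[torch.Tensor, float]],
                  keep: float = 0.98) -> torch.Tensor:
    """Merge a single 2-D weight tensor from several finetunes of `base`
    onto `output_base`, interpolating the deltas in frequency space."""
    acc = torch.zeros(base.shape, dtype=torch.complex64)
    total = 0.0
    for weights, alpha in finetunes:
        delta = (weights - base).to(torch.float32)  # task vector relative to the shared base
        spec = torch.fft.fft2(delta)                # warp the delta into signal space
        # "Denoise": drop the lowest-magnitude frequency components.
        mag = spec.abs().flatten()
        k = max(1, int((1.0 - keep) * mag.numel()))
        cutoff = mag.kthvalue(k).values
        spec = torch.where(spec.abs() >= cutoff, spec, torch.zeros_like(spec))
        acc += alpha * spec                         # alpha-weighted interpolation
        total += alpha
    merged_delta = torch.fft.ifft2(acc / total).real  # normalize, back to weight space
    return (output_base.to(torch.float32) + merged_delta).to(torch.bfloat16)
```

In the actual merge, every tensor in the state dict would be processed along these lines, with `is_input` selecting which finetune supplies the input (embedding) layer.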
### What is this?
The latest in my series of Qwen 2.5 merges. Some really good models have been released recently, so I folded them in with Qwentile as the base. It should exhibit stronger reasoning, and perhaps even some code ability. I was satisfied with QReasoner2.5-32B-Instruct for advanced reasoning, but I suspect this will be an improvement.
### A `<think>` model?
No. Oddly enough, given its lineage I thought for sure it would be a thinking model, but instead it blends its reasoning into its creative output almost seamlessly. The combination is quite powerful in my initial tests.
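The model should load like any other `transformers` causal LM. A minimal usage sketch follows; it is untested against this exact repo, and the generation settings are illustrative rather than tuned recommendations.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "maldv/QwentileLambda2.5-32B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Plan, then write, a short story about a clockmaker."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generation settings here are placeholders, not tuned recommendations.
output = model.generate(input_ids, max_new_tokens=1024, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0, input_ids.shape[-1]:], skip_special_tokens=True))
```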
## Citation
If you find our work helpful, feel free to cite us.
```bibtex
@misc{qwentile-lambda-2.5-32b-instruct,
  title = {Qwentile Λ 2.5 32B Instruct},
  url = {https://huggingface.co/maldv/QwentileLambda2.5-32B-Instruct},
  author = {Praxis Maldevide},
  month = {May},
  year = {2025}
}
```