Tags: GGUF · Inference Endpoints · conversational

A GGUF F16 quantization of an experimental finetune of Qwen 1.5B, trained first on Kalomaze Opus Instruct and then on Claude C2 logs. The model may produce NSFW output.

Built with Axolotl
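
Below is a minimal local-inference sketch using llama-cpp-python; it is not an official snippet from this repository. The GGUF filename pattern and the context size are assumptions, so check the repository's file list and adjust as needed.

```python
# Sketch: run the GGUF locally with llama-cpp-python (requires huggingface-hub).
# The filename pattern below is an assumption; verify it against the repo files.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="NewEden/Qwen-1.5B-Claude-F16-GGUF",
    filename="*.gguf",   # assumed pattern; replace with the actual F16 GGUF file name
    n_ctx=4096,          # assumed context window
)

response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Give me a two-sentence summary of GGUF."}
    ],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```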

Format: GGUF
Model size: 1.54B params
Architecture: qwen2
Precision: 16-bit

Inference Providers
This model is not currently available via any of the supported Inference Providers, and it cannot be deployed to the HF Inference API because it has no library tag.
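
Since the model is not served by any Inference Provider, it has to be downloaded and run locally. A small sketch using huggingface_hub follows; listing the repo files first avoids guessing the GGUF filename (which is not stated on this card).

```python
# Sketch: fetch the GGUF file for use with llama.cpp or another local runtime.
from huggingface_hub import list_repo_files, hf_hub_download

repo_id = "NewEden/Qwen-1.5B-Claude-F16-GGUF"
gguf_files = [f for f in list_repo_files(repo_id) if f.endswith(".gguf")]
local_path = hf_hub_download(repo_id=repo_id, filename=gguf_files[0])
print(local_path)  # pass this path to llama.cpp, llama-cpp-python, etc.
```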

Dataset used to train NewEden/Qwen-1.5B-Claude-F16-GGUF