EXL3 quantization of OpenReasoning-Nemotron-1.5B, 4 bits per weight.
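
As a minimal sketch of getting started, the quantized weights can be fetched with huggingface_hub and then loaded by an EXL3-capable backend such as ExLlamaV3. The repo id below matches this page; the local directory name is an illustrative assumption.

```python
# Sketch: download the EXL3 weights locally, then point an EXL3-capable
# loader (e.g. ExLlamaV3) at the resulting directory.
from huggingface_hub import snapshot_download

# "local_dir" is an example path, not prescribed by this repo.
model_dir = snapshot_download(
    repo_id="isogen/OpenReasoning-Nemotron-1.5B-exl3-4bpw",
    local_dir="OpenReasoning-Nemotron-1.5B-exl3-4bpw",
)
print(f"EXL3 weights downloaded to: {model_dir}")
```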


Model tree for isogen/OpenReasoning-Nemotron-1.5B-exl3-4bpw

Base model: Qwen/Qwen2.5-1.5B
Quantized: this model