---
license: cc-by-4.0
language:
- en
base_model:
- nvidia/OpenReasoning-Nemotron-1.5B
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
- math
- code
- nvidia
---
# **OpenReasoning-Nemotron-1.5B-F32-GGUF**
> OpenReasoning-Nemotron-1.5B is a large language model (LLM) derived from Qwen2.5-1.5B-Instruct (the reference model). It is a reasoning model post-trained to generate solutions for math, code, and science problems, and we evaluated it with up to 64K output tokens. OpenReasoning-Nemotron models can be used in a "heavy" mode by starting multiple parallel generations and combining them via Generative Solution Selection (GenSelect). To add this skill, we follow the original GenSelect training pipeline, except that we do not train on the selection summary but on the full reasoning trace of DeepSeek R1 0528 671B instead. We train the models to select the best solution only for math problems, yet we find that this capability directly generalizes to code and science questions. With this "heavy" GenSelect inference mode, the OpenReasoning-Nemotron-32B model surpasses O3 (High) on math and coding benchmarks.
## Model Files
| Quant Type | File Size | Filename |
|------------|-----------|----------|
| F32 | 6.18 GB | OpenReasoning-Nemotron-1.5B.F32.gguf |
| F16 | 3.09 GB | OpenReasoning-Nemotron-1.5B.F16.gguf |
| BF16 | 3.09 GB | OpenReasoning-Nemotron-1.5B.BF16.gguf |
| Q8_0 | 1.65 GB | OpenReasoning-Nemotron-1.5B.Q8_0.gguf |
| Q6_K | 1.27 GB | OpenReasoning-Nemotron-1.5B.Q6_K.gguf |
| Q5_K_M | 1.13 GB | OpenReasoning-Nemotron-1.5B.Q5_K_M.gguf |
| Q5_K_S | 1.1 GB | OpenReasoning-Nemotron-1.5B.Q5_K_S.gguf |
| Q4_K_M | 986 MB | OpenReasoning-Nemotron-1.5B.Q4_K_M.gguf |
| Q4_K_S | 940 MB | OpenReasoning-Nemotron-1.5B.Q4_K_S.gguf |
| Q3_K_L | 880 MB | OpenReasoning-Nemotron-1.5B.Q3_K_L.gguf |
| Q3_K_M | 824 MB | OpenReasoning-Nemotron-1.5B.Q3_K_M.gguf |
| Q3_K_S | 761 MB | OpenReasoning-Nemotron-1.5B.Q3_K_S.gguf |
| Q2_K | 676 MB | OpenReasoning-Nemotron-1.5B.Q2_K.gguf |
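Below is a minimal sketch of one way to download a quant from the Hub and run it locally with `llama-cpp-python`. The `repo_id` is assumed from this card's title, and the Q4_K_M file is just an illustrative choice; substitute the repo and quant you actually want.
```python
# Minimal sketch: fetch one GGUF quant and run a chat completion with llama-cpp-python.
# Assumes: pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="prithivMLmods/OpenReasoning-Nemotron-1.5B-F32-GGUF",  # assumed repo id
    filename="OpenReasoning-Nemotron-1.5B.Q4_K_M.gguf",            # any quant from the table above
)

# Load the model; n_ctx sets the context window, adjust to your RAM/VRAM budget.
llm = Llama(model_path=gguf_path, n_ctx=8192)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Prove that the sum of two even integers is even."}],
    max_tokens=1024,
    temperature=0.6,
)
print(out["choices"][0]["message"]["content"])
```
For a 1.5B model, the larger quants (Q8_0, F16) are still small enough for most machines and generally preserve more reasoning quality than the Q4 and Q3 files.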
## Quants Usage
(The files are sorted by size, which does not necessarily reflect quality. IQ-quants are often preferable to similarly sized non-IQ quants.)
Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower perplexity is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)