prithivMLmods committed · Commit 727efb5 · verified · Parent(s): d3257b8

Update README.md

tags:
  - math
  - code
  - nvidia
---

# **OpenReasoning-Nemotron-1.5B-F32-GGUF**

> OpenReasoning-Nemotron-1.5B is a large language model (LLM) derived from Qwen2.5-1.5B-Instruct (the reference model). It is a reasoning model post-trained for generating solutions to math, code, and science problems, and it was evaluated with up to 64K output tokens. OpenReasoning-Nemotron models can be used in a "heavy" mode by starting multiple parallel generations and combining them via generative solution selection (GenSelect). To add this "skill", the original GenSelect training pipeline is followed, except that training uses the full reasoning trace of DeepSeek R1 0528 671B instead of the selection summary. The models are trained to select the best solution only for math problems, yet this capability generalizes directly to code and science questions. With this "heavy" GenSelect inference mode, the OpenReasoning-Nemotron-32B model surpasses O3 (High) on math and coding benchmarks.
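
The "heavy" GenSelect mode described above boils down to sampling several candidate solutions and then prompting the model to choose among them. A minimal structural sketch, where `ask_model` is a hypothetical stand-in for a real model call (this is not the actual NeMo-Skills pipeline):

```python
from typing import Callable, List

def genselect(problem: str,
              ask_model: Callable[[str], str],
              num_candidates: int = 4) -> str:
    """Sketch of 'heavy' GenSelect inference: sample several candidate
    solutions, then ask the model to pick the best one. `ask_model` is a
    hypothetical stand-in for an actual model call (e.g. via llama.cpp)."""
    # 1. Start multiple (ideally parallel) generations for the same problem.
    candidates: List[str] = [ask_model(problem) for _ in range(num_candidates)]
    # 2. Build a selection prompt listing all candidates and ask the model
    #    which one is best -- the selection "skill" GenSelect training adds.
    numbered = "\n\n".join(f"Solution {i}:\n{c}" for i, c in enumerate(candidates))
    selection_prompt = (
        f"Problem:\n{problem}\n\n{numbered}\n\n"
        "Reply with only the number of the best solution."
    )
    choice = int(ask_model(selection_prompt).strip())
    return candidates[choice]
```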

## Model Files

| Quant Type | File Size | Filename |
|------------|-----------|----------|
| F32 | 6.18 GB | OpenReasoning-Nemotron-1.5B.F32.gguf |
| F16 | 3.09 GB | OpenReasoning-Nemotron-1.5B.F16.gguf |
| BF16 | 3.09 GB | OpenReasoning-Nemotron-1.5B.BF16.gguf |
| Q8_0 | 1.65 GB | OpenReasoning-Nemotron-1.5B.Q8_0.gguf |
| Q6_K | 1.27 GB | OpenReasoning-Nemotron-1.5B.Q6_K.gguf |
| Q5_K_M | 1.13 GB | OpenReasoning-Nemotron-1.5B.Q5_K_M.gguf |
| Q5_K_S | 1.1 GB | OpenReasoning-Nemotron-1.5B.Q5_K_S.gguf |
| Q4_K_M | 986 MB | OpenReasoning-Nemotron-1.5B.Q4_K_M.gguf |
| Q4_K_S | 940 MB | OpenReasoning-Nemotron-1.5B.Q4_K_S.gguf |
| Q3_K_L | 880 MB | OpenReasoning-Nemotron-1.5B.Q3_K_L.gguf |
| Q3_K_M | 824 MB | OpenReasoning-Nemotron-1.5B.Q3_K_M.gguf |
| Q3_K_S | 761 MB | OpenReasoning-Nemotron-1.5B.Q3_K_S.gguf |
| Q2_K | 676 MB | OpenReasoning-Nemotron-1.5B.Q2_K.gguf |
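
As a rough aid for choosing a file from the table above, here is a small sketch that picks the largest quant fitting a given memory budget. The sizes are copied from the table; the helper deliberately ignores runtime overhead such as KV-cache memory, so leave headroom in practice:

```python
# Quant types and file sizes (in GB) copied from the table above,
# ordered largest to smallest. Runtime memory overhead (KV cache,
# context buffers) is intentionally not modeled here.
QUANTS = [
    ("F32", 6.18), ("F16", 3.09), ("BF16", 3.09), ("Q8_0", 1.65),
    ("Q6_K", 1.27), ("Q5_K_M", 1.13), ("Q5_K_S", 1.10),
    ("Q4_K_M", 0.986), ("Q4_K_S", 0.940), ("Q3_K_L", 0.880),
    ("Q3_K_M", 0.824), ("Q3_K_S", 0.761), ("Q2_K", 0.676),
]

def pick_quant(budget_gb: float) -> str:
    """Return the largest quant type whose file fits within budget_gb."""
    for name, size in QUANTS:  # already sorted largest-to-smallest
        if size <= budget_gb:
            return name
    raise ValueError("no quant fits the given budget")

print(pick_quant(2.0))  # Q8_0 (1.65 GB) is the largest file under 2 GB
```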

## Quants Usage

(Sorted by size, not necessarily quality. IQ-quants are often preferable to similarly sized non-IQ quants.)

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)