morriszms committed (verified)
Commit b21dd26 · 1 Parent(s): 6542b8d

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,15 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ OpenReasoning-Nemotron-1.5B-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+ OpenReasoning-Nemotron-1.5B-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+ OpenReasoning-Nemotron-1.5B-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ OpenReasoning-Nemotron-1.5B-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ OpenReasoning-Nemotron-1.5B-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+ OpenReasoning-Nemotron-1.5B-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ OpenReasoning-Nemotron-1.5B-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ OpenReasoning-Nemotron-1.5B-Q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+ OpenReasoning-Nemotron-1.5B-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ OpenReasoning-Nemotron-1.5B-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ OpenReasoning-Nemotron-1.5B-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ OpenReasoning-Nemotron-1.5B-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
OpenReasoning-Nemotron-1.5B-Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e01727d0f4417c2524ae8adfd9bd0216f5e302fe11bb95ca346644f74abb843d
+ size 676302848
OpenReasoning-Nemotron-1.5B-Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e3e57d6e418ed8cba6cfc688e111df2c6cef43d7f335bdcd8759f88d815a88bd
+ size 880160768
OpenReasoning-Nemotron-1.5B-Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d9641731a2b754ddbd47193834ad118ffee2ca73b95fa2d68ddaad08066b7b89
+ size 824176640
OpenReasoning-Nemotron-1.5B-Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:311962656cd9f63d97429bd9fee95b375f261eed0f5f915b5d93635bb16dface
+ size 760942592
OpenReasoning-Nemotron-1.5B-Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:91d7b46d7d26d3affda1d902e3c0fedd322d186a6b78fb06d70a708e3756e896
+ size 934952960
OpenReasoning-Nemotron-1.5B-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2984cbc55905c2e0a42c3b70a70af3922abb3a0c6112f8b84536e16e5d543734
+ size 986046464
OpenReasoning-Nemotron-1.5B-Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:48af2bc6e7bc8ec7f459e643992b3acfe16ae04c0989498bb2c781251fbb29c5
+ size 940310528
OpenReasoning-Nemotron-1.5B-Q5_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:590b790272ce3e40f88479f50c54a56346f60a1544ea30f36264f3ab20c77736
+ size 1098727424
OpenReasoning-Nemotron-1.5B-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1ee0604ef61da879ee046453181e576729a14b1d13b2c7749b517ea7be6aecd7
+ size 1125048320
OpenReasoning-Nemotron-1.5B-Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d621602ace042757daefcf85219ff20dc24c5dfd3e9265bbef65bc1b8db8b8f6
+ size 1098727424
OpenReasoning-Nemotron-1.5B-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0252e00738e24767e0978219e7d2c353f5dee3825fb95f4355c6a40d71dac861
+ size 1272737792
OpenReasoning-Nemotron-1.5B-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6fcc5abf20c198bc43d85b47afb2c8a6b6101ffd37d8b13cfbf6b2a818d2d3bd
+ size 1646571008
README.md ADDED
@@ -0,0 +1,154 @@
+ ---
+ license: cc-by-4.0
+ language:
+ - en
+ base_model: nvidia/OpenReasoning-Nemotron-1.5B
+ pipeline_tag: text-generation
+ library_name: transformers
+ tags:
+ - nvidia
+ - code
+ - TensorBlock
+ - GGUF
+ ---
+
+ <div style="width: auto; margin-left: auto; margin-right: auto">
+ <img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+ </div>
+
+ [![Website](https://img.shields.io/badge/Website-tensorblock.co-blue?logo=google-chrome&logoColor=white)](https://tensorblock.co)
+ [![Twitter](https://img.shields.io/twitter/follow/tensorblock_aoi?style=social)](https://twitter.com/tensorblock_aoi)
+ [![Discord](https://img.shields.io/badge/Discord-Join%20Us-5865F2?logo=discord&logoColor=white)](https://discord.gg/Ej5NmeHFf2)
+ [![GitHub](https://img.shields.io/badge/GitHub-TensorBlock-black?logo=github&logoColor=white)](https://github.com/TensorBlock)
+ [![Telegram](https://img.shields.io/badge/Telegram-Group-blue?logo=telegram)](https://t.me/TensorBlock)
+
+
+ ## nvidia/OpenReasoning-Nemotron-1.5B - GGUF
+
+ <div style="text-align: left; margin: 20px 0;">
+ <a href="https://discord.com/invite/Ej5NmeHFf2" style="display: inline-block; padding: 10px 20px; background-color: #5865F2; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
+ Join our Discord to learn more about what we're building ↗
+ </a>
+ </div>
+
+ This repo contains GGUF format model files for [nvidia/OpenReasoning-Nemotron-1.5B](https://huggingface.co/nvidia/OpenReasoning-Nemotron-1.5B).
+
+ The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5753](https://github.com/ggml-org/llama.cpp/commit/73e53dc834c0a2336cd104473af6897197b96277).
+
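+ If you need a local build of llama.cpp at that commit, here is a minimal sketch using the project's standard CMake flow (the checkout hash is the commit linked above; build options are left at their defaults):
+
+ ```shell
+ # Build llama.cpp pinned to the cited commit b5753.
+ git clone https://github.com/ggml-org/llama.cpp
+ cd llama.cpp
+ git checkout 73e53dc834c0a2336cd104473af6897197b96277
+ cmake -B build
+ cmake --build build --config Release
+ ```
+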
+ ## Our projects
+ <table border="1" cellspacing="0" cellpadding="10">
+ <tr>
+ <th colspan="2" style="font-size: 25px;">Forge</th>
+ </tr>
+ <tr>
+ <th colspan="2">
+ <img src="https://imgur.com/faI5UKh.jpeg" alt="Forge Project" width="900"/>
+ </th>
+ </tr>
+ <tr>
+ <th colspan="2">An OpenAI-compatible multi-provider routing layer.</th>
+ </tr>
+ <tr>
+ <th colspan="2">
+ <a href="https://github.com/TensorBlock/forge" target="_blank" style="
+ display: inline-block;
+ padding: 8px 16px;
+ background-color: #FF7F50;
+ color: white;
+ text-decoration: none;
+ border-radius: 6px;
+ font-weight: bold;
+ font-family: sans-serif;
+ ">🚀 Try it now! 🚀</a>
+ </th>
+ </tr>
+
+ <tr>
+ <th style="font-size: 25px;">Awesome MCP Servers</th>
+ <th style="font-size: 25px;">TensorBlock Studio</th>
+ </tr>
+ <tr>
+ <th><img src="https://imgur.com/2Xov7B7.jpeg" alt="MCP Servers" width="450"/></th>
+ <th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Studio" width="450"/></th>
+ </tr>
+ <tr>
+ <th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
+ <th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
+ </tr>
+ <tr>
+ <th>
+ <a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
+ display: inline-block;
+ padding: 8px 16px;
+ background-color: #FF7F50;
+ color: white;
+ text-decoration: none;
+ border-radius: 6px;
+ font-weight: bold;
+ font-family: sans-serif;
+ ">👀 See what we built 👀</a>
+ </th>
+ <th>
+ <a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
+ display: inline-block;
+ padding: 8px 16px;
+ background-color: #FF7F50;
+ color: white;
+ text-decoration: none;
+ border-radius: 6px;
+ font-weight: bold;
+ font-family: sans-serif;
+ ">👀 See what we built 👀</a>
+ </th>
+ </tr>
+ </table>
+
+ ## Prompt template
+
+ ```
+ <|im_start|>system
+ {system_prompt}<|im_end|>
+ <|im_start|>user
+ {prompt}<|im_end|>
+ <|im_start|>assistant
+ ```
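+
+ For a quick end-to-end check, here is a hedged example with llama.cpp's `llama-cli` (binary path from the CMake build sketched above; the quant choice, prompt, and sampling settings are illustrative assumptions, not upstream recommendations):
+
+ ```shell
+ # Fill the ChatML-style template by hand; $'...' makes bash expand the \n escapes.
+ # -no-cnv keeps llama-cli in plain completion mode so the raw template is used as-is.
+ ./build/bin/llama-cli -m OpenReasoning-Nemotron-1.5B-Q4_K_M.gguf \
+   -p $'<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\nWhat is 17 * 23?<|im_end|>\n<|im_start|>assistant\n' \
+   -n 512 --temp 0.6 -no-cnv
+ ```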
+
+ ## Model file specification
+
+ | Filename | Quant type | File Size | Description |
+ | -------- | ---------- | --------- | ----------- |
+ | [OpenReasoning-Nemotron-1.5B-Q2_K.gguf](https://huggingface.co/tensorblock/nvidia_OpenReasoning-Nemotron-1.5B-GGUF/blob/main/OpenReasoning-Nemotron-1.5B-Q2_K.gguf) | Q2_K | 0.676 GB | smallest, significant quality loss - not recommended for most purposes |
+ | [OpenReasoning-Nemotron-1.5B-Q3_K_S.gguf](https://huggingface.co/tensorblock/nvidia_OpenReasoning-Nemotron-1.5B-GGUF/blob/main/OpenReasoning-Nemotron-1.5B-Q3_K_S.gguf) | Q3_K_S | 0.761 GB | very small, high quality loss |
+ | [OpenReasoning-Nemotron-1.5B-Q3_K_M.gguf](https://huggingface.co/tensorblock/nvidia_OpenReasoning-Nemotron-1.5B-GGUF/blob/main/OpenReasoning-Nemotron-1.5B-Q3_K_M.gguf) | Q3_K_M | 0.824 GB | very small, high quality loss |
+ | [OpenReasoning-Nemotron-1.5B-Q3_K_L.gguf](https://huggingface.co/tensorblock/nvidia_OpenReasoning-Nemotron-1.5B-GGUF/blob/main/OpenReasoning-Nemotron-1.5B-Q3_K_L.gguf) | Q3_K_L | 0.880 GB | small, substantial quality loss |
+ | [OpenReasoning-Nemotron-1.5B-Q4_0.gguf](https://huggingface.co/tensorblock/nvidia_OpenReasoning-Nemotron-1.5B-GGUF/blob/main/OpenReasoning-Nemotron-1.5B-Q4_0.gguf) | Q4_0 | 0.935 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+ | [OpenReasoning-Nemotron-1.5B-Q4_K_S.gguf](https://huggingface.co/tensorblock/nvidia_OpenReasoning-Nemotron-1.5B-GGUF/blob/main/OpenReasoning-Nemotron-1.5B-Q4_K_S.gguf) | Q4_K_S | 0.940 GB | small, greater quality loss |
+ | [OpenReasoning-Nemotron-1.5B-Q4_K_M.gguf](https://huggingface.co/tensorblock/nvidia_OpenReasoning-Nemotron-1.5B-GGUF/blob/main/OpenReasoning-Nemotron-1.5B-Q4_K_M.gguf) | Q4_K_M | 0.986 GB | medium, balanced quality - recommended |
+ | [OpenReasoning-Nemotron-1.5B-Q5_0.gguf](https://huggingface.co/tensorblock/nvidia_OpenReasoning-Nemotron-1.5B-GGUF/blob/main/OpenReasoning-Nemotron-1.5B-Q5_0.gguf) | Q5_0 | 1.099 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+ | [OpenReasoning-Nemotron-1.5B-Q5_K_S.gguf](https://huggingface.co/tensorblock/nvidia_OpenReasoning-Nemotron-1.5B-GGUF/blob/main/OpenReasoning-Nemotron-1.5B-Q5_K_S.gguf) | Q5_K_S | 1.099 GB | large, low quality loss - recommended |
+ | [OpenReasoning-Nemotron-1.5B-Q5_K_M.gguf](https://huggingface.co/tensorblock/nvidia_OpenReasoning-Nemotron-1.5B-GGUF/blob/main/OpenReasoning-Nemotron-1.5B-Q5_K_M.gguf) | Q5_K_M | 1.125 GB | large, very low quality loss - recommended |
+ | [OpenReasoning-Nemotron-1.5B-Q6_K.gguf](https://huggingface.co/tensorblock/nvidia_OpenReasoning-Nemotron-1.5B-GGUF/blob/main/OpenReasoning-Nemotron-1.5B-Q6_K.gguf) | Q6_K | 1.273 GB | very large, extremely low quality loss |
+ | [OpenReasoning-Nemotron-1.5B-Q8_0.gguf](https://huggingface.co/tensorblock/nvidia_OpenReasoning-Nemotron-1.5B-GGUF/blob/main/OpenReasoning-Nemotron-1.5B-Q8_0.gguf) | Q8_0 | 1.647 GB | very large, extremely low quality loss - not recommended |
+
+
+ ## Downloading instructions
+
+ ### Command line
+
+ First, install the Hugging Face CLI:
+
+ ```shell
+ pip install -U "huggingface_hub[cli]"
+ ```
+
+ Then, download an individual model file to a local directory:
+
+ ```shell
+ huggingface-cli download tensorblock/nvidia_OpenReasoning-Nemotron-1.5B-GGUF --include "OpenReasoning-Nemotron-1.5B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
+ ```
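+
+ Optionally, verify the download against the `oid sha256` recorded in this repo's LFS pointer for that file (the hash below is the Q2_K pointer value from this commit; adjust the path to wherever you downloaded it):
+
+ ```shell
+ # sha256sum -c reads "HASH  FILE" pairs and prints OK when they match.
+ echo "e01727d0f4417c2524ae8adfd9bd0216f5e302fe11bb95ca346644f74abb843d  MY_LOCAL_DIR/OpenReasoning-Nemotron-1.5B-Q2_K.gguf" | sha256sum -c -
+ ```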
+
+ If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
+
+ ```shell
+ huggingface-cli download tensorblock/nvidia_OpenReasoning-Nemotron-1.5B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
+ ```
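+
+ And to fetch every file in the repo at once (all 12 quants, roughly 12 GB total going by the table above), simply omit `--include`:
+
+ ```shell
+ # With no --include filter, the whole repository is downloaded.
+ huggingface-cli download tensorblock/nvidia_OpenReasoning-Nemotron-1.5B-GGUF --local-dir MY_LOCAL_DIR
+ ```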