mradermacher committed
Commit 6151f10 · verified · 1 Parent(s): 828e68b

auto-patch README.md

Files changed (1): README.md (+6 −1)
README.md CHANGED

@@ -5,6 +5,8 @@ language:
 library_name: transformers
 license: apache-2.0
 license_link: https://huggingface.co/Qwen/Qwen2.5-32B/blob/main/LICENSE
+mradermacher:
+  readme_rev: 1
 quantized_by: mradermacher
 ---
 ## About
@@ -17,6 +19,9 @@ quantized_by: mradermacher
 static quants of https://huggingface.co/Qwen/Qwen2.5-32B
 
 <!-- provided-files -->
+
+***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Qwen2.5-32B-GGUF).***
+
 weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen2.5-32B-i1-GGUF
 ## Usage
 
@@ -71,6 +76,6 @@ questions you might have and/or if you want some other model quantized.
 
 I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
 me use its servers and providing upgrades to my workstation to enable
-this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
+this work in my free time.
 
 <!-- end -->