dyogatama committed
Commit 20dd421 · verified · 1 Parent(s): 2a806c5

Update README.md

Files changed (1)
  1. README.md +2 -0
README.md CHANGED
@@ -8,6 +8,8 @@ license: apache-2.0
  > The GGUF corresponds to Q3_K_S quantization.
  >
  > You can find the half-precision version [here](https://huggingface.co/RekaAI/reka-flash-3.1), and the Reka Quant quantization library [here](https://github.com/reka-ai/rekaquant)
+ >
+ > [Learn more](https://reka.ai/news/reka-quantization-technology) about our quantization technology.

  ## Quick Start
  Reka Flash 3.1 Quantized is released in a llama.cpp-compatible Q3_K_S format. You may use any library compatible with GGUF to run the model.
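
As an illustration of the "any library compatible with GGUF" note in the Quick Start above, here is a minimal sketch using llama-cpp-python. The local filename, context size, and prompt are assumptions for the example, not part of this commit or an official path.

```python
# Minimal sketch: loading the Q3_K_S GGUF with llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="reka-flash-3.1-q3_k_s.gguf",  # assumed local GGUF file name
    n_ctx=4096,                               # context window; adjust as needed
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Briefly introduce yourself."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```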