dyogatama committed · verified
Commit 2a806c5 · 1 Parent(s): 0af1d1b

Update README.md

Files changed (1):
  1. README.md +0 -12
README.md CHANGED
@@ -9,18 +9,6 @@ license: apache-2.0
  >
  > You can find the half-precision version [here](https://huggingface.co/RekaAI/reka-flash-3.1), and the Reka Quant quantization library [here](https://github.com/reka-ai/rekaquant)

- Reka Flash 3.1 is a 21B general-purpose reasoning model trained from scratch. It was trained on synthetic and public datasets for supervised finetuning, followed by large-scale RLOO with rule-based rewards. Reka Flash 3.1 improves on Reka Flash 3 thanks to significant advances in our reinforcement learning stack and curated high-quality RL data. It is particularly strong at coding and as a base model to be finetuned on agentic tasks.
- Reka Flash 3.1 improves on Reka Flash 3 by 10 points on LiveCodeBench v5 (Full set). On coding-related tasks, it is competitive with models such as Qwen3-32B, o3-mini, and Gemini 2.5 Flash Thinking. To learn more about the reinforcement learning work behind these improvements, please check out this post.
-
- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65a776598ee30c06716aa380/e5SsUG4vTgslFtcJxM9DT.png)
-
- Try it out at [Reka Space](https://space.reka.ai).
-
- Strong reasoning and coding skills are important for supporting multimodal agentic use cases, and near-lossless quantization allows us to deploy our models anywhere. A multimodal version of Reka Flash 3.1 serves as the base model for our core products, Reka Research and Reka Vision. Please contact us for more information about how you can use them in your organization.
-
- Model efficiency is critical for local deployment. We also release a quantized version of Reka Flash 3.1 at this link, and we open-source the corresponding quantization library at this link.
-
-
  ## Quick Start
  Reka Flash 3.1 Quantized is released in a llama.cpp-compatible Q3_K_S format. You may use any library compatible with GGUF to run the model.
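For reference, a minimal sketch of that Quick Start path using llama-cpp-python, one of several GGUF-compatible runtimes. The `repo_id` and GGUF `filename` below are placeholders, not values confirmed by this commit; substitute the actual entries from the repository's file listing.

```python
# Minimal sketch: download the GGUF file and run a chat completion
# with llama-cpp-python. Any GGUF-compatible runtime works the same way.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Placeholder repo id and filename: replace with the real values
# from this repository's "Files and versions" tab.
model_path = hf_hub_download(
    repo_id="RekaAI/reka-flash-3.1-quantized",
    filename="reka-flash-3.1-q3_k_s.gguf",
)

llm = Llama(
    model_path=model_path,
    n_ctx=8192,       # context window; adjust to your memory budget
    n_gpu_layers=-1,  # offload all layers to the GPU when one is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a function that checks whether a number is prime."}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```

Because the weights are standard GGUF, a llama.cpp-based server should serve the same file without changes; only the loading step above is specific to the Python bindings.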
 