---
license: apache-2.0
---
# Reka Flash 3.1 (3.5 bit)

> [!NOTE]
> This repository contains the quantized version of Reka Flash 3.1. It was quantized using our Reka Quant method, which leverages calibrated error reduction and online self-distillation to reduce quantization loss. The GGUF corresponds to Q3_K_S quantization.
>
> You can find the half-precision version [here](https://huggingface.co/RekaAI/reka-flash-3.1) and the Reka Quant quantization library [here](https://github.com/reka-ai/rekaquant).
>
> [Learn more](https://reka.ai/news/reka-quantization-technology) about our quantization technology.

## Quick Start
Reka Flash 3.1 Quantized is released in a llama.cpp-compatible Q3_K_S GGUF format. Any library compatible with GGUF can run the model.

### Via llama.cpp
```
./llama-cli -hf rekaai/reka-flash-3.1-rekaquant-q3_k_s -p "Who are you?"
```


## Model Details

### Prompt Format

Reka Flash 3.1 uses the cl100k_base tokenizer and adds no additional special tokens. Its prompt format is as follows:
```
human: this is round 1 prompt <sep> assistant: this is round 1 response <sep> ...
```
Generation should stop on seeing the string `<sep>` or the special token `<|endoftext|>`.
A system prompt can be added by prepending it to the first user round.
```
human: You are a friendly assistant blah ... this is round 1 user prompt <sep> assistant: this is round 1 response <sep> ...
```
For multi-round conversations, it is recommended to drop the reasoning traces from previous assistant rounds to save tokens for the model to think.
If you are using HF or vLLM, the built-in chat_template will handle prompt formatting automatically.
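If you are formatting prompts by hand, the convention above can be sketched as a small helper. This is a minimal illustration, not an official utility: the `format_prompt` name is hypothetical, and the trailing `assistant:` cue at the end of the prompt is an assumption about how a generation turn is opened, inferred from the round structure shown above.

```python
# Hypothetical helper sketching the Reka Flash 3.1 prompt convention
# described above; not part of any official Reka library.

def format_prompt(turns, system_prompt=None):
    """Build a prompt string from (role, text) turns.

    Roles alternate "human" / "assistant". If given, ``system_prompt``
    is prepended to the first human round, per the convention above.
    """
    parts = []
    for i, (role, text) in enumerate(turns):
        if i == 0 and role == "human" and system_prompt:
            text = f"{system_prompt} {text}"
        parts.append(f"{role}: {text}")
    # Open a new assistant round for the model to complete
    # (assumption: the model continues after "assistant:").
    return " <sep> ".join(parts) + " <sep> assistant:"

prompt = format_prompt(
    [("human", "Who are you?")],
    system_prompt="You are a friendly assistant.",
)
# Per the format above, stop generation on either of these:
STOP_SEQUENCES = ["<sep>", "<|endoftext|>"]
```

For multi-round use, pass prior rounds as additional `("human", ...)` / `("assistant", ...)` tuples, with reasoning traces already stripped from the assistant texts.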



### Language Support

This model is primarily built for English, and you should consider it an English-only model. However, it can converse in and understand other languages to some degree.