---
base_model: janhq/Jan-v1-4B
---

[EXL3](https://github.com/turboderp-org/exllamav3) quantization of [Jan-v1-4B](https://huggingface.co/janhq/Jan-v1-4B), 8 bits per weight, including output layers.

### HumanEval (argmax)

| Model                                                                          | Q4   | Q6   | Q8   | FP16 |
| ------------------------------------------------------------------------------ | ---- | ---- | ---- | ---- |
| [Jan-v1-4B-exl3-4bpw](https://huggingface.co/isogen/Jan-v1-4B-exl3-4bpw)       | 82.3 | 79.3 | 78.0 | 78.0 |
| [Jan-v1-4B-exl3-6bpw](https://huggingface.co/isogen/Jan-v1-4B-exl3-6bpw)       | 78.0 | 76.8 | 77.4 | 76.8 |
| [Jan-v1-4B-exl3-8bpw-h8](https://huggingface.co/isogen/Jan-v1-4B-exl3-8bpw-h8) | 79.9 | 78.7 | 78.0 | 77.4 |
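
To run one of these quants with [exllamav3](https://github.com/turboderp-org/exllamav3), the weights first need to be fetched locally. One way is the `huggingface-cli download` command from `huggingface_hub` (the repo id below is the 8 bpw variant from the table; substitute another variant as needed):

```shell
# Fetch the 8 bpw / h8 quant into a local directory
# (requires `pip install huggingface_hub`)
huggingface-cli download isogen/Jan-v1-4B-exl3-8bpw-h8 \
    --local-dir Jan-v1-4B-exl3-8bpw-h8
```

The resulting directory can then be loaded by any exllamav3-based frontend that accepts a local model path.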