Update README.md
README.md
CHANGED
@@ -40,7 +40,7 @@ Meta-Llama-3.1-70B-Instruct-quantized.w4a16 achieves 100.0% recovery for the Are
 This model was obtained by quantizing the weights of [Meta-Llama-3.1-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-70B-Instruct) to INT4 data type.
 This optimization reduces the number of bits per parameter from 16 to 4, reducing the disk size and GPU memory requirements by approximately 75%.

-Only the weights of the linear operators within transformers blocks are quantized. Symmetric per-
+Only the weights of the linear operators within transformers blocks are quantized. Symmetric per-group quantization is applied, in which a linear scaling per group of 128 parameters maps the INT4 and floating point representations of the quantized weights.
 The [GPTQ](https://arxiv.org/abs/2210.17323) algorithm is applied for quantization, as implemented in the [AutoGPTQ](https://github.com/AutoGPTQ/AutoGPTQ) library. GPTQ used a 1% damping factor and 512 sequences of 8,192 random tokens.
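As a quick sanity check on the "approximately 75%" figure in the context lines above, here is a back-of-the-envelope estimate. The parameter count and the per-group scale overhead are illustrative assumptions, not values taken from the model card:

```python
# Back-of-the-envelope check of the "approximately 75%" memory claim.
# The parameter count below is an illustrative assumption.

params = 70.6e9           # approximate parameter count of a 70B model
bf16_bytes = params * 2   # 16 bits = 2 bytes per parameter

group_size = 128
# Each group of 128 INT4 weights also stores one 16-bit scale.
int4_bytes = params * 0.5 + (params / group_size) * 2

print(f"16-bit weights: {bf16_bytes / 1e9:.0f} GB")          # ~141 GB
print(f"4-bit weights:  {int4_bytes / 1e9:.0f} GB")          # ~36 GB
print(f"reduction:      {1 - int4_bytes / bf16_bytes:.1%}")  # ~74%
```

The per-group scales, plus the tensors left unquantized (only linear operators inside the transformer blocks are quantized), are why the saving is "approximately" rather than exactly 75%.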
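The added line describes symmetric per-group quantization with a group size of 128: each group of 128 weights shares one linear scale between the INT4 and floating-point representations. A minimal NumPy sketch of that scheme (the function names are illustrative, not AutoGPTQ's API):

```python
import numpy as np

def quantize_per_group(w: np.ndarray, group_size: int = 128):
    """Symmetric per-group INT4 quantization of a 1-D weight slice.

    Each group of `group_size` weights shares one linear scale mapping the
    INT4 range onto the floating-point values. "Symmetric" means there is
    no zero-point: 0.0 always maps exactly to integer 0.
    """
    groups = w.reshape(-1, group_size)
    # One scale per group, chosen so the largest magnitude fits in INT4.
    scales = np.abs(groups).max(axis=1, keepdims=True) / 7.0
    q = np.clip(np.round(groups / scales), -8, 7).astype(np.int8)
    return q, scales

def dequantize_per_group(q: np.ndarray, scales: np.ndarray) -> np.ndarray:
    # The same linear scaling per group, applied in reverse.
    return (q.astype(np.float32) * scales).reshape(-1)

w = np.random.randn(4096).astype(np.float32)
q, s = quantize_per_group(w)
w_hat = dequantize_per_group(q, s)
print("max abs quantization error:", np.abs(w - w_hat).max())
```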
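The final context line names GPTQ as implemented in AutoGPTQ, with a 1% damping factor and 512 calibration sequences of 8,192 random tokens. A hedged sketch of how those settings might map onto AutoGPTQ; the exact script and the way "random tokens" were sampled are assumptions:

```python
import torch
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_id = "meta-llama/Meta-Llama-3.1-70B-Instruct"

# 4-bit symmetric per-group settings matching the card's description:
# group size 128 and a 1% damping factor.
quantize_config = BaseQuantizeConfig(
    bits=4,
    group_size=128,
    sym=True,
    damp_percent=0.01,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)

# 512 sequences of 8,192 random tokens, as the card states. Sampling
# token ids uniformly over the vocabulary is an assumption about how
# the random tokens were drawn.
num_samples, seq_len = 512, 8192
examples = [
    {
        "input_ids": torch.randint(0, tokenizer.vocab_size, (1, seq_len)),
        "attention_mask": torch.ones(1, seq_len, dtype=torch.long),
    }
    for _ in range(num_samples)
]

# Note: loading and quantizing a 70B model needs substantial memory.
model = AutoGPTQForCausalLM.from_pretrained(model_id, quantize_config)
model.quantize(examples)
model.save_quantized("Meta-Llama-3.1-70B-Instruct-quantized.w4a16")
```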