alexmarques committed (verified)
Commit b8c4487 · 1 Parent(s): 5ce1373

Update README.md

Files changed (1): README.md (+9 / -9)
```diff
@@ -32,7 +32,7 @@ base_model: meta-llama/Meta-Llama-3.1-70B-Instruct
 - **Model Developers:** Neural Magic
 
 Quantized version of [Meta-Llama-3.1-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-70B-Instruct).
-It achieves scores within 1% of the scores of the unquantized model for MMLU, ARC-Challenge, GSM-8k, Hellaswag and Winogrande, and within 3.2% for TruthfulQA.
+It achieves scores within 1.4% of the scores of the unquantized model for MMLU, ARC-Challenge, GSM-8k, Hellaswag, Winogrande, and TruthfulQA.
 
 ### Model Optimizations
 
@@ -196,9 +196,9 @@ This version of the lm-evaluation-harness includes versions of MMLU, ARC-Challen
 </td>
 <td>86.66
 </td>
-<td>86.06
+<td>86.25
 </td>
-<td>99.3%
+<td>99.5%
 </td>
 </tr>
 <tr>
@@ -206,9 +206,9 @@ This version of the lm-evaluation-harness includes versions of MMLU, ARC-Challen
 </td>
 <td>85.32
 </td>
-<td>85.16
+<td>85.48
 </td>
-<td>99.8%
+<td>100.2%
 </td>
 </tr>
 <tr>
@@ -216,9 +216,9 @@ This version of the lm-evaluation-harness includes versions of MMLU, ARC-Challen
 </td>
 <td>60.65
 </td>
-<td>58.74
+<td>59.82
 </td>
-<td>96.8%
+<td>98.6%
 </td>
 </tr>
 <tr>
@@ -226,9 +226,9 @@ This version of the lm-evaluation-harness includes versions of MMLU, ARC-Challen
 </td>
 <td><strong>84.50</strong>
 </td>
-<td><strong>83.76</strong>
+<td><strong>83.98</strong>
 </td>
-<td><strong>99.1%</strong>
+<td><strong>99.4%</strong>
 </td>
 </tr>
 </table>
```
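The commit's headline claim ("within 1.4%") follows from the recovery column, which appears to be the quantized score divided by the unquantized baseline, expressed as a percentage. A minimal sketch reproducing the updated values from the table's (baseline, quantized) score pairs; the pairing of rows to specific benchmarks is not shown in this fragment and is not assumed here:

```python
def recovery(baseline: float, quantized: float) -> float:
    """Percentage of the unquantized score retained after quantization,
    rounded to one decimal place as in the README table."""
    return round(100 * quantized / baseline, 1)

# (baseline, quantized) score pairs taken from the diff above.
rows = [(86.66, 86.25), (85.32, 85.48), (60.65, 59.82), (84.50, 83.98)]
recoveries = [recovery(b, q) for b, q in rows]
print(recoveries)  # [99.5, 100.2, 98.6, 99.4]

# The largest drop (100 - 98.6 = 1.4) matches the updated "within 1.4%" claim.
worst_drop = round(100 - min(recoveries), 1)
print(worst_drop)  # 1.4
```

Note that one row recovers more than 100%: small regressions and improvements of this size are within normal run-to-run noise for these benchmarks.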