Update README.md
README.md CHANGED

````diff
@@ -79,11 +79,7 @@ configs:
       path: "data/reading_comprehension-00000-of-00001-f9c8df20c22e46d0.parquet"
 ---
 
-
-
-
-
-The HAE_RAE_BENCH is an ongoing project to develop a suite of evaluation tasks designed to test the
+The HAE_RAE_BENCH 1.1 is an ongoing project to develop a suite of evaluation tasks designed to test the
 understanding of models regarding Korean cultural and contextual nuances.
 Currently, it comprises 13 distinct tasks, with a total of 4900 instances.
 
@@ -92,7 +88,6 @@ the contents are not completely identical. Specifically, the reading comprehensi
 In its place, an updated reading comprehension subset has been introduced, sourced from the CSAT, the Korean university entrance examination.
 To replicate the studies from the paper, please use this [code](https://github.com/EleutherAI/lm-evaluation-harness/blob/master/lm_eval/tasks/haerae.py).
 
-For the latest version of the code, refer to [this](https://github.com/guijinSON/HAE-RAE-Bench.v2/blob/main/HAE_RAE_Bench_Evaluation.ipynb).
 
 ### Dataset Overview
 
@@ -114,47 +109,6 @@ For the latest version of the code, refer to [this](https://github.com/guijinSON
 | **Total** | **4900** | | |
 
 
-### Evaluation Results
-
-| Models | correct_definition_matching | csat_geo | csat_law | csat_socio | date_understanding | general_knowledge | history | loan_words | reading_comprehension | rare_words | standard_nomenclature |
-|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|
-| daekeun-ml/Llama-2-ko-DPO-13B | 0.5421 | 0.1800 | 0.1613 | 0.2181 | 0.4905 | 0.3523 | 0.7500 | 0.8107 | 0.2382 | 0.6963 | 0.7908 |
-
-
-
-
-### How to use (LM-Eval-Harness)
-```python
-!git clone https://github.com/guijinSON/lm-evaluation-harness.git
-!pip install sentencepiece
-%cd lm-evaluation-harness
-!pip install -e .
-!pip install -e ".[multilingual]"
-!pip install huggingface_hub
-!python -c "from huggingface_hub.hf_api import HfFolder; HfFolder.save_token('<YOUR_HF_TOKEN>')"
-
-!python main.py \
-    --model hf-causal \
-    --model_args pretrained=daekeun-ml/Llama-2-ko-DPO-13B,dtype=bfloat16 \
-    --num_fewshot 1 \
-    --batch_size 2 \
-    --tasks hr2_cdm,hr2_cgeo,hr2_claw,hr2_csoc,hr2_du,hr2_gk,hr2_hi,hr2_lw,hr2_rw,hr2_rc,hr2_sn \
-    --alteration "" \
-    --device cuda:0
-```
-
-### Release Notes
-__2023.12.03__: All errors fixed! 11 tasks are now available via LM-Eval-Harness; refer to the code above to run the evaluation. (Available tasks: correct_definition_matching, csat_geo, csat_law, csat_socio, date_understanding, general_knowledge, history, loan_words, rare_words, reading_comprehension, standard_nomenclature)
-
-__2023.11.06__: 3 tasks added (csat_geo, csat_law, csat_socio).
-
-__2023.09.28__: [LM-Eval-Harness](https://github.com/EleutherAI/lm-evaluation-harness) support added for the following 8 tasks:
-Loan Words, Rare Words, Standard Nomenclature, History, General Knowledge, correct_definition_matching, date_understanding, reading_comprehension.
-Refer to the following [document](https://github.com/guijinSON/HAE-RAE-Bench.v2/blob/main/HAE_RAE_Bench_Evaluation.ipynb) to run the evaluation yourself.
-
-__2023.09.16__: 10 tasks added: 5 from the original HAE-RAE Bench (Loan Words, Rare Words, Standard Nomenclature, History, General Knowledge) and
-5 new tasks (correct_definition_matching, date_understanding, lyrics_denoising, proverbs_denoising, reading_comprehension).
-
 ### Point of Contact
 For any questions, contact us via the following email :)
 ```
````
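
Outside the harness, individual subsets can be pulled straight from the Hub with the `datasets` library. A minimal sketch, assuming the dataset is hosted under the `HAERAE-HUB/HAE_RAE_BENCH_1.1` repo id (the repo id is illustrative, not confirmed by this diff; the config name `reading_comprehension` matches the parquet path in the YAML header above):

```python
# Minimal sketch: load one HAE_RAE_BENCH subset via the `datasets` library.
# ASSUMPTION: the repo id below is illustrative; check the dataset card for
# the canonical id. The config name matches the parquet path in the YAML above.
from datasets import load_dataset

ds = load_dataset("HAERAE-HUB/HAE_RAE_BENCH_1.1", "reading_comprehension")

print(ds)                      # available splits and row counts
first_split = next(iter(ds))   # avoid assuming a particular split name
print(ds[first_split][0])      # peek at a single instance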
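
For intuition about what these harness tasks actually measure, here is a minimal sketch of likelihood-based multiple-choice scoring, the approach LM-Eval-Harness uses for tasks like these: sum the model's log-probabilities over each candidate answer's tokens and pick the highest-scoring option. The model id is reused from the results table above; the `question`/`options` interface is an assumption for illustration, and the real harness handles tokenization boundaries, batching, and length normalization more carefully than this sketch.

```python
# Minimal sketch of likelihood-based multiple-choice scoring (what harnesses
# like LM-Eval-Harness do for tasks such as these). NOT the official eval
# code; the question/options interface is an assumption for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "daekeun-ml/Llama-2-ko-DPO-13B"  # model from the results table above

tok = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16)
model.eval()

def option_logprob(prompt: str, option: str) -> float:
    """Sum of log-probabilities of the option's tokens, conditioned on the prompt.

    Caveat: slicing at the prompt's token count ignores tokenization-boundary
    effects that the real harness accounts for.
    """
    n_prompt = tok(prompt, return_tensors="pt").input_ids.shape[1]
    full_ids = tok(prompt + option, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Row i of the shifted logits predicts token i+1, so rows n_prompt-1 .. end
    # score exactly the continuation (option) tokens.
    logprobs = torch.log_softmax(logits[0, :-1], dim=-1)
    cont = full_ids[0, n_prompt:]
    return logprobs[n_prompt - 1 :].gather(1, cont.unsqueeze(1)).sum().item()

def predict(question: str, options: list[str]) -> int:
    """Return the index of the option the model assigns the highest likelihood."""
    return max(range(len(options)), key=lambda i: option_logprob(question, options[i]))
```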