In this paper, we propose DictBERT, which is a novel pre-trained language model.
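
A minimal usage sketch with the Hugging Face `transformers` library (an illustration, not the paper's code): the repository ID below is a placeholder, and we assume the released weights load into a standard BERT architecture.

```python
# Sketch: load the checkpoint with Hugging Face transformers.
# "<this-repo-id>" is a placeholder -- substitute the model's actual
# Hub ID; we assume the weights are compatible with a BERT model.
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("<this-repo-id>")

inputs = tokenizer("DictBERT is a pre-trained language model.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, 768) for a base model
```
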
## Evaluation results
We show the performance of fine-tuning BERT and DictBERT on the GLUE benchmark tasks. CoLA is evaluated by Matthews correlation, STS-B is evaluated by Pearson correlation, and the other tasks are evaluated by accuracy. The models achieve the following results:
| | MNLI | QNLI | QQP | SST-2 | CoLA | MRPC | RTE | STS-B | Average |
|:----:|:-----------:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|:-------:|
HF: the Hugging Face checkpoint for BERT-base uncased
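
For concreteness, here is a minimal sketch of the metrics named above, using scikit-learn and scipy (our illustration, not the original evaluation code):

```python
# Sketch of the GLUE metrics used in the tables above.
# Assumes scikit-learn (Matthews correlation, accuracy) and scipy (Pearson).
from sklearn.metrics import accuracy_score, matthews_corrcoef
from scipy.stats import pearsonr

def glue_score(task, preds, labels):
    """Score predictions the way the tables above are scored."""
    if task == "CoLA":                    # Matthews correlation coefficient
        return matthews_corrcoef(labels, preds)
    if task == "STS-B":                   # Pearson correlation
        return pearsonr(labels, preds)[0]
    return accuracy_score(labels, preds)  # all other tasks: accuracy
```
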
Even if no dictionary is provided during fine-tuning, DictBERT still achieves better performance than BERT:
| | MNLI | QNLI | QQP | SST-2 | CoLA | MRPC | RTE | STS-B | Average |
|:----:|:-----------:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|:-------:|
| w/o dict | 84.24 | 90.99 | 90.80 | 92.51 | 60.50 | 87.04 | 73.75 | 89.37 | 83.69 |
### BibTeX entry and citation info
```bibtex