Philip May committed
Commit · 8e40b5d
1 Parent(s): a6b3dfb
Update README.md

README.md CHANGED
@@ -68,7 +68,7 @@ This model is trained on the following datasets:
 | Model | rouge1 | rouge2 | rougeL | rougeLsum |
 |-------|--------|--------|--------|-----------|
 | [ml6team/mt5-small-german-finetune-mlsum](https://huggingface.co/ml6team/mt5-small-german-finetune-mlsum) | 18.3607 | 5.3604 | 14.5456 | 16.1946 |
-| **mT5-small-sum-de-en-01 (this)** | **21.7336** | **7.2614** | **17.1323** | **19.3977** |
+| **deutsche-telekom/mT5-small-sum-de-en-01 (this)** | **21.7336** | **7.2614** | **17.1323** | **19.3977** |
 
 ## Evaluation on CNN Daily English Test Set (no beams)
 
@@ -77,7 +77,7 @@ This model is trained on the following datasets:
 | [sshleifer/distilbart-xsum-12-6](https://huggingface.co/sshleifer/distilbart-xsum-12-6) | 26.7664 | 8.8243 | 18.3703 | 23.2614 |
 | [facebook/bart-large-xsum](https://huggingface.co/facebook/bart-large-xsum) | 28.5374 | 9.8565 | 19.4829 | 24.7364 |
 | [mrm8488/t5-base-finetuned-summarize-news](https://huggingface.co/mrm8488/t5-base-finetuned-summarize-news) | 37.576 | 14.7389 | 24.0254 | 34.4634 |
-| **mT5-small-sum-de-en-01 (this)** | **37.6339** | **16.5317** | **27.1418** | **34.9951** |
+| **deutsche-telekom/mT5-small-sum-de-en-01 (this)** | **37.6339** | **16.5317** | **27.1418** | **34.9951** |
 
 
 ## Evaluation on Extreme Summarization (XSum) English Test Set (no beams)
@@ -86,7 +86,7 @@ This model is trained on the following datasets:
 |-------|--------|--------|--------|-----------|
 | [mrm8488/t5-base-finetuned-summarize-news](https://huggingface.co/mrm8488/t5-base-finetuned-summarize-news) | 18.6204 | 3.535 | 12.3997 | 15.2111 |
 | [facebook/bart-large-xsum](https://huggingface.co/facebook/bart-large-xsum) | 28.5374 | 9.8565 | 19.4829 | 24.7364 |
-| mT5-small-sum-de-en-01 (this) | 32.3416 | 10.6191 | 25.3799 | 25.3908 |
+| deutsche-telekom/mT5-small-sum-de-en-01 (this) | 32.3416 | 10.6191 | 25.3799 | 25.3908 |
 | [sshleifer/distilbart-xsum-12-6](https://huggingface.co/sshleifer/distilbart-xsum-12-6) | 44.2553 ♣ | 21.4289 ♣ | 36.2639 ♣ | 36.2696 ♣ |
 
 ♣: These values seem to be unusually high. It could be that the test set was used in the training data.
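For context, the "(no beams)" rows above correspond to summaries generated without beam search. Below is a minimal sketch of how such a run might look with the renamed model id; the input text, the `max_length` values, and the lack of any task prefix are assumptions on my part, not details stated in this diff.

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hypothetical reproduction sketch: greedy decoding (num_beams=1) with the
# renamed model id from this commit. Preprocessing details are assumed.
model_id = "deutsche-telekom/mT5-small-sum-de-en-01"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

article = "..."  # a German or English test-set article (placeholder)

inputs = tokenizer(article, max_length=512, truncation=True, return_tensors="pt")
summary_ids = model.generate(
    inputs["input_ids"],
    num_beams=1,      # "no beams": plain greedy decoding
    max_length=128,   # assumed cap on summary length
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))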
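The table columns (rouge1, rouge2, rougeL, rougeLsum) read as ROUGE F-measures on a 0-100 scale. A sketch of how numbers in that shape could be computed with the Hugging Face `evaluate` library follows; the exact scoring configuration used for the card is not stated in this diff, so treat the options here as assumptions.

import evaluate

# Hypothetical scoring sketch; the scaling to 0-100 is an assumption made
# to match the look of the table values above.
rouge = evaluate.load("rouge")
predictions = ["the generated summary"]   # model outputs on the test set
references = ["the reference summary"]    # gold test-set summaries

scores = rouge.compute(predictions=predictions, references=references)
# Keys match the table columns: rouge1, rouge2, rougeL, rougeLsum
print({k: round(v * 100, 4) for k, v in scores.items()})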