Specify number of parameters
#23
opened by BossBoss2021
README.md CHANGED
@@ -9,7 +9,7 @@ license: mit
 
 DialoGPT is a SOTA large-scale pretrained dialogue response generation model for multiturn conversations.
 The [human evaluation results](https://github.com/dreasysnail/Dialogpt_dev#human-evaluation) indicate that the response generated from DialoGPT is comparable to human response quality under a single-turn conversation Turing test.
-The model is trained on 147M multi-turn dialogue from Reddit discussion thread.
+The model is trained on 147M multi-turn dialogue from Reddit discussion thread, having 354M parameters (354,823,168).
 
 * Multi-turn generation examples from an interactive environment:
 
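For anyone reviewing the figure in the proposed line, here is a minimal sketch that counts the parameters with the `transformers` library. The repo id `microsoft/DialoGPT-medium` is an assumption (354,823,168 matches the medium-size checkpoint); substitute the repo this card actually belongs to.

```python
# Minimal sketch: verify the parameter count stated in the proposed README line.
# Assumption: this card is for microsoft/DialoGPT-medium; change the repo id if not.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

# Sum the element counts of all parameter tensors in the model.
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params:,} parameters")  # expected: 354,823,168 per the proposed line
```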