# Model Card
<!-- Provide a quick summary of what the model is/does. -->
Our **JpharmaBERT (large)** is a continually pre-trained version of the BERT model ([tohoku-nlp/bert-large-japanese-v2](https://huggingface.co/tohoku-nlp/bert-large-japanese-v2)), further trained on pharmaceutical data — the same dataset used for [eques/jpharmatron](https://huggingface.co/EQUES/JPharmatron-7B).
# Example Usage
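A minimal fill-mask sketch with the `transformers` library. The repository ID `EQUES/jpharmabert-large` below is an assumption for illustration (the card does not state the published ID); substitute the actual one. Like the base `tohoku-nlp/bert-large-japanese-v2` model, the tokenizer requires the `fugashi` and `unidic-lite` packages.

```python
# Fill-mask example for JpharmaBERT (large).
# NOTE: "EQUES/jpharmabert-large" is a hypothetical repository ID --
# replace it with the actual checkpoint name.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_id = "EQUES/jpharmabert-large"  # hypothetical ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

# A pharmaceutical sentence with one masked token.
text = "アスピリンは[MASK]薬として使用される。"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Locate the [MASK] position and take the highest-scoring token.
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_id = logits[0, mask_pos].argmax(dim=-1)
prediction = tokenizer.decode(top_id)
print(prediction)
```

The same checkpoint can also be loaded with `pipeline("fill-mask", model=model_id)` for a one-liner that returns the top-k candidate tokens with scores.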