---
library_name: transformers
tags: []
---
# Model Card

Our JpharmaBERT (large) is a continually pre-trained version of the Japanese BERT model tohoku-nlp/bert-large-japanese-v2, further trained on pharmaceutical data (the same dataset used for eques/jpharmatron).
## Example Usage

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer, pipeline

# Load the model in bfloat16 and build a fill-mask pipeline.
model = AutoModelForMaskedLM.from_pretrained("EQUES/jpharma-bert-large", torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained("EQUES/jpharma-bert-large")
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)

# Predict the masked token in "水は化学式で[MASK]2Oです。"
# ("The chemical formula of water is [MASK]2O.")
results = fill_mask("水は化学式で[MASK]2Oです。")
for result in results:
    print(result)
# {'score': 0.49609375, 'token': 55, 'token_str': 'H', 'sequence': '水は化学式でH2Oです。'}
# {'score': 0.11767578125, 'token': 29257, 'token_str': 'Na', 'sequence': '水は化学式でNa2Oです。'}
# {'score': 0.047607421875, 'token': 61, 'token_str': 'N', 'sequence': '水は化学式でN2Oです。'}
# {'score': 0.038330078125, 'token': 16966, 'token_str': 'CH', 'sequence': '水は化学式でCH2Oです。'}
# {'score': 0.0255126953125, 'token': 66, 'token_str': 'S', 'sequence': '水は化学式でS2Oです。'}
```
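By default, the fill-mask pipeline returns the five highest-scoring candidates for the masked position; pass `top_k` to `fill_mask(...)` to change how many predictions are returned.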
## Training Details

### Training Data
We used the same dataset as eques/jpharmatron for training our JpharmaBERT, which consists of:
- Japanese text data (2B tokens) collected from pharmaceutical documents such as academic papers and package inserts
- English data (8B tokens) obtained from PubMed abstracts
- Pharmaceutical-related data (1.2B tokens) extracted from the multilingual CC100 dataset
After removing duplicate entries across these sources, the final dataset contains approximately 9 billion tokens.
(For details, please refer to our paper about Jpharmatron: link)
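The exact deduplication procedure is not described in this card. Purely as an illustration of the deduplication step mentioned above, the sketch below shows one common approach, exact-match deduplication by hashing normalized documents; it is a hypothetical stand-in, not the actual pipeline.

```python
import hashlib

def dedupe_exact(documents):
    """Keep the first occurrence of each document, comparing hashes of normalized text."""
    seen, unique = set(), []
    for doc in documents:
        key = hashlib.sha256(doc.strip().encode("utf-8")).hexdigest()
        if key not in seen:
            seen.add(key)
            unique.append(doc)
    return unique

# Toy example corpus with one exact duplicate.
corpus = ["Water is H2O.", "Water is H2O.", "Aspirin is an analgesic."]
print(dedupe_exact(corpus))  # ['Water is H2O.', 'Aspirin is an analgesic.']
```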
### Training Hyperparameters

The model was continually pre-trained with the following settings (a minimal configuration sketch follows the list):
- Mask probability: 15%
- Maximum sequence length: 512 tokens
- Number of training epochs: 6
- Learning rate: 5e-5
- Warm-up steps: 10,000
- Per-device training batch size: 64
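For reference only, here is a minimal sketch of how a masked-language-modeling run with these settings could be set up using the Hugging Face `Trainer`. The corpus file, preprocessing, and single-node setup are assumptions for illustration and are not taken from the actual training configuration.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Base model for continual pre-training; the Japanese tokenizer requires fugashi and unidic-lite.
base_model = "tohoku-nlp/bert-large-japanese-v2"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForMaskedLM.from_pretrained(base_model)

# Hypothetical corpus file; the pharmaceutical dataset itself is not released with this card.
raw = load_dataset("text", data_files={"train": "pharma_corpus.txt"})["train"]

def tokenize(batch):
    # Maximum sequence length: 512 tokens
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = raw.map(tokenize, batched=True, remove_columns=["text"])

# Mask probability: 15%
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="jpharma-bert-large",
    num_train_epochs=6,
    learning_rate=5e-5,
    warmup_steps=10_000,
    per_device_train_batch_size=64,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
```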
## Model Card Authors
Created by Takuro Fujii ([email protected])