DBBErt
Our DBBErt model is a sub-word BERT model developed primarily for Byzantine Greek, with support for Ancient Greek as well. It is the only model trained not only on Ancient and Modern Greek data but also on unedited Byzantine Greek data.
Pre-trained weights are available for a standard 12-layer, 768-dimensional BERT model.
How to use
Requirements:
pip install transformers
pip install flair
The model can be loaded directly from the Hugging Face Model Hub with:
from transformers import AutoTokenizer, AutoModel

# Load the DBBErt tokeniser and pre-trained weights from the Hub
tokeniser = AutoTokenizer.from_pretrained("colinswaelens/DBBErt")
model = AutoModel.from_pretrained("colinswaelens/DBBErt")
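
A minimal sketch of extracting contextual embeddings once the model is loaded; the Greek sentence is an arbitrary example, not taken from the model's documentation:

import torch

# Tokenise an example sentence and run a forward pass
inputs = tokeniser("ἐν ἀρχῇ ἦν ὁ λόγος", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One 768-dimensional vector per sub-word token
embeddings = outputs.last_hidden_state
print(embeddings.shape)  # torch.Size([1, number_of_tokens, 768])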
WIP
We are currently developing a fine-tuned version of DBBErt for part-of-speech tagging and morphological analysis.
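
flair appears in the requirements above; one plausible use, until the fine-tuned tagger is released, is wrapping DBBErt as flair word embeddings for downstream sequence labelling. A sketch under that assumption (the sentence is again an arbitrary example):

from flair.data import Sentence
from flair.embeddings import TransformerWordEmbeddings

# Wrap DBBErt as word-level embeddings (sub-word pieces are pooled per word)
embeddings = TransformerWordEmbeddings("colinswaelens/DBBErt")

sentence = Sentence("ἐν ἀρχῇ ἦν ὁ λόγος")
embeddings.embed(sentence)

for token in sentence:
    print(token.text, token.embedding.shape)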