🆕 A new, commercially permissible multilingual version is available: urchade/gliner_multi-v2.1
🐛 A subtle bug that causes performance degradation on some models has been corrected. Thanks to @yyDing1 for raising the issue.
from gliner import GLiNER
# Initialize GLiNER
model = GLiNER.from_pretrained("urchade/gliner_multi-v2.1")
text = "This is a text about Bill Gates and Microsoft."# Labels for entity prediction
labels = ["person", "organization", "email"]
entities = model.predict_entities(text, labels, threshold=0.5)
for entity in entities:
    print(entity["text"], "=>", entity["label"])
I'd like to share our project on open-type Named Entity Recognition (NER). Our model uses a transformer encoder (BERT-like), which keeps the computational overhead minimal compared to using LLMs. I've developed a demo that runs on CPU in Google Colab.
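As a rough sketch of what "open-type" means in practice, entity types are passed as free-text labels at inference time rather than being fixed at training time. The label set and example sentence below are illustrative, not taken from the post; the API calls are the same ones shown above.

from gliner import GLiNER

# Load a GLiNER checkpoint; by default inference runs on CPU
model = GLiNER.from_pretrained("urchade/gliner_multi-v2.1")

# Open-type NER: entity types are ordinary strings chosen at inference time
text = "Alan Turing worked at the University of Manchester."
labels = ["person", "organization", "location"]

entities = model.predict_entities(text, labels, threshold=0.5)
for entity in entities:
    print(entity["text"], "=>", entity["label"])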