🇳🇱 dutch-gte-multilingual-base
This model is a 53.2% smaller version of Alibaba-NLP/gte-multilingual-base for the Dutch language, created using the mtem-pruner space.
This pruned model should perform comparably to the original for Dutch-language tasks while using far less memory. However, it may not work well for the other languages covered by the original multilingual model, since tokens that are uncommon in Dutch were removed from its vocabulary.
Usage
You can use this model with the Transformers library:
from transformers import AutoModel, AutoTokenizer

model_name = "denniscraandijk/dutch-gte-multilingual-base"

# trust_remote_code is required because GTE models ship custom modeling code
model = AutoModel.from_pretrained(model_name, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True, use_fast=True)
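You can then compute sentence embeddings, for example like this (a minimal sketch: the example sentences are illustrative, and CLS-token pooling is assumed, matching the original gte-multilingual-base):

import torch
import torch.nn.functional as F

sentences = ["Dit is een voorbeeldzin.", "Dit is nog een zin."]
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**batch)
embeddings = outputs.last_hidden_state[:, 0]      # CLS-token pooling (assumed, as in the original GTE model)
embeddings = F.normalize(embeddings, p=2, dim=1)  # unit-normalize for cosine similarity
print(embeddings @ embeddings.T)                  # pairwise cosine similarities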
Or with the sentence-transformers library:
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("denniscraandijk/dutch-gte-multilingual-base")
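A quick usage sketch (the example sentences are illustrative):

from sentence_transformers import util

sentences = ["Dit is een voorbeeldzin.", "Dit is nog een zin."]
embeddings = model.encode(sentences)
print(util.cos_sim(embeddings, embeddings))  # pairwise cosine similarities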
Credits: cc @antoinelouis