Description

A fine-tuned facebook/nllb-200-3.3B model for translation between Spanish ("spa_Latn") and Mapuzungún (Mapudungun). The following languages and writing systems (graphemaries) are supported; the codes shown are the values passed to the tokenizer and to generation, as sketched below the list:

  • Spanish (spa_Latn)
  • Azümchefe (quy_Latn)
  • Ragileo (nso_Latn)
  • Unificado (fra_Latn)

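A small lookup such as the one below (a convenience sketch, not something shipped with the model) keeps the name-to-code mapping from the list above in one place:

# Convenience mapping (hypothetical helper, not part of the released model files):
# human-readable names -> codes listed above.
GRAPHEMARY_CODES = {
    "spanish": "spa_Latn",
    "azumchefe": "quy_Latn",   # Azümchefe
    "ragileo": "nso_Latn",
    "unificado": "fra_Latn",
}
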
Example

from transformers import NllbTokenizerFast, AutoModelForSeq2SeqLM

tokenizer = NllbTokenizerFast.from_pretrained("CenIA/nllb-200-3.3B-spa-arn")
model = AutoModelForSeq2SeqLM.from_pretrained("CenIA/nllb-200-3.3B-spa-arn")

def translate(sentence: str, translate_from="spa_Latn", translate_to="quy_Latn") -> str:
    # Tell the tokenizer the source language so it prepends the right language token
    tokenizer.src_lang = translate_from
    # tgt_lang only affects tokenization of target text; generation itself is
    # steered by forced_bos_token_id below
    tokenizer.tgt_lang = translate_to

    inputs = tokenizer(sentence, return_tensors="pt")
    # Force the decoder to start with the target-language token
    result = model.generate(**inputs, forced_bos_token_id=tokenizer.convert_tokens_to_ids(translate_to))
    decoded = tokenizer.batch_decode(result, skip_special_tokens=True)[0]

    return decoded

traduction = translate("Hola, ¿cómo estás?")  # defaults: Spanish -> Azümchefe (quy_Latn)

print(traduction)
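
The other graphemaries, and the Mapudungun-to-Spanish direction, use the same helper with the language codes swapped. The sketch below reuses the translate function defined above and assumes the fine-tune also covers translation back into Spanish; it is illustrative rather than verified output:

# Target Ragileo or Unificado instead of the default Azümchefe
ragileo = translate("Hola, ¿cómo estás?", translate_to="nso_Latn")
unificado = translate("Hola, ¿cómo estás?", translate_to="fra_Latn")

# Reverse direction (assumed supported): Ragileo back into Spanish
back_to_spanish = translate(ragileo, translate_from="nso_Latn", translate_to="spa_Latn")

print(ragileo)
print(unificado)
print(back_to_spanish)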