Malayalam - English ULMFiT translation model (Work in Progress)

Malayalam: Kaggle notebook


malayalam-ULMFit-Seq2Seq (Translation model)

The malayalam-ULMFit-Seq2Seq model is pre-trained on Malyalam_Language_Model_ULMFiT, a language model built with fastai.

The text is tokenized using SentencePiece with a vocab size of 10,000, and the resulting language model is uploaded as a Kaggle dataset.
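
Roughly, the pre-training step could look like the sketch below. This is a minimal illustration, not the exact training script: it assumes a dataframe `df` with a 'text' column of Malayalam sentences, and the column name, batch size, and training schedule are placeholders.

from fastai.text.all import *

# Tokenize with SentencePiece (vocab size 10,000) and build a language-model DataBlock.
dblock = DataBlock(
    blocks=TextBlock.from_df('text', is_lm=True,
                             tok=SentencePieceTokenizer(lang='ml', vocab_sz=10000)),
    get_x=ColReader('text'),
    splitter=RandomSplitter(valid_pct=0.1))
dls = dblock.dataloaders(df, bs=64)

# Train an AWD_LSTM language model from scratch on the Malayalam corpus.
learn = language_model_learner(dls, AWD_LSTM, pretrained=False,
                               metrics=[accuracy, Perplexity()])
learn.fit_one_cycle(10, 2e-3)
learn.save_encoder('malayalam_lm_encoder')  # encoder to be reused by the seq2seq model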

Usage

# Install the fastai integration of huggingface_hub
!pip install -Uqq huggingface_hub["fastai"]

from huggingface_hub import from_pretrained_fastai

# Set repo_id to this model's id on the Hugging Face Hub, e.g. "<user>/malayalam-ULMFit-Seq2Seq"
repo_id = "<user>/malayalam-ULMFit-Seq2Seq"
learner = from_pretrained_fastai(repo_id)

original_xtext = 'കേൾക്കുന്ന എല്ലാ കാര്യങ്ങളും എനിക്കു മനസിലായില്ല'
original_ytext = "I didn't understand all this"
predicted_text = learner.predict(original_xtext)
print(f'original text: {original_xtext}')
print(f'original answer: {original_ytext}')
print(f'predicted text: {predicted_text}')

Intended uses & limitations

The model is not fine-tuned to state-of-the-art accuracy.

Training and evaluation data

Malayalam Samanantar Dataset - an English-Malayalam parallel corpus uploaded to Kaggle.
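
For reference, loading the Kaggle copy of the parallel data for training and evaluation might look like the sketch below; the file name and column names are assumptions, not the actual dataset layout.

import pandas as pd
from sklearn.model_selection import train_test_split

# Assumed file and column names for the Kaggle upload of Samanantar;
# adjust to match the actual dataset files.
pairs = pd.read_csv('samanantar_ml_en.csv')  # columns: 'ml' (source), 'en' (target)
train_df, valid_df = train_test_split(pairs, test_size=0.1, random_state=42)
print(f'{len(train_df)} training pairs, {len(valid_df)} validation pairs')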
