MISHANM/telugu_langmodel_Translator_Llama-3.2-3B-Instruct

Model Details

This model is based on meta-llama/Llama-3.2-3B-Instruct and has been LoRA fine-tuned on a Telugu dataset.

Training Details

The model was trained on approximately 29K instruction samples.

  1. GPU: AMD Instinct MI210
  2. Training Time: 4 Hours
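
For reference, the snippet below is a minimal sketch of how a LoRA fine-tune of this base model is typically set up with the peft library. The rank, alpha, dropout, and target modules shown here are illustrative assumptions, not the exact values used for this model.

import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load the base model that this checkpoint was adapted from
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-3B-Instruct", torch_dtype=torch.bfloat16
)

# Attach LoRA adapters to the attention projections (hyperparameters are assumptions)
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trained

# From here, the ~29K Telugu instruction samples would be fed to a standard
# transformers Trainer (or trl SFTTrainer) for supervised fine-tuning.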

Inference with Hugging Face


import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Set the device
device = "cuda" if torch.cuda.is_available() else "cpu"

# Load the fine-tuned model and tokenizer
model_path = "MISHANM/telugu_langmodel_Translator_Llama-3.2-3B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_path)

# Wrap the model with DataParallel if multiple GPUs are available
if torch.cuda.device_count() > 1:
   print(f"Using {torch.cuda.device_count()} GPUs")
   model = torch.nn.DataParallel(model)

# Move the model to the appropriate device
model.to(device)

tokenizer = AutoTokenizer.from_pretrained(model_path)

# Function to generate text
def generate_text(prompt, max_new_tokens=1000, temperature=0.9):
    # Build the conversation in chat format
    messages = [
        {
            "role": "system",
            "content": "You are a Telugu language expert and linguist; with that knowledge, give the response in the Telugu language.",
        },
        {"role": "user", "content": prompt},
    ]

    # Apply the model's chat template
    formatted_prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

    # Tokenize and generate output
    inputs = tokenizer(formatted_prompt, return_tensors="pt").to(device)
    # Unwrap DataParallel (if used) before calling generate
    gen_model = model.module if isinstance(model, torch.nn.DataParallel) else model
    output = gen_model.generate(
        **inputs, max_new_tokens=max_new_tokens, temperature=temperature, do_sample=True
    )
    return tokenizer.decode(output[0], skip_special_tokens=True)

# Example usage
prompt = """One morning the busy doctor invited Sue into the hallway with a shaggy, gray eyebrow. ."""
translated_text = generate_text(prompt)
print(translated_text)
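
As an optional variant (not part of the original example), a checkpoint of this size can also be loaded across the available GPUs with device_map="auto" instead of DataParallel; this requires the accelerate package. The sketch below assumes bfloat16 weights and a short translation prompt.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "MISHANM/telugu_langmodel_Translator_Llama-3.2-3B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,  # assumption: bf16 to reduce memory use
    device_map="auto",           # shard layers across available devices (needs accelerate)
)

messages = [{"role": "user", "content": "Translate to Telugu: Good morning, how are you?"}]
formatted = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(formatted, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))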

Citation Information

@misc{MISHANM/telugu_langmodel_Translator_Llama-3.2-3B-Instruct,
  author    = {Mishan Maurya},
  title     = {Introducing Fine Tuned LLM for Telugu Language},
  year      = {2024},
  publisher = {Hugging Face},
  journal   = {Hugging Face repository}
}