---
license: apache-2.0
---

# Nikhitha Telugu Dataset Model

- **Model ID:** Nikitha-logics/Nikhitha_telugu_dataset_model
- **Model Type:** ALBERT-based Language Model
- **License:** Apache-2.0

## Model Overview

The Nikhitha Telugu Dataset Model is an ALBERT-based language model trained on a Telugu-language dataset. ALBERT (A Lite BERT) is a transformer-based model for natural language processing tasks, designed to be parameter-efficient while maintaining strong performance.

## Model Details

- **Model Size:** 33.2 million parameters
- **Tensor Type:** Float32 (F32)
- **Format:** Safetensors

## Usage

To use this model in your projects, load it with the Hugging Face Transformers library:

```python
from transformers import AlbertTokenizer, AlbertForMaskedLM

# Load the tokenizer
tokenizer = AlbertTokenizer.from_pretrained("Nikitha-logics/Nikhitha_telugu_dataset_model")

# Load the model
model = AlbertForMaskedLM.from_pretrained("Nikitha-logics/Nikhitha_telugu_dataset_model")
```
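
Since the checkpoint exposes a masked-language-modelling head, one quick way to try it out is the standard `fill-mask` pipeline. The snippet below is a minimal sketch, not part of the original card: the Telugu example sentence is purely illustrative, and it assumes the tokenizer uses the default `[MASK]` token.

```python
from transformers import pipeline

# Build a fill-mask pipeline from the same checkpoint.
fill_mask = pipeline(
    "fill-mask",
    model="Nikitha-logics/Nikhitha_telugu_dataset_model",
    tokenizer="Nikitha-logics/Nikhitha_telugu_dataset_model",
)

# Illustrative Telugu sentence with one masked token
# (roughly "I am reading [MASK]."). Replace with your own text.
for prediction in fill_mask("నేను [MASK] చదువుతున్నాను."):
    print(prediction["token_str"], round(prediction["score"], 4))
```

Each prediction in the pipeline output includes the proposed token (`token_str`) and its probability (`score`), which makes it easy to inspect how the model completes Telugu sentences.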