
This model is a fine-tuned version of LLaMA 3.2-3B, trained on a carefully curated dataset of 500 samples selected using Facility Location (FL) optimization. The dataset was refined from a larger corpus through representative sample selection, ensuring that the most informative and diverse data points were retained while redundant and uninformative samples were removed.

Fine-tuning was conducted to improve task-specific performance while significantly reducing training cost and data inefficiency. By leveraging FL-based data selection, we ensured that the final dataset maintained high coverage and diversity while requiring only 5% of the original dataset size.
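The card does not include the selection code, but facility location is a standard submodular coverage objective that can be maximized greedily. A minimal sketch (function name, embedding inputs, and cosine similarity are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def facility_location_select(embeddings, k):
    """Greedy facility-location subset selection (illustrative sketch).

    Picks k points maximizing the sum, over all points, of each point's
    maximum similarity to the selected set -- a coverage objective that
    favors representative, non-redundant samples.
    """
    # Cosine similarity between all pairs of sample embeddings
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    unit = embeddings / np.clip(norms, 1e-12, None)
    sim = unit @ unit.T

    n = sim.shape[0]
    selected = []
    # best[i]: similarity of point i to its closest selected point so far
    best = np.zeros(n)
    for _ in range(k):
        # Marginal gain of adding each candidate to the selected set
        gains = np.maximum(sim, best[None, :]).sum(axis=1) - best.sum()
        gains[selected] = -np.inf  # never re-pick a selected point
        j = int(np.argmax(gains))
        selected.append(j)
        best = np.maximum(best, sim[j])
    return selected
```

Because duplicates add no marginal coverage, the greedy step naturally skips redundant samples, which matches the card's description of pruning uninformative data.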

Downloads last month: 23
Model size: 3.21B params (Safetensors)
Tensor type: BF16