🧠 TinyLlama-1.1B LoRA Fine-tuned on SciQ Dataset

This is a TinyLlama-1.1B model fine-tuned with LoRA (Low-Rank Adaptation) on the SciQ multiple-choice question-answering dataset. The base model is loaded with 4-bit quantization via bitsandbytes to reduce memory usage and improve inference efficiency.

🧪 Use Cases

This model is suitable for:

  • Educational QA bots
  • MCQ-style reasoning
  • Lightweight inference on constrained hardware (e.g., GPUs with under 8 GB of VRAM)

🛠️ Training Details

  • Base Model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
  • Dataset: allenai/sciq (science exam QA; see the prompt-formatting sketch after this list)
  • Method: Parameter-Efficient Fine-Tuning using LoRA
  • Quantization: 4-bit using bitsandbytes
  • Framework: 🤗 Transformers + PEFT + Datasets
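
The exact prompt template isn't published with this card, but the SciQ schema makes the general shape easy to reconstruct. Below is a minimal sketch of how examples can be formatted into MCQ-style prompts: the field names (question, correct_answer, distractor1–3) come from allenai/sciq, while the shuffling and letter assignment are assumptions.

import random
from datasets import load_dataset

dataset = load_dataset("allenai/sciq", split="train")

def format_example(example):
    # Mix the correct answer in among the three distractors.
    choices = [
        example["correct_answer"],
        example["distractor1"],
        example["distractor2"],
        example["distractor3"],
    ]
    random.shuffle(choices)
    letters = ["A", "B", "C", "D"]
    answer_letter = letters[choices.index(example["correct_answer"])]
    lines = [f"Question: {example['question']}", "Choices:"]
    lines += [f"{letter}. {choice}" for letter, choice in zip(letters, choices)]
    lines.append(f"Answer: {answer_letter}")
    return {"text": "\n".join(lines)}

dataset = dataset.map(format_example)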

🧬 Model Architecture

  • Model: Causal Language Model
  • Fine-tuned layers: q_proj, v_proj (via LoRA)
  • Quantization: 4-bit via bitsandbytes (see the configuration sketch after this list)
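
The adapter hyperparameters aren't listed on this card. The following is a minimal sketch of the quantization and LoRA configuration implied by the details above, using 🤗 Transformers and PEFT; the rank, alpha, and dropout values are assumptions.

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit quantization via bitsandbytes, as described above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

base = AutoModelForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    quantization_config=bnb_config,
    device_map="auto",
)
base = prepare_model_for_kbit_training(base)

# LoRA adapters on the attention query/value projections only.
lora_config = LoraConfig(
    r=8,                # rank: an assumption, not stated on this card
    lora_alpha=16,      # assumption
    lora_dropout=0.05,  # assumption
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()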

📊 Evaluation

  • Accuracy: 100% on a 1000-sample SciQ subset (a scoring sketch follows this list)
  • Eval Loss: ~0.19
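
The evaluation script isn't included with the card; one plausible way to score MCQ accuracy is to greedily generate a short completion and compare its first answer letter to the gold letter. A sketch, assuming prompts formatted as in the training sketch above (mcq_accuracy is a hypothetical helper, not part of the released code):

def mcq_accuracy(model, tokenizer, prompts, gold_letters):
    # Greedily decode a short completion and check whether it starts
    # with the gold answer letter (A/B/C/D).
    correct = 0
    for prompt, gold in zip(prompts, gold_letters):
        inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
        outputs = model.generate(**inputs, max_new_tokens=2, do_sample=False)
        completion = tokenizer.decode(
            outputs[0][inputs["input_ids"].shape[1]:],
            skip_special_tokens=True,
        )
        if completion.strip().upper().startswith(gold):
            correct += 1
    return correct / len(prompts)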

💡 How to Use

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("TechyCode/tinyllama-sciq-lora")
tokenizer = AutoTokenizer.from_pretrained("TechyCode/tinyllama-sciq-lora")

# MCQ-style prompt in the same shape as the training examples.
prompt = (
    "Question: What is the boiling point of water?\n"
    "Choices:\nA. 50°C\nB. 75°C\nC. 90°C\nD. 100°C\nAnswer:"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
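
If the repository hosts only the LoRA adapter rather than a merged checkpoint, the adapter can instead be attached to the 4-bit base model with PEFT. A sketch mirroring the training setup described above:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

# Load the quantized base model, then attach the fine-tuned adapter.
base = AutoModelForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "TechyCode/tinyllama-sciq-lora")
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")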

🔐 License

This model is released under the MIT License.

🙌 Credits

  • Fine-tuned by: Uditanshu Pandey
  • LinkedIn: UditanshuPandey
  • GitHub: UditanshuPandey
  • Based on: TinyLlama/TinyLlama-1.1B-Chat-v1.0
