🧠 dnai-humour-0.5B-instruct
A lightweight, fast, and surprisingly witty instruction-tuned language model fine-tuned on curated OpenAssistant conversations. Built to respond clearly, efficiently, and with a touch of humor — without pretending to be a superintelligence.
🔍 Overview
dnai-humour-0.5B-instruct is a fine-tuned variant of Qwen2.5-0.5B-Instruct, trained using a carefully selected subset of the OpenAssistant v1 dataset.
The focus is instruction following, conversational clarity, low-latency responses, and efficient deployment on modest hardware.
This model is small, fast, and does its job without unnecessary drama.
🎯 Main Capabilities
- 🧾 Instruction following
- 💬 Conversational AI & chatbots
- 🧠 Reasonable reasoning (for 0.5B — let’s stay honest)
- 😄 Light humor & friendly tone
- ⚡ Fast inference and low memory usage
- 🖥️ Suitable for edge devices & low-resource systems
🧠 Model Details
| Item | Description |
|---|---|
| Base Model | Qwen2.5-0.5B-Instruct |
| Model Type | Decoder-only Transformer |
| Parameters | ~0.5 Billion |
| Fine-Tuning Method | Supervised Fine-Tuning (SFT) |
| Frameworks | PyTorch, Hugging Face Transformers, TRL |
| Precision Support | FP16 / INT8 (quantization-friendly) |
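Loading the model with Transformers might look like the sketch below. The repo id is an assumption based on the author name, and the prompt and generation settings are illustrative, not a prescribed configuration:

```python
# Minimal inference sketch with Hugging Face Transformers.
# NOTE: the repo id is a hypothetical guess based on the author name.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DarkNeuronAI/dnai-humour-0.5B-instruct"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # FP16, as listed in the table above
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain overfitting in one paragraph."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
text = tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)
print(text)
```

Since the base model is Qwen2.5-0.5B-Instruct, the tokenizer's built-in chat template should apply; `apply_chat_template` handles the role formatting so you don't hand-roll special tokens.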
📚 Dataset
OpenAssistant v1 (OASST1)
- Source: OpenAssistant Project
- Type: Human-written multi-turn conversations
- Domains:
  - Question answering
  - Reasoning
  - Coding help
  - General knowledge
  - Casual chat
🔢 Data Used for Fine-Tuning
- Subset Size: ~15,000 conversations (smallest curated split)
- Selection Goal:
  - High-quality instruction-response pairs
  - Reduced noise
  - Faster convergence
  - Better alignment per token
Less data, more discipline.
⚡ Performance & Efficiency
- 🚀 Fast inference thanks to its small parameter count
- 🧠 Low VRAM usage (runs comfortably on consumer GPUs)
- 📦 Easy to deploy on:
  - Google Colab
  - Lightning AI
  - Local machines
  - Edge setups
This model won’t melt your GPU or your patience.
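For the tightest memory budgets, INT8 loading via bitsandbytes is one option. A sketch, assuming the repo id below and that `bitsandbytes` is installed:

```python
# Sketch of 8-bit quantized loading for low-VRAM setups.
# NOTE: the repo id is a hypothetical guess; bitsandbytes is a separate install.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "DarkNeuronAI/dnai-humour-0.5B-instruct"  # hypothetical repo id
quant_config = BitsAndBytesConfig(load_in_8bit=True)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,  # weights quantized to INT8 at load time
    device_map="auto",
)
```

At ~0.5B parameters the FP16 weights are already around 1 GB, so INT8 mainly matters on very constrained edge devices.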
😄 Personality & Humor
- Polite, friendly, and occasionally funny
- Avoids being robotic when possible
- Doesn’t fake confidence about things it doesn’t know
- Knows when to explain and when to shut up
Basically: helpful, not annoying.
🚫 Limitations
- Not designed for:
  - Medical or legal advice
  - High-stakes reasoning
  - Large-context document analysis
- Still a 0.5B model — expectations should match reality
Small brain, well-trained.
🛠️ Intended Use Cases
- Educational chatbots
- Personal AI assistants
- Instruction-based tools
- Lightweight LLM experiments
- Fine-tuning & research demos
📜 License & Ethics
- Base model and dataset licenses apply
- Trained on publicly available, human-generated data
- No intentional harmful or restricted content
Use responsibly. Don’t blame the model for human mistakes.
🧪 Training Note
This model was fine-tuned using a minimal but high-quality dataset to balance performance and efficiency.
The goal was alignment per token, not brute-force scaling.
Quality > Quantity.
👤 Author
Fine-tuned by DarkNeuronAI
Built by a student. Powered by curiosity.
Optimized because resources are expensive.
⭐ Final Words
If you need a small, fast, instruction-following model that doesn’t pretend to be GPT-4 — this one knows its place and fills it well.