---
license: mit
tags:
- medical
- dental
- lora
- qwen
- instruction-tuning
- unsloth
- adapter
- peft
- transformers
---

# 🦷 doctor-dental-implant-LoRA-Qwen2.5-7B-Instruct

This is a **LoRA adapter** fine-tuned with [Unsloth](https://github.com/unslothai/unsloth) on a domain-specific dataset that combines:

- **Realistic doctor–patient conversations**
- **Dental implant Q&A** extracted from Straumann® technical manuals

> 🔬 Designed to make Qwen2.5-7B-Instruct capable of answering both general health questions and dental-specific scenarios.

---

## 🧠 Base Model

- **Base**: [`Qwen/Qwen2.5-VL-7B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct)
- **Adapter**: LoRA (PEFT-based)

This repo contains only the **LoRA adapter weights**, not the full model.

---

## 🗂 Files

| File | Purpose |
|------|---------|
| `adapter_model.safetensors` | LoRA weight file (for PEFT loading) |
| `adapter_config.json` | LoRA hyperparameter configuration |
| `tokenizer.json`, `vocab.json`, `merges.txt` | Tokenizer (shared with base model) |

---

## 📦 How to Use

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# Load the base model and tokenizer
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-VL-7B-Instruct", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(
    "Qwen/Qwen2.5-VL-7B-Instruct", trust_remote_code=True
)

# Attach the LoRA adapter weights from this repo
model = PeftModel.from_pretrained(
    base, "BirdieByte1024/doctor-dental-implant-LoRA-Qwen2.5-7B-Instruct"
)
```
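For background on what `adapter_model.safetensors` actually stores: LoRA keeps the base weight `W` frozen and learns a pair of low-rank matrices per target layer, so the effective weight becomes `W + (alpha / r) * B @ A`. The sketch below illustrates this with toy NumPy matrices; the names `W`, `A`, `B`, `alpha`, and `r`, and all the shapes, are illustrative assumptions, not values read from this repo's `adapter_config.json`:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions; real adapters typically use larger layers and r in the 8-64 range.
d_out, d_in, r, alpha = 8, 8, 2, 16

W = rng.normal(size=(d_out, d_in))     # frozen base weight
A = rng.normal(size=(r, d_in)) * 0.01  # LoRA "down" projection (random init)
B = np.zeros((d_out, r))               # LoRA "up" projection (zero init)

# At initialization B @ A is zero, so attaching the adapter is a no-op.
delta = (alpha / r) * (B @ A)
assert np.allclose(W + delta, W)

# After training B is nonzero, so the merged weight differs from the base.
B = rng.normal(size=(d_out, r)) * 0.01
W_merged = W + (alpha / r) * (B @ A)
```

This is also why the adapter file is small relative to the 7B base model: only the `A` and `B` matrices for the targeted layers are saved, and PEFT recombines them with the frozen base weights at load time.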