---
language: en
license: other
library_name: peft
pipeline_tag: text-generation
base_model: meta-llama/Meta-Llama-3-8B-Instruct
datasets:
- openlifescienceai/medmcqa
tags:
- lora
- qlora
- peft
- unsloth
- medmcqa
- medical
- instruction-tuning
- llama
metrics:
- accuracy
---

# MedMCQA LoRA — Meta-Llama-3-8B-Instruct

**Adapter weights only** for `meta-llama/Meta-Llama-3-8B-Instruct`, fine-tuned to answer **medical multiple-choice questions (A/B/C/D)**.
Subjects used for fine-tuning and evaluation: **Biochemistry** and **Physiology**.

> Educational use only. Not medical advice.

> **Access note:** the Llama 3 base model is **publicly gated** on the Hugging Face Hub.
> Accept its license on the base model page and use a **fine-grained token** that can read **public gated repos**.
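
If you prefer to log in programmatically rather than export an environment variable, a minimal sketch (the token value is a placeholder):

```python
# Sketch: authenticate once so downloads from the gated base repo succeed.
# Use a fine-grained token that can read public gated repos.
from huggingface_hub import login

login(token="hf_...")  # placeholder; alternatively set HUGGINGFACE_HUB_TOKEN
```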

## Quick use (Transformers + PEFT)
```python
import os, re
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

BASE = "meta-llama/Meta-Llama-3-8B-Instruct"
ADAPTER = "Pk3112/medmcqa-lora-llama3-8b-instruct"
hf_token = os.getenv("HUGGINGFACE_HUB_TOKEN")  # required if not logged in

# Load the base model, then attach the LoRA adapter on top of it.
tok = AutoTokenizer.from_pretrained(BASE, use_fast=True, token=hf_token)
base = AutoModelForCausalLM.from_pretrained(BASE, device_map="auto", token=hf_token)
model = PeftModel.from_pretrained(base, ADAPTER, token=hf_token).eval()

prompt = (
    "Question: Which vitamin is absorbed in the ileum?\n"
    "A. Vitamin D\nB. Vitamin B12\nC. Iron\nD. Fat\n\n"
    "Answer:"
)
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=8, do_sample=False)  # greedy decoding
text = tok.decode(out[0], skip_special_tokens=True)

# Extract the predicted option letter from the generated text.
m = re.search(r"Answer:\s*([A-D])\b", text)
print(f"Answer: {m.group(1)}" if m else text.strip())
```

*Tip:* For richer explanations, increase `max_new_tokens`. For answer-only use, keep it small and stop generation as soon as the letter appears to cut latency (see the sketch below).
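
A minimal early-stopping sketch with a custom `StoppingCriteria`, reusing `tok`, `model`, and `inputs` from the snippet above; the class name and regex are illustrative, not part of this repo:

```python
import re
from transformers import StoppingCriteria, StoppingCriteriaList

class StopOnAnswerLetter(StoppingCriteria):
    """Stop once the newly generated text contains a standalone A-D letter."""
    def __init__(self, tokenizer, prompt_len):
        self.tokenizer = tokenizer
        self.prompt_len = prompt_len

    def __call__(self, input_ids, scores, **kwargs):
        # Decode only the tokens generated after the prompt.
        new_text = self.tokenizer.decode(input_ids[0, self.prompt_len:])
        return re.search(r"\b[A-D]\b", new_text) is not None

prompt_len = inputs["input_ids"].shape[1]
out = model.generate(
    **inputs,
    max_new_tokens=8,
    do_sample=False,
    stopping_criteria=StoppingCriteriaList([StopOnAnswerLetter(tok, prompt_len)]),
)
```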

## Results (Biochemistry + Physiology)

| Model | Internal val acc (%) | Original val acc (%) | TTFT (ms) | Gen time (ms) | In/Out tokens |
|---|---:|---:|---:|---:|---:|
| **Llama-3-8B (LoRA)** | **83.83** | **65.20** | 567 | 14874 | 148 / 80 |

*Internal val* refers to the held-out portion of the project's stratified 70/30 split; *original val* is the MedMCQA validation split. TTFT is time to first token; in/out tokens are input and output token counts per question.
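
The exact evaluation harness lives in the GitHub repo below; as a rough illustration, accuracy on the original validation split can be estimated like this, assuming the dataset's standard columns (`question`, `opa`–`opd`, `cop` as the 0-based index of the correct option) and the prompt format from the quick-use snippet:

```python
import re
from datasets import load_dataset

# Sketch: greedy-decode each validation question and score the extracted letter.
ds = load_dataset("openlifescienceai/medmcqa", split="validation")
ds = ds.filter(lambda ex: ex["subject_name"] in {"Biochemistry", "Physiology"})

correct = 0
for ex in ds:
    prompt = (
        f"Question: {ex['question']}\n"
        f"A. {ex['opa']}\nB. {ex['opb']}\nC. {ex['opc']}\nD. {ex['opd']}\n\n"
        "Answer:"
    )
    inputs = tok(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=4, do_sample=False)
    gen = tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
    m = re.search(r"\b([A-D])\b", gen)
    if m and "ABCD".index(m.group(1)) == ex["cop"]:  # cop: 0-based correct option
        correct += 1
print(f"accuracy: {correct / len(ds):.4f}")
```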

## Training (summary)
- Frameworks: **Unsloth + PEFT/LoRA** (QLoRA with NF4 quantization)
- LoRA: `r=32, alpha=64, dropout=0.0`; target modules `q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj` (see the config sketch after this list)
- Max sequence length: `768`
- Objective: **answer-only** target (`Answer: <A/B/C/D>`)
- Split: stratified **70/30** on `subject_name` (Biochemistry, Physiology)
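
The hyperparameters above map onto a PEFT `LoraConfig` roughly as follows; a sketch, not the exact training script (which is in the GitHub repo below):

```python
from peft import LoraConfig

# Sketch: the LoRA settings listed above, expressed as a PEFT config.
# bias and task_type are typical defaults assumed here, not stated in the card.
lora_cfg = LoraConfig(
    r=32,
    lora_alpha=64,
    lora_dropout=0.0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)
```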

## Training code & reproducibility
- **GitHub repo:** https://github.com/PranavKumarAV/MedMCQA-Chatbot-Finetune-Medical-AI
- **Release (code snapshot):** https://github.com/PranavKumarAV/MedMCQA-Chatbot-Finetune-Medical-AI/releases/tag/v1.0-medmcqa

## Files provided
- `adapter_model.safetensors`
- `adapter_config.json`

The adapter can also be merged into the base weights for standalone deployment (see the sketch below).
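
A minimal merge sketch using PEFT's `merge_and_unload`, reusing `model` and `tok` from the quick-use snippet; the output path is a placeholder:

```python
# Sketch: fold the LoRA weights into the base model and save a standalone copy.
merged = model.merge_and_unload()
merged.save_pretrained("llama3-8b-medmcqa-merged")  # placeholder path
tok.save_pretrained("llama3-8b-medmcqa-merged")
```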

## License & usage
- **Adapter:** "Other" (adapter weights only); **use requires access to the base model** under the **Meta Llama 3 Community License** (accept it on the base model page)
- **Base model:** `meta-llama/Meta-Llama-3-8B-Instruct` (publicly gated on the Hugging Face Hub)
- **Dataset:** `openlifescienceai/medmcqa`; follow the dataset license
- **Safety:** Educational use only. Not medical advice.