Quantization made by Richard Erkhov.
Apollo-0.5B - EXL2
- Model creator: https://huggingface.co/FreedomIntelligence/
- Original model: https://huggingface.co/FreedomIntelligence/Apollo-0.5B/
Available sizes
| Branch | Bits | Description |
| ------ | ---- | ----------- |
| 8_0 | 8.0 | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| 6_5 | 6.5 | Very similar to 8.0, good tradeoff of size vs performance, recommended. |
| 5_0 | 5.0 | Slightly lower quality vs 6.5, but usable on 8GB cards. |
| 4_25 | 4.25 | GPTQ-equivalent bits per weight, slightly higher quality. |
| 3_5 | 3.5 | Lower quality, only use if you have to. |
Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/FreedomIntelligence_-_Apollo-0.5B-exl2 Apollo-0.5B-6_5
```
With huggingface hub:
```shell
pip3 install huggingface-hub
```
To download a specific branch, use the `--revision` parameter. For example, to download the 6.5 bpw branch:
Linux:
```shell
huggingface-cli download FreedomIntelligence_-_Apollo-0.5B-exl2 --revision 6_5 --local-dir Apollo-0.5B-6_5 --local-dir-use-symlinks False
```
Windows (which apparently doesn't like _ in folders sometimes?):
```shell
huggingface-cli download FreedomIntelligence_-_Apollo-0.5B-exl2 --revision 6_5 --local-dir Apollo-0.5B-6.5 --local-dir-use-symlinks False
```
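The same branch can also be fetched from Python; here is a minimal sketch using `huggingface_hub`'s `snapshot_download`. The repo id and revision are taken from the commands above, and the local directory name is just an example:

```python
# Minimal sketch: download one EXL2 branch via the huggingface_hub Python API.
# repo_id and revision mirror the CLI commands above; local_dir is an example.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="FreedomIntelligence_-_Apollo-0.5B-exl2",
    revision="6_5",                 # branch name = bits-per-weight variant
    local_dir="Apollo-0.5B-6_5",
)
```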
Original model description:
License: apache-2.0
Multilingual Medicine: Model, Dataset, Benchmark, Code
Covering English, Chinese, French, Hindi, Spanish, and Arabic so far
Github • Paper • Demo • ApolloCorpus • XMedBench
中文 | English
Update
- [2024.04.25] MedJamba released; see the repo for training and evaluation code.
- [2024.03.07] Paper released.
- [2024.02.12] ApolloCorpus and XMedBench are published!
- [2024.01.23] Apollo repo is published!
Results
Apollo-0.5B • Apollo-1.8B • Apollo-2B • Apollo-6B • Apollo-7B • Apollo-34B • Apollo-72B
MedJamba
Apollo-0.5B-GGUF • Apollo-2B-GGUF • Apollo-6B-GGUF • Apollo-7B-GGUF
Usage Format
User:{query}\nAssistant:{response}<|endoftext|>
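As a minimal illustration of this prompt layout, here is a hedged sketch using the `transformers` library and the original FreedomIntelligence/Apollo-0.5B checkpoint; the query string is only an example, and an EXL2 loader such as exllamav2 would use the same prompt format:

```python
# Minimal sketch: build a prompt in the documented "User:...\nAssistant:" format
# and generate with the original (unquantized) Apollo-0.5B checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "FreedomIntelligence/Apollo-0.5B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

query = "What are common symptoms of iron deficiency?"  # example query
prompt = f"User:{query}\nAssistant:"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
# Decode only the newly generated tokens after the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```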
Dataset & Evaluation
Dataset: ApolloCorpus
- Zip File
- Data category
  - Pretrain:
    - data item:
      - json_name: {data_source}_{language}_{data_type}.json
      - data_type: medicalBook, medicalGuideline, medicalPaper, medicalWeb (from online forum), medicalWiki, qa (generated qa from text)
      - language: en (English), zh (Chinese), es (Spanish), fr (French), hi (Hindi)
      - data_type==text: list of string
        - `[ "string1", "string2", ... ]`
      - data_type==qa: list of qa pairs (list of string)
        - `[ [ "q1", "a1", "q2", "a2", ... ], ... ]`
  - SFT:
    - json_name: {data_source}_{language}.json
    - data_type: code, general, math, medicalExam, medicalPatient
    - data item: list of qa pairs (list of string)
      - `[ [ "q1", "a1", "q2", "a2", ... ], ... ]`
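As a small illustration of the layout above, here is a minimal sketch for reading one SFT-style file; the file name `medicalExam_en.json` is hypothetical and should be replaced with a file from the downloaded corpus:

```python
# Minimal sketch: read an SFT-style qa file from ApolloCorpus.
# The file name below is hypothetical; adjust it to the file you downloaded.
import json

with open("medicalExam_en.json", encoding="utf-8") as f:
    dialogs = json.load(f)  # list of [q1, a1, q2, a2, ...] lists

for dialog in dialogs[:3]:
    # Items alternate question / answer within each list.
    for q, a in zip(dialog[0::2], dialog[1::2]):
        print("Q:", q)
        print("A:", a)
```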
Evaluation: XMedBench
EN:
- MedQA-USMLE
- MedMCQA
- PubMedQA: Not used in the paper because its results fluctuated too much.
- MMLU-Medical
- Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine
ZH:
- MedQA-MCMLE
- CMB-single: Not used in the paper
- Randomly sampled 2,000 single-answer multiple-choice questions.
- CMMLU-Medical
- Anatomy, Clinical_knowledge, College_medicine, Genetics, Nutrition, Traditional_chinese_medicine, Virology
- CExam: Not used in the paper
- Randomly sampled 2,000 multiple-choice questions.
ES: Head_qa
FR: Frenchmedmcqa
HI: MMLU_HI
- Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine
AR: MMLU_Ara
- Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine
Results reproduction
Waiting for Update
Citation
Please use the following citation if you intend to use our dataset for training or evaluation:
@misc{wang2024apollo,
title={Apollo: Lightweight Multilingual Medical LLMs towards Democratizing Medical AI to 6B People},
author={Xidong Wang and Nuo Chen and Junyin Chen and Yan Hu and Yidong Wang and Xiangbo Wu and Anningzhe Gao and Xiang Wan and Haizhou Li and Benyou Wang},
year={2024},
eprint={2403.03640},
archivePrefix={arXiv},
primaryClass={cs.CL}
}