
Quantization made by Richard Erkhov.

Github

Discord

Request more models

Apollo-0.5B - EXL2

Available sizes

| Branch | Bits | Description |
| ------ | ---- | ----------- |
| 8_0 | 8.0 | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| 6_5 | 6.5 | Very similar to 8.0, good tradeoff of size vs performance, recommended. |
| 5_0 | 5.0 | Slightly lower quality vs 6.5, but usable on 8GB cards. |
| 4_25 | 4.25 | GPTQ equivalent bits per weight, slightly higher quality. |
| 3_5 | 3.5 | Lower quality, only use if you have to. |

Download instructions

With git:

git clone --single-branch --branch 6_5 https://huggingface.co/FreedomIntelligence_-_Apollo-0.5B-exl2 Apollo-0.5B-6_5

With huggingface hub:

pip3 install huggingface-hub

To download a specific branch, use the --revision parameter. For example, to download the 6.5 bpw branch:

Linux:

huggingface-cli download FreedomIntelligence_-_Apollo-0.5B-exl2 --revision 6_5 --local-dir Apollo-0.5B-6_5 --local-dir-use-symlinks False

Windows (which sometimes doesn't accept _ in folder names):

huggingface-cli download FreedomIntelligence_-_Apollo-0.5B-exl2 --revision 6_5 --local-dir Apollo-0.5B-6.5 --local-dir-use-symlinks False
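
Once a branch is downloaded, the weights can be loaded with the ExLlamaV2 Python API. Below is a minimal sketch, assuming the 6.5 bpw branch was saved to Apollo-0.5B-6_5; class and method names follow the exllamav2 example scripts and may vary slightly between library versions.

from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Point the config at the downloaded branch (the path is an assumption based on
# the download commands above).
config = ExLlamaV2Config()
config.model_dir = "Apollo-0.5B-6_5"
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)            # load and split weights across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7
settings.top_p = 0.9

# Apollo's prompt format (see "Usage Format" further down in this card)
prompt = "User:What are common symptoms of influenza?\nAssistant:"
print(generator.generate_simple(prompt, settings, num_tokens=200))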

Original model description:

license: apache-2.0

Multilingual Medicine: Model, Dataset, Benchmark, Code

Covering English, Chinese, French, Hindi, Spanish, and Arabic so far

πŸ‘¨πŸ»β€πŸ’»Github β€’πŸ“ƒ Paper β€’ 🌐 Demo β€’ πŸ€— ApolloCorpus β€’ πŸ€— XMedBench


🌈 Update

  • [2024.04.25] MedJamba released; training and evaluation code are in the repo.
  • [2024.03.07] Paper released.
  • [2024.02.12] ApolloCorpus and XMedBench are published! πŸŽ‰
  • [2024.01.23] Apollo repo is published! πŸŽ‰

Results

πŸ€— Apollo-0.5B β€’ πŸ€— Apollo-1.8B β€’ πŸ€— Apollo-2B β€’ πŸ€— Apollo-6B β€’ πŸ€— Apollo-7B β€’ πŸ€— Apollo-34B β€’ πŸ€— Apollo-72B

πŸ€— MedJamba

πŸ€— Apollo-0.5B-GGUF β€’ πŸ€— Apollo-2B-GGUF β€’ πŸ€— Apollo-6B-GGUF β€’ πŸ€— Apollo-7B-GGUF


Usage Format

User:{query}\nAssistant:{response}<|endoftext|>
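
A minimal sketch of this format with the transformers library, assuming the unquantized FreedomIntelligence/Apollo-0.5B checkpoint linked above (for the EXL2 branches, see the ExLlamaV2 loading sketch earlier in this card):

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "FreedomIntelligence/Apollo-0.5B"   # unquantized checkpoint linked above
# trust_remote_code may or may not be required, depending on the base architecture
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

query = "What are common symptoms of influenza?"
prompt = f"User:{query}\nAssistant:"           # format described above

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))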

Dataset & Evaluation

  • Dataset πŸ€— ApolloCorpus



    • Zip File
    • Data category
      • Pretrain:
        • data item:
          • json_name: {data_source}_{language}_{data_type}.json
          • data_type: medicalBook, medicalGuideline, medicalPaper, medicalWeb (from online forums), medicalWiki
          • language: en (English), zh (Chinese), es (Spanish), fr (French), hi (Hindi)
          • data_type: qa (QA pairs generated from the text)
          • data_type==text: list of strings
            [
              "string1",
              "string2",
              ...
            ]
            
          • data_type==qa: list of QA pairs (each pair is a list of strings)
            [
              [
                "q1",
                "a1",
                "q2",
                "a2",
                ...
              ],
              ...
            ]
            
      • SFT:
        • json_name: {data_source}_{language}.json
        • data_type: code, general, math, medicalExam, medicalPatient
        • data item: list of QA pairs (each a list of strings); see the parsing sketch after the Dataset & Evaluation list below
            [
              [
                "q1",
                "a1",
                "q2",
                "a2",
                ...
              ],
              ...
            ]
          
  • Evaluation πŸ€— XMedBench

    • EN:

      • MedQA-USMLE
      • MedMCQA
      • PubMedQA: not used in the paper because its results fluctuated too much.
      • MMLU-Medical
        • Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine
    • ZH:

      • MedQA-MCMLE
      • CMB-single: Not used in the paper
        • Randomly sampled 2,000 single-answer multiple-choice questions.
      • CMMLU-Medical
        • Anatomy, Clinical_knowledge, College_medicine, Genetics, Nutrition, Traditional_chinese_medicine, Virology
      • CExam: Not used in the paper
        • Randomly sampled 2,000 multiple-choice questions.
    • ES: Head_qa

    • FR: Frenchmedmcqa

    • HI: MMLU_HI

      • Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine
    • AR: MMLU_Ara

      • Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine
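
Both the pretrain qa files and the SFT files store samples as lists of QA pairs, so they can be flattened into the usage format above in the same way. A minimal parsing sketch, assuming a hypothetical file name medicalExam_en.json that follows the SFT naming scheme (how multi-turn pairs are joined is also an assumption):

import json

# medicalExam_en.json is a hypothetical file name following the SFT scheme above
with open("medicalExam_en.json", encoding="utf-8") as f:
    samples = json.load(f)             # list of flat [q1, a1, q2, a2, ...] lists

prompts = []
for pair_list in samples:
    turns = []
    # pairs are stored flat: even indices are questions, odd indices are answers
    for q, a in zip(pair_list[0::2], pair_list[1::2]):
        turns.append(f"User:{q}\nAssistant:{a}<|endoftext|>")
    prompts.append("".join(turns))

print(prompts[0])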

Results reproduction


Waiting for Update

Citation

Please use the following citation if you intend to use our dataset for training or evaluation:

@misc{wang2024apollo,
   title={Apollo: Lightweight Multilingual Medical LLMs towards Democratizing Medical AI to 6B People},
   author={Xidong Wang and Nuo Chen and Junyin Chen and Yan Hu and Yidong Wang and Xiangbo Wu and Anningzhe Gao and Xiang Wan and Haizhou Li and Benyou Wang},
   year={2024},
   eprint={2403.03640},
   archivePrefix={arXiv},
   primaryClass={cs.CL}
}