---
license: llama3.1
base_model:
- meta-llama/Llama-3.1-405B-Instruct
---
# Model Overview
- Model Architecture: Llama-3.1
- Input: Text
- Output: Text
- Supported Hardware Microarchitecture: AMD MI350/MI355
- ROCm: 7.0
- Preferred Operating System(s): Linux
- Inference Engine: vLLM
- Model Optimizer: AMD-Quark
- Weight quantization: OCP MXFP4, Static
- Activation quantization: OCP MXFP4, Dynamic
- KV cache quantization: OCP FP8, Static
- Calibration Dataset: Pile
This model was built with Meta Llama by applying AMD-Quark for MXFP4 quantization.
# Model Quantization
The model was quantized from meta-llama/Llama-3.1-405B-Instruct using AMD-Quark. Weights and activations were quantized to MXFP4, and KV caches were quantized to FP8. The AutoSmoothQuant algorithm was applied to enhance accuracy during quantization.
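For intuition: OCP MXFP4 stores FP4 (E2M1) elements in blocks of 32 that share a single power-of-two (E8M0) scale, which is why the script below passes `--group_size 32`; AutoSmoothQuant, in the spirit of SmoothQuant, rescales channels to migrate quantization difficulty from activations into weights before rounding. The toy NumPy sketch below illustrates block-scaled FP4 rounding only; it is not AMD-Quark's implementation, and the scale-selection rule is a simplification:

```python
import numpy as np

# Representable E2M1 (FP4) magnitudes in the OCP MX spec.
FP4_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def mxfp4_quantize_block(x: np.ndarray) -> np.ndarray:
    """Fake-quantize one 32-element block to block-scaled FP4 (illustration only)."""
    assert x.size == 32, "MXFP4 shares one scale per 32-element block"
    amax = np.abs(x).max()
    if amax == 0.0:
        return np.zeros_like(x)
    # Shared E8M0 scale: a power of two chosen so the block maps into [-6, 6].
    scale = 2.0 ** np.floor(np.log2(amax / FP4_GRID[-1]))
    # Round each scaled magnitude to the nearest representable FP4 value.
    mags = np.abs(x) / scale
    idx = np.abs(mags[:, None] - FP4_GRID[None, :]).argmin(axis=1)
    return np.sign(x) * FP4_GRID[idx] * scale

block = np.random.randn(32).astype(np.float32)
print(mxfp4_quantize_block(block))
```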
Quantization scripts:
```sh
cd Quark/examples/torch/language_modeling/llm_ptq/
python3 quantize_quark.py --model_dir "meta-llama/Llama-3.1-405B-Instruct" \
    --model_attn_implementation "sdpa" \
    --quant_scheme w_mxfp4_a_mxfp4 \
    --group_size 32 \
    --kv_cache_dtype fp8 \
    --quant_algo autosmoothquant \
    --min_kv_scale 1.0 \
    --model_export hf_format \
    --output_dir amd/Llama-3.1-405B-Instruct-MXFP4 \
    --multi_gpu
```
# Deployment
## Use with vLLM
This model can be deployed efficiently using the vLLM backend.
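As a minimal offline-inference sketch (the prompt and sampling settings are hypothetical; the parallelism and KV-cache settings mirror the evaluation commands below, and MXFP4 support assumes a sufficiently recent vLLM build on ROCm):

```python
from vllm import LLM, SamplingParams

# Load the quantized checkpoint. tensor_parallel_size=8 and
# kv_cache_dtype="fp8" match the evaluation settings below.
llm = LLM(
    model="amd/Llama-3.1-405B-Instruct-MXFP4-Preview",
    tensor_parallel_size=8,
    kv_cache_dtype="fp8",
    gpu_memory_utilization=0.85,
)

params = SamplingParams(temperature=0.6, top_p=0.9, max_tokens=256)

# llm.chat() applies the model's chat template to the messages.
messages = [{"role": "user", "content": "Summarize MXFP4 quantization in two sentences."}]
outputs = llm.chat(messages, params)
print(outputs[0].outputs[0].text)
```

The model can equivalently be served through vLLM's OpenAI-compatible endpoint with `vllm serve`.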
# Evaluation
The model was evaluated on MMLU and GSM8K_COT. Evaluation was conducted using the lm-evaluation-harness framework with the vLLM engine.
## Accuracy
| Benchmark | Llama-3.1-405B-Instruct | Llama-3.1-405B-Instruct-MXFP4 (this model) | Recovery |
|---|---|---|---|
| MMLU (5-shot) | 87.63 | 86.62 | 98.85% |
| GSM8K_COT (8-shot, strict-match) | 96.51 | 96.06 | 99.53% |
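Recovery is the quantized model's score expressed as a percentage of the baseline score; for MMLU, 86.62 / 87.63 ≈ 98.85%.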
## Reproduction
The results were obtained using the following commands:
### MMLU
```sh
lm_eval \
    --model vllm \
    --model_args pretrained="amd/Llama-3.1-405B-Instruct-MXFP4-Preview",gpu_memory_utilization=0.85,tensor_parallel_size=8,kv_cache_dtype='fp8' \
    --tasks mmlu_llama \
    --fewshot_as_multiturn \
    --apply_chat_template \
    --num_fewshot 5 \
    --batch_size auto
```
### GSM8K_COT
```sh
lm_eval \
    --model vllm \
    --model_args pretrained="amd/Llama-3.1-405B-Instruct-MXFP4-Preview",gpu_memory_utilization=0.85,tensor_parallel_size=8,kv_cache_dtype='fp8' \
    --tasks gsm8k_llama \
    --fewshot_as_multiturn \
    --apply_chat_template \
    --num_fewshot 8 \
    --batch_size auto
```
# License
Modifications copyright (c) 2024 Advanced Micro Devices, Inc. All rights reserved.