---
language: en
license: apache-2.0
library_name: transformers
---
# SQFT Base Model: sqft-mistral-7b-v0.3-50-base-gptq
- Source Model: mistralai/Mistral-7B-v0.3
- Sparse Method: Wanda
- Sparsity: 50%
- Quantization: GPTQ-INT4
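
For a quick check of the released checkpoint, it can be loaded directly with `transformers` (a GPTQ backend such as `optimum` with `auto-gptq`/`gptqmodel` must be installed). This is a minimal sketch, not part of the official instructions; the Hub repo id below is an assumption based on the model name.

```python
# Minimal loading sketch. The repo id is assumed from the model name;
# requires a GPTQ backend (e.g. optimum + auto-gptq or gptqmodel).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "IntelLabs/sqft-mistral-7b-v0.3-50-base-gptq"  # assumed Hub location

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",          # place the INT4 weights on available GPU(s)
    torch_dtype=torch.float16,  # dtype for non-quantized modules
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```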
## Model Sources
- Repository: https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/SQFT
- Paper:
  - SQFT: Low-cost Model Adaptation in Low-precision Sparse Foundation Models
  - Low-Rank Adapters Meet Neural Architecture Search for LLM Compression
## How to get this model
Refer to the commands in `SQFT/run_command/mistral-7b-v0.3/sparse_quantization.sh` in the repository above to reproduce the sparsification and quantization pipeline.
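
That script is the authoritative recipe (Wanda pruning followed by GPTQ quantization). For orientation only, the GPTQ-INT4 step can be approximated with the `GPTQConfig` API in `transformers`; the pruned-checkpoint path and calibration dataset below are assumptions and may differ from what the script uses.

```python
# Rough sketch of the GPTQ-INT4 quantization step, assuming a Wanda-pruned
# checkpoint already exists at a hypothetical local path. This is not the
# exact SQFT command sequence; see sparse_quantization.sh for that.
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

pruned_ckpt = "path/to/wanda-pruned-mistral-7b-v0.3"  # hypothetical path
tokenizer = AutoTokenizer.from_pretrained(pruned_ckpt)

gptq_config = GPTQConfig(
    bits=4,            # GPTQ-INT4, matching this model card
    dataset="c4",      # assumed calibration set; the SQFT script may differ
    tokenizer=tokenizer,
)

# Quantization runs during from_pretrained when a GPTQConfig is passed.
model = AutoModelForCausalLM.from_pretrained(
    pruned_ckpt,
    quantization_config=gptq_config,
    device_map="auto",
)

model.save_pretrained("sqft-mistral-7b-v0.3-50-base-gptq")
tokenizer.save_pretrained("sqft-mistral-7b-v0.3-50-base-gptq")
```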
## Citation
```bibtex
@inproceedings{munoz-etal-2024-sqft,
    title = "{SQFT}: Low-cost Model Adaptation in Low-precision Sparse Foundation Models",
    author = "Munoz, Juan Pablo and
      Yuan, Jinjie and
      Jain, Nilesh",
    editor = "Al-Onaizan, Yaser and
      Bansal, Mohit and
      Chen, Yun-Nung",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2024",
    month = nov,
    year = "2024",
    address = "Miami, Florida, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.findings-emnlp.749",
    pages = "12817--12832",
}
```
## Acknowledgement

Thanks to the authors of the Wanda sparsification algorithm and the GPTQ quantization method, on which this model builds.
## License
Apache-2.0