---
base_model: deepseek-ai/DeepSeek-Prover-V2-7B
---

[EXL3](https://github.com/turboderp-org/exllamav3) quantization of [DeepSeek-Prover-V2-7B](https://huggingface.co/deepseek-ai/DeepSeek-Prover-V2-7B), 8 bits per weight, including output layers.