---
base_model: deepseek-ai/DeepSeek-Prover-V2-7B
---

EXL3 quantization of DeepSeek-Prover-V2-7B, 8 bits per weight, including output layers.