# Llamacpp imatrix Quantizations of Qwen2.5-14B-CIC-ACLARC

Using llama.cpp release b3772 for quantization.

Original model: https://huggingface.co/sknow-lab/Qwen2.5-14B-CIC-ACLARC

## Prompt format

```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
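A minimal sketch (plain Python, no external dependencies) of filling in this ChatML-style template before passing the string to a llama.cpp runtime; the helper name and the example prompts are illustrative, not part of the model card, and llama.cpp can also apply the chat template automatically for qwen2 models:

```python
def build_prompt(system_prompt: str, prompt: str) -> str:
    """Substitute the placeholders in the ChatML-style template above.

    The returned string ends after the assistant header, so the model
    continues generation as the assistant turn.
    """
    return (
        f"<|im_start|>system\n{system_prompt}<|im_end|>\n"
        f"<|im_start|>user\n{prompt}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )


# Hypothetical usage for the citation-intent task this model was tuned for:
text = build_prompt(
    "You are a citation intent classifier.",
    "Classify the intent of the citation in the following sentence: ...",
)
print(text)
```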

## Citation

```bibtex
@misc{koloveas2025llmspredictcitationintent,
      title={Can LLMs Predict Citation Intent? An Experimental Analysis of In-context Learning and Fine-tuning on Open LLMs},
      author={Paris Koloveas and Serafeim Chatzopoulos and Thanasis Vergoulis and Christos Tryfonopoulos},
      year={2025},
      eprint={2502.14561},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2502.14561},
}
```
## Model details

- Format: GGUF (8-bit and 16-bit quantizations)
- Model size: 14.8B params
- Architecture: qwen2
- Base model: Qwen/Qwen2.5-14B