Omartificial-Intelligence-Space/ALLaM-7B-Instruct-preview-Q8_0-GGUF

This model was converted to GGUF format from ALLaM-AI/ALLaM-7B-Instruct-preview using llama.cpp.

Base model: ALLaM-AI/ALLaM-7B-Instruct-preview

ALLaM is a series of powerful language models designed to advance Arabic Language Technology (ALT), developed by the National Center for Artificial Intelligence (NCAI) at the Saudi Data and AI Authority (SDAIA). ALLaM-AI/ALLaM-7B-Instruct-preview is trained from scratch. The pretraining-from-scratch recipe consists of two steps: training on 4T English tokens, followed by training on 1.2T mixed Arabic/English tokens. This preserves the model's English capabilities without catastrophic forgetting, effectively transferring knowledge from one language distribution to another.

Example Usage

Use with llama.cpp

Install llama.cpp via brew (works on macOS and Linux):

brew install llama.cpp

Invoke the llama.cpp server or the CLI.

CLI:

llama-cli --hf-repo Omartificial-Intelligence-Space/ALLaM-7B-Instruct-preview-Q8_0-GGUF --hf-file allam-7b-instruct-preview-q8_0.gguf -p "The meaning of life and the universe is"
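For an interactive chat session with the model's built-in chat template applied, llama-cli also supports conversation mode. A minimal sketch; the sampling and context settings below are illustrative, not recommendations:

llama-cli --hf-repo Omartificial-Intelligence-Space/ALLaM-7B-Instruct-preview-Q8_0-GGUF --hf-file allam-7b-instruct-preview-q8_0.gguf -cnv --temp 0.7 -c 4096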

Server:

llama-server --hf-repo Omartificial-Intelligence-Space/ALLaM-7B-Instruct-preview-Q8_0-GGUF --hf-file allam-7b-instruct-preview-q8_0.gguf -c 2048
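Once the server is running (it listens on http://localhost:8080 by default), you can query its OpenAI-compatible chat endpoint. A minimal sketch; the prompt and temperature are illustrative:

curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "What is the capital of Saudi Arabia?"}],
    "temperature": 0.7
  }'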

Note: You can also use this checkpoint directly by following the usage steps in the llama.cpp repository.

Step 1: Clone llama.cpp from GitHub.

git clone https://github.com/ggerganov/llama.cpp

Step 2: Move into the llama.cpp folder and build it with the LLAMA_CURL=1 flag, along with any hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux).

cd llama.cpp && LLAMA_CURL=1 make
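For example, a CUDA-enabled build on Linux under the same Makefile workflow shown above (note that newer llama.cpp releases have moved to CMake, where the equivalent option is GGML_CUDA=ON):

cd llama.cpp && LLAMA_CURL=1 LLAMA_CUDA=1 make -j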

Step 3: Run inference through the main binary.

./llama-cli --hf-repo Omartificial-Intelligence-Space/ALLaM-7B-Instruct-preview-Q8_0-GGUF --hf-file allam-7b-instruct-preview-q8_0.gguf -p "The meaning of life and the universe is"

or

./llama-server --hf-repo Omartificial-Intelligence-Space/ALLaM-7B-Instruct-preview-Q8_0-GGUF --hf-file allam-7b-instruct-preview-q8_0.gguf -c 2048
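Alternatively, you can download the GGUF file once and run against the local copy. This sketch assumes huggingface-cli is installed (pip install -U huggingface_hub) and uses the file name from this repo:

huggingface-cli download Omartificial-Intelligence-Space/ALLaM-7B-Instruct-preview-Q8_0-GGUF allam-7b-instruct-preview-q8_0.gguf --local-dir .

./llama-cli -m allam-7b-instruct-preview-q8_0.gguf -p "The meaning of life and the universe is"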

Ethical Considerations and Limitations

ALLaM is a generative model and therefore carries inherent uncertainty. Testing cannot cover every possible use case, so the model's responses cannot be predicted in every context and may occasionally be incorrect or biased. Developers must conduct thorough safety evaluations and make model-specific adjustments to ensure it is suitable for their intended purposes.

The output generated by this model does not constitute an official statement of NCAI, SDAIA, or any other organization.

Citation

If you find this work helpful or use any part of it, please cite it as follows:

@inproceedings{bari2025allam,
  title={{ALL}aM: Large Language Models for Arabic and English},
  author={M Saiful Bari and Yazeed Alnumay and Norah A. Alzahrani and Nouf M. Alotaibi and Hisham Abdullah Alyahya and Sultan AlRashed and Faisal Abdulrahman Mirza and Shaykhah Z. Alsubaie and Hassan A. Alahmed and Ghadah Alabduljabbar and Raghad Alkhathran and Yousef Almushayqih and Raneem Alnajim and Salman Alsubaihi and Maryam Al Mansour and Saad Amin Hassan and Dr. Majed Alrubaian and Ali Alammari and Zaki Alawami and Abdulmohsen Al-Thubaity and Ahmed Abdelali and Jeril Kuriakose and Abdalghani Abujabal and Nora Al-Twairesh and Areeb Alowisheq and Haidar Khan},
  booktitle={The Thirteenth International Conference on Learning Representations},
  year={2025},
  url={https://openreview.net/forum?id=MscdsFVZrN}
}