Disclaimer
This model is currently in the research phase and is not a final, production-ready version. It is undergoing continuous testing, validation, and refinement. Performance, accuracy, and stability may vary, and results should not be considered definitive. Use it at your own discretion: I do not guarantee the reliability of its outputs, and any implementation or decision-making based on this model should be done with caution. Future updates may introduce significant changes or improvements. For inquiries or feedback, please reach out to me on LinkedIn.
Usage:
Install Ollama, then pull and run the model:

```shell
ollama run hf.co/adeebaldkheel/ALLaM-7B-Instruct-preview-reasoning
```
The model can then be queried through Ollama's OpenAI-compatible endpoint:

```python
from openai import OpenAI

# Point the OpenAI client at the local Ollama server.
client = OpenAI(
    base_url='http://localhost:11434/v1/',
    api_key='ollama',  # required by the client, but ignored by Ollama
)

# Create a chat completion request to the Ollama server.
response = client.chat.completions.create(
    model="hf.co/adeebaldkheel/ALLaM-7B-Instruct-preview-reasoning:latest",  # the model pulled above
    messages=[
        # System prompt (Arabic): "Answer in the following format:
        # <think> solution steps here </think>
        # <answer> final answer here </answer>"
        {"role": "system", "content": """أجب بالصيغة التالية:
<think>
خطوات ايجاد الحل هنا
</think>
<answer>
الإجابة النهائية هنا
</answer>"""},
        # User prompt (Arabic): "Find the value of x in the following equation: 2x + 3 = 7"
        {"role": "user", "content": """أوجد قيمة x في المعادلة التالية:
2x + 3 = 7"""}
    ],
    max_tokens=4096,
    temperature=0.4,
    top_p=0.95,
)

# Print the model's reply.
print(response.choices[0].message.content)

# Example output (Arabic): the reasoning subtracts 3 from both sides (2x = 4),
# then divides both sides by 2 (x = 2); the answer is "the value of x is 2."
"""
<think>
لحل المعادلة، نتبع الخطوات التالية:
1. نطرح 3 من كلا الجانبين للتخلص من العدد الثابت على الجانب الأيسر:
2x + 3 - 3 = 7 - 3
2x = 4
2. نقسم كلا الجانبين على 2 لحل x:
(2x)/2 = 4/2
x = 2
</think>
<answer>
قيمة x هي 2.
</answer>
"""
```
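Because the system prompt asks the model to wrap its reasoning in `<think>…</think>` and its final result in `<answer>…</answer>`, the two parts can be separated after the fact. A minimal sketch using the standard-library `re` module (the helper name `split_reasoning` and the English stand-in reply are illustrative, not part of the model's API):

```python
import re

def split_reasoning(text: str):
    """Split a model reply into (reasoning, answer) using the
    <think>/<answer> tags requested in the system prompt.
    A part whose tag is missing comes back as None."""
    think = re.search(r"<think>(.*?)</think>", text, re.DOTALL)
    answer = re.search(r"<answer>(.*?)</answer>", text, re.DOTALL)
    return (
        think.group(1).strip() if think else None,
        answer.group(1).strip() if answer else None,
    )

# Example with an English stand-in for the model's reply:
reasoning, answer = split_reasoning(
    "<think>\nSubtract 3, then divide by 2.\n</think>\n<answer>\nx = 2\n</answer>"
)
print(answer)  # x = 2
```

This lets an application show only the final answer to the user while logging the chain of thought separately.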