prince-canuma committed
Commit dcf4c48 · verified · 1 Parent(s): 5559bde

Create README.md

Files changed (1): README.md (+36 −0)
README.md ADDED
---
license: mit
library_name: mlx
base_model: deepseek-ai/DeepSeek-v3.1-Base
tags:
- mlx
pipeline_tag: text-generation
---

# mlx-community/DeepSeek-V3.1-Base-4bit

This model [mlx-community/DeepSeek-V3.1-Base-4bit](https://huggingface.co/mlx-community/DeepSeek-V3.1-Base-4bit) was
converted to MLX format from [deepseek-ai/DeepSeek-v3.1-Base](https://huggingface.co/deepseek-ai/DeepSeek-v3.1-Base)
using mlx-lm version **0.26.3**.
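For reference, quantized conversions like this one are produced with mlx-lm's `convert` helper. The exact settings used for this checkpoint are not recorded in the card, so treat the flags below as an assumption (4-bit quantization, matching the "-4bit" suffix, with the default group size):

```python
from mlx_lm import convert

# Download the base weights, quantize to 4 bits, and write an MLX-format
# copy locally. q_bits=4 is inferred from the repo name, not recorded here.
convert(
    "deepseek-ai/DeepSeek-v3.1-Base",
    mlx_path="DeepSeek-V3.1-Base-4bit",
    quantize=True,
    q_bits=4,
)
```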
## Use with mlx

```bash
pip install mlx-lm
```
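Since the card pins the mlx-lm version used for conversion (**0.26.3**), it can be worth confirming what is installed before debugging any loading issues. A minimal check using only the standard library:

```python
from importlib.metadata import version

# Print the installed mlx-lm version; this model was converted with 0.26.3,
# so a much older install may not load it correctly.
print(version("mlx-lm"))
```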
Then, to generate text from Python:

```python
from mlx_lm import load, generate

# Load the 4-bit weights and tokenizer from the Hub.
model, tokenizer = load("mlx-community/DeepSeek-V3.1-Base-4bit")

prompt = "hello"

# Base models typically ship without a chat template; when one is present,
# format the prompt as a chat turn before generating.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
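For longer outputs you may prefer to stream tokens as they are generated. This is a minimal sketch, assuming the `stream_generate` helper exported by recent mlx-lm releases, whose yielded chunks expose the decoded text as a `.text` field; `max_tokens=256` is an arbitrary illustrative cap:

```python
from mlx_lm import load, stream_generate

model, tokenizer = load("mlx-community/DeepSeek-V3.1-Base-4bit")

# Print each chunk of text as soon as it is decoded.
for chunk in stream_generate(model, tokenizer, prompt="hello", max_tokens=256):
    print(chunk.text, end="", flush=True)
print()
```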