Commit ca8dbf4 (verified) committed by awni · 1 Parent(s): 6fe25ea

Add files using upload-large-folder tool

README.md ADDED
@@ -0,0 +1,37 @@
+ ---
+ library_name: mlx
+ license: apache-2.0
+ license_link: https://huggingface.co/Qwen/Qwen3-Coder-480B-A35B-Instruct/blob/main/LICENSE
+ pipeline_tag: text-generation
+ base_model: Qwen/Qwen3-Coder-480B-A35B-Instruct
+ tags:
+ - mlx
+ ---
+
+ # mlx-community/Qwen3-Coder-480B-A35B-Instruct-4bit
+
+ This model [mlx-community/Qwen3-Coder-480B-A35B-Instruct-4bit](https://huggingface.co/mlx-community/Qwen3-Coder-480B-A35B-Instruct-4bit) was
+ converted to MLX format from [Qwen/Qwen3-Coder-480B-A35B-Instruct](https://huggingface.co/Qwen/Qwen3-Coder-480B-A35B-Instruct)
+ using mlx-lm version **0.26.0**.
+
+ ## Use with mlx
+
+ ```bash
+ pip install mlx-lm
+ ```
+
+ ```python
+ from mlx_lm import load, generate
+
+ model, tokenizer = load("mlx-community/Qwen3-Coder-480B-A35B-Instruct-4bit")
+
+ prompt = "hello"
+
+ if tokenizer.chat_template is not None:
+     messages = [{"role": "user", "content": prompt}]
+     prompt = tokenizer.apply_chat_template(
+         messages, add_generation_prompt=True
+     )
+
+ response = generate(model, tokenizer, prompt=prompt, verbose=True)
+ ```
model-00011-of-00062.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:89df3f49970c9c6a87b17fd27fc4f903099cd34dcfca9479770369f376248592
+ size 4339326676
model-00029-of-00062.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bebb05172e4315664d5f40fc659c697b76910d622c3c8b14b27fc1775cfb066e
+ size 4339326690
model-00050-of-00062.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d136dce2ada0c7b4f661f898b45b5251f0b930a7997df9f28ad34c804c3a28ef
+ size 4339326682
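
The `*.safetensors` entries above are not the weights themselves: each is a tiny Git LFS pointer file recording the spec version, a `sha256` object id, and the byte size of the real blob held in LFS storage. A minimal sketch of reading such a pointer — the `parse_lfs_pointer` helper is illustrative only, not part of git-lfs or mlx-lm:

```python
def parse_lfs_pointer(text: str) -> dict:
    # Each pointer line is "key value"; split on the first space only,
    # since the oid value itself contains a colon ("sha256:<digest>").
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    if fields.get("version") != "https://git-lfs.github.com/spec/v1":
        raise ValueError("not a v1 Git LFS pointer")
    algo, _, digest = fields["oid"].partition(":")
    return {"algo": algo, "digest": digest, "size": int(fields["size"])}

# The pointer for model-00050-of-00062.safetensors, copied from the diff above.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:d136dce2ada0c7b4f661f898b45b5251f0b930a7997df9f28ad34c804c3a28ef
size 4339326682
"""
info = parse_lfs_pointer(pointer)
print(info["algo"], info["size"])  # sha256 4339326682
```

Summing the `size` fields across all 62 shards gives the full download size of the 4-bit weights, which is why tools like `upload-large-folder` commit only these stubs to git.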