Update README.md #39
by up2u - opened

README.md CHANGED
@@ -1,39 +1,78 @@
---
language:
- en
license: other
library_name: sglang
pipeline_tag: text-generation
tags:
- grok-2
- xai
- sglang
- inference
- triton
base_model: xai-org/grok-2
model-index:
- name: grok-2
  results: []
---

# Grok 2

This repository contains the weights of Grok 2, a model trained and used at xAI in 2024.

- License: Grok 2 Community License Agreement (./LICENSE)
- Ownership: xAI (no changes to license or weights in this PR)

## Weights

- Download from the Hub (≈500 GB total; 42 files):
```shell
hf download xai-org/grok-2 --local-dir /local/grok-2
```

If you see transient errors, retry until it completes.
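The retry advice can be scripted; a minimal sketch, assuming the `hf` CLI from the step above is on `PATH` (the helper name, attempt count, and delay are illustrative):

```python
import subprocess
import time

def run_with_retry(cmd, attempts=20, delay=10.0):
    """Rerun cmd until it exits 0. `hf download` resumes partially
    fetched files, so each attempt continues where the last stopped."""
    for i in range(1, attempts + 1):
        if subprocess.run(cmd).returncode == 0:
            return True
        if i < attempts:
            print(f"attempt {i} failed; retrying in {delay}s...")
            time.sleep(delay)
    return False

# Uncomment on a machine with the hf CLI installed:
# run_with_retry(["hf", "download", "xai-org/grok-2",
#                 "--local-dir", "/local/grok-2"])
```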

## Hardware and Parallelism

- This checkpoint is configured for TP=8 (tensor parallelism across 8 devices).
- Recommended: 8× GPUs (each with > 40 GB of memory).

## Serving with SGLang (>= v0.5.1)

Install SGLang from https://github.com/sgl-project/sglang/

Launch an inference server:

```shell
python3 -m sglang.launch_server \
  --model /local/grok-2 \
  --tokenizer-path /local/grok-2/tokenizer.tok.json \
  --tp 8 \
  --quantization fp8 \
  --attention-backend triton
```

Send a test request (chat-template aware):

```shell
python3 -m sglang.test.send_one --prompt \
  "Human: What is your name?<|separator|>\n\nAssistant:"
```

You should see the model respond with its name: "Grok".
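Beyond `send_one`, a running SGLang server also accepts plain HTTP requests. A standard-library sketch against the OpenAI-compatible `/v1/completions` endpoint — the host, default port 30000, and sampling parameters are assumptions, not from this repo:

```python
import json
import urllib.request

def build_request(prompt, host="localhost", port=30000):
    """Build (but do not send) a completion request."""
    body = json.dumps({
        "model": "/local/grok-2",     # assumed model path
        "prompt": prompt,
        "max_tokens": 64,             # illustrative sampling params
        "temperature": 0.0,
    }).encode("utf-8")
    return urllib.request.Request(
        f"http://{host}:{port}/v1/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )

# With the server from the previous section running:
# with urllib.request.urlopen(build_request(
#         "Human: What is your name?<|separator|>\n\nAssistant:")) as r:
#     print(json.loads(r.read())["choices"][0]["text"])
```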

More ways to send requests:

- https://docs.sglang.ai/basic_usage/send_request.html
- Note: this is a post-trained model; use the correct chat template:
  https://github.com/sgl-project/sglang/blob/97a38.../tiktoken_tokenizer.py#L106
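For illustration only, the one-turn test prompt above implies a template of roughly this shape — a hypothetical `render_prompt` helper, not the official implementation; defer to the linked `tiktoken_tokenizer.py`:

```python
def render_prompt(turns):
    """turns: list of (role, text) pairs, e.g. [("Human", "Hi")].
    Joins turns with the <|separator|> token and leaves an open
    Assistant: turn for the model to complete (assumed format)."""
    parts = [f"{role}: {text}" for role, text in turns]
    return "<|separator|>\n\n".join(parts) + "<|separator|>\n\nAssistant:"
```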

## Community Usage (Examples)

- Local-only serving behind a VPN/Nginx allowlist
- Log and audit inference (timestamps and SHA-256 manifests)
- Optional cloud fallback to xAI's API when local capacity is unavailable

These are usage patterns only; they do not alter the license or weights.
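The log-and-audit bullet above can be sketched as an append-only JSONL manifest; the file name and field names are illustrative:

```python
import hashlib
import json
import time

def log_inference(prompt, response, path="audit.jsonl"):
    """Append a timestamped entry with SHA-256 digests of one
    prompt/response pair; returns the entry for inspection."""
    entry = {
        "ts": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```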

## Limitations and Safety

- Large memory footprint (multi-GPU serving recommended)
- Follow the Grok 2 Community License
- Redact any sensitive data before inference if routing via cloud services

## License

Weights are licensed under the Grok 2 Community License Agreement (./LICENSE).