up2u committed on
Commit 4b3bfd3 · verified · 1 Parent(s): d60cbe2

Update README.md

Files changed (1)
  1. README.md +77 -21
README.md CHANGED
@@ -1,39 +1,95 @@
# Grok 2

This repository contains the weights of Grok 2, a model trained and used at xAI in 2024.

- ## Usage: Serving with SGLang

- - Download the weights. You can replace `/local/grok-2` with any other folder name you prefer.

- ```
- hf download xai-org/grok-2 --local-dir /local/grok-2
- ```

- You might encounter some errors during the download. Please retry until the download is successful.
- If the download succeeds, the folder should contain **42 files** and be approximately 500 GB.

- - Launch a server.

- Install the latest SGLang inference engine (>= v0.5.1) from https://github.com/sgl-project/sglang/

- Use the command below to launch an inference server. This checkpoint is TP=8, so you will need 8 GPUs (each with > 40GB of memory).
- ```
- python3 -m sglang.launch_server --model /local/grok-2 --tokenizer-path /local/grok-2/tokenizer.tok.json --tp 8 --quantization fp8 --attention-backend triton
- ```

- - Send a request.

- This is a post-trained model, so please use the correct [chat template](https://github.com/sgl-project/sglang/blob/97a38ee85ba62e268bde6388f1bf8edfe2ca9d76/python/sglang/srt/tokenizer/tiktoken_tokenizer.py#L106).

- ```
- python3 -m sglang.test.send_one --prompt "Human: What is your name?<|separator|>\n\nAssistant:"
- ```

- You should be able to see the model output its name, Grok.

- Learn more about other ways to send requests [here](https://docs.sglang.ai/basic_usage/send_request.html).

## License

- The weights are licensed under the [Grok 2 Community License Agreement](https://huggingface.co/xai-org/grok-2/blob/main/LICENSE).
+ ---
+ language:
+ - ar
+ - en
+ license: other
+ library_name: sglang
+ pipeline_tag: text-generation
+ base_model: xai-org/grok-2
+ tags:
+ - agent
+ - finance
+ - code
+ - grok-2
+ - xai
+ - sglang
+ - inference
+ - triton
+ model-index:
+ - name: grok-2
+   results: []
+ ---
+
# Grok 2

This repository contains the weights of Grok 2, a model trained and used at xAI in 2024.

+ - License: Grok 2 Community License Agreement (./LICENSE)
+ - Ownership: xAI (this document does not change license or weights)
+
+ ## Weights
+
+ Download from the Hub (≈500 GB total; 42 files):
+
+ hf download xai-org/grok-2 --local-dir /local/grok-2
+
+ If you see transient errors, retry until it completes. On success, the directory should contain 42 files (~500 GB).
+
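+ As a quick sanity check before serving, the hypothetical snippet below counts the downloaded files and their total size; it only assumes the example path `/local/grok-2` and the 42-file/≈500 GB figures quoted above.
+
+ ```python
+ # verify_download.py -- sanity-check the downloaded checkpoint directory.
+ from pathlib import Path
+
+ ckpt_dir = Path("/local/grok-2")  # example path from this card; adjust if needed
+ files = [p for p in ckpt_dir.rglob("*") if p.is_file()]
+ total_gb = sum(p.stat().st_size for p in files) / 1e9
+
+ print(f"{len(files)} files, {total_gb:.0f} GB total")
+ # Expect roughly 42 files and ~500 GB, per the figures above.
+ ```
+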
+ ## Hardware and Parallelism
+
+ - This checkpoint is configured for TP=8.
+ - Recommended: 8× GPUs (each > 40 GB memory); a quick check sketch follows below.
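+
+ A minimal way to confirm that setup, assuming a PyTorch installation (SGLang pulls one in); the script only reports the visible GPUs and their memory:
+
+ ```python
+ # check_gpus.py -- report visible GPUs and confirm enough memory for TP=8.
+ import torch
+
+ n = torch.cuda.device_count()
+ print(f"visible GPUs: {n}")
+ for i in range(n):
+     props = torch.cuda.get_device_properties(i)
+     print(f"  GPU {i}: {props.name}, {props.total_memory / 1e9:.0f} GB")
+
+ assert n >= 8, "this TP=8 checkpoint expects 8 GPUs"
+ ```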
+
+ ## Serving with SGLang (>= v0.5.1)
+
+ Install SGLang from https://github.com/sgl-project/sglang/
+
+ Launch an inference server:
+
+ python3 -m sglang.launch_server \
+   --model /local/grok-2 \
+   --tokenizer-path /local/grok-2/tokenizer.tok.json \
+   --tp 8 \
+   --quantization fp8 \
+   --attention-backend triton
+
+ Send a test request (chat template aware):
+
+ python3 -m sglang.test.send_one --prompt \
+   "Human: What is your name?<|separator|>\n\nAssistant:"
+
+ You should see the model respond with its name: “Grok”.
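+
+ Beyond `sglang.test.send_one`, you can also query the running server over HTTP. This is a minimal sketch, assuming the server's default port (30000) and SGLang's native `/generate` endpoint; the prompt string mirrors the chat template shown above.
+
+ ```python
+ # send_request.py -- query the local SGLang server over HTTP.
+ # Assumes the default port 30000 and the /generate endpoint.
+ import requests
+
+ prompt = "Human: What is your name?<|separator|>\n\nAssistant:"
+ resp = requests.post(
+     "http://localhost:30000/generate",
+     json={
+         "text": prompt,
+         "sampling_params": {"max_new_tokens": 64, "temperature": 0.0},
+     },
+ )
+ resp.raise_for_status()
+ print(resp.json()["text"])  # should mention the model's name, Grok
+ ```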
+
+ More ways to send requests:
+
+ - https://docs.sglang.ai/basic_usage/send_request.html
+
+ Note: this is a post-trained model; use the correct chat template:
+
+ - https://github.com/sgl-project/sglang/blob/97a38ee85ba62e268bde6388f1bf8edfe2ca9d76/python/sglang/srt/tokenizer/tiktoken_tokenizer.py#L106
+
+ ## Community Usage (Examples)
+
+ - Local-only serving behind a VPN/Nginx allowlist
+ - Log and audit inference (timestamps and SHA-256 manifests); see the manifest sketch after this list
+ - Optional fallback to xAI’s API when local capacity is unavailable
+
+ These examples describe usage patterns only; they do not alter the license or weights.
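+
+ For the audit pattern above, a minimal manifest sketch (hypothetical script; the checkpoint path and output file name are illustrative):
+
+ ```python
+ # make_manifest.py -- write a timestamped SHA-256 manifest of the weight files.
+ import hashlib
+ import json
+ import time
+ from pathlib import Path
+
+ ckpt_dir = Path("/local/grok-2")  # example path from this card
+ manifest = {"created": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()), "files": {}}
+
+ for path in sorted(p for p in ckpt_dir.rglob("*") if p.is_file()):
+     h = hashlib.sha256()
+     with path.open("rb") as f:
+         for chunk in iter(lambda: f.read(1 << 20), b""):
+             h.update(chunk)
+     manifest["files"][str(path.relative_to(ckpt_dir))] = h.hexdigest()
+
+ Path("grok2_manifest.json").write_text(json.dumps(manifest, indent=2))
+ ```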
+
+ ## Limitations and Safety
+
+ - Large memory footprint (multi-GPU recommended)
+ - Follow the Grok 2 Community License
+ - Redact any sensitive data before inference if routing via cloud services
+
## License

+ Weights are licensed under the Grok 2 Community License Agreement (./LICENSE).
+
+ Ready-to-use PR comment (paste into the PR description)
+
+ - Summary: Fix model card metadata (YAML at top), remove duplicated sections, fence all code blocks, and keep license/ownership unchanged.
+ - Scope: README.md only. No weights or license changes.
+ - Rationale: Makes the card copy-paste runnable for SGLang and resolves the Hub's YAML metadata warning.