codemajesty committed
Commit 2f42651 · verified · 1 Parent(s): fe17454

Update README.md

Files changed (1)
  1. README.md +14 -99
README.md CHANGED
@@ -1,99 +1,14 @@
- ---
- language:
- - en
- thumbnail: "https://your-thumbnail-url.com/image.png"
- tags:
- - quantization
- - gemma
- - 4bit
- - causal-lm
- license: "apache-2.0"
- datasets:
- - your-dataset-name
- metrics:
- - perplexity
- base_model: "google/gemma-2b"
- ---
-
- # Gemma 2B Quantized 4-bit
-
- This repository contains a 4-bit quantized version of the [Gemma 2B](https://huggingface.co/google/gemma-2b) model.
-
- ## Files
-
- `config.json`: Model configuration.
- `generation_config.json`: Generation parameters.
- `model.safetensors`: Quantized model weights.
- `special_tokens_map.json`: Special tokens mapping.
- `tokenizer_config.json`: Tokenizer configuration.
- `tokenizer.json`: Tokenizer vocabulary.
-
- ## Usage
-
- You can load this model using the Hugging Face Transformers library:
-
- ```python
- from transformers import AutoModelForCausalLM, AutoTokenizer
-
- model = AutoModelForCausalLM.from_pretrained("your-username/gemma-2b-quantized-4bit")
- tokenizer = AutoTokenizer.from_pretrained("your-username/gemma-2b-quantized-4bit")
- ```
-
- ## License
-
- This model is licensed under the Apache 2.0 License. See [LICENSE](./LICENSE) for details.
-
- ## Credits
-
- - Original model: [Gemma 2B](https://huggingface.co/google/gemma-2b)
- - Quantization: 4-bit
- ```
-
- ```text:LICENSE
- Apache License
- Version 2.0, January 2004
- http://www.apache.org/licenses/
-
- TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
-
- Licensed under the Apache License, Version 2.0 (the "License");
- you may not use this file except in compliance with the License.
- You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
- ```
-
- ```gitignore
- # System files
- .DS_Store
- Thumbs.db
-
- # Python cache
- __pycache__/
- *.pyc
-
- # Jupyter Notebook checkpoints
- .ipynb_checkpoints/
-
- # VSCode settings
- .vscode/
-
- # Large files (remove if you want to include model weights in git)
- *.safetensors
- ```
-
- ---
-
- **What to do next:**
- - Fill in the placeholders in `README.md` (thumbnail, datasets, etc.).
- - If you want to include `model.safetensors` in your GitHub repo, remove `*.safetensors` from `.gitignore`.
- - Initialize git, commit, and push to GitHub.
- - You’re now ready to upload to Hugging Face!
-
- Let me know if you want any further customization or help with the git/Hugging Face steps!

+ ---
+ language:
+ - en
+ library_name: transformers
+ pipeline_tag: text-generation
+ base_model: google/gemma-2b
+ tags:
+ - gemma
+ - quantization
+ - 4bit
+ - text-generation
+ - causal-lm
+ license: apache-2.0
+ ---
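
The new card tags this checkpoint as 4-bit quantized but does not describe the scheme. As a rough, self-contained illustration of what symmetric int4 round-to-nearest quantization does to a weight vector (the function names and the scheme are illustrative only, not the method actually used for this checkpoint):

```python
# Illustrative sketch of symmetric 4-bit quantization. This is NOT the
# method used to produce model.safetensors; it only shows the general
# idea: floats become small integers plus a per-tensor scale.

def quantize_4bit(weights):
    """Map floats to signed 4-bit integers in [-7, 7] plus a scale."""
    # Guard against an all-zero tensor, where the scale would be 0.
    scale = max(abs(w) for w in weights) / 7 or 1.0
    q = [max(-7, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize_4bit(q, scale):
    """Recover approximate floats from the integers and the scale."""
    return [v * scale for v in q]

weights = [0.12, -0.5, 0.33, 0.7]
q, scale = quantize_4bit(weights)
restored = dequantize_4bit(q, scale)
```

With round-to-nearest, the reconstruction error per weight is bounded by half the scale, which is the storage/accuracy trade-off a 4-bit checkpoint makes.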