MLDataScientist committed on
Commit 17613f1 · verified · 1 Parent(s): 1d7f12b

Update README.md

Files changed (1):
  1. README.md +1 -1
README.md CHANGED
@@ -9,7 +9,7 @@ tags:
 This is a 3bit AutoRound GPTQ version of Mistral-Large-Instruct-2407.
 This conversion used model-*.safetensors.
 
-Quantization script (it takes around 520 GB RAM and A40 GPU 40GB around 20 hours to convert):
+Quantization script (it takes around 520 GB RAM and A40 GPU 48GB around 20 hours to convert):
 ```
 from transformers import AutoModelForCausalLM, AutoTokenizer
 import torch
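As a rough sanity check on the ~520 GB RAM figure in the changed line (assuming, as the README does not state it, that Mistral-Large-Instruct-2407 has roughly 123B parameters), holding the full-precision weights in host memory alone accounts for most of that budget:

```python
# Back-of-the-envelope host-memory estimate for quantizing a large model.
# Assumption (not from the README): ~123B parameters for
# Mistral-Large-Instruct-2407.
n_params = 123e9

fp32_gb = n_params * 4 / 1e9   # weights held as 4-byte float32
bf16_gb = n_params * 2 / 1e9   # weights held as 2-byte bfloat16

print(f"fp32 weights: ~{fp32_gb:.0f} GB")  # ~492 GB
print(f"bf16 weights: ~{bf16_gb:.0f} GB")  # ~246 GB
```

With fp32 weights at roughly 492 GB, the quoted ~520 GB leaves headroom for calibration activations and quantization working buffers, which makes the figure plausible.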