cicdatopea committed on
Commit
59f891b
·
verified ·
1 Parent(s): 9ca044c

Update README.md

Files changed (1)
  1. README.md +1 -4
README.md CHANGED
@@ -22,13 +22,10 @@ Please follow the license of the original model.
 
 please note int2 **may be slower** than int4 on CUDA due to kernel issue.
 
-~~~python
 ~~~python
 import transformers
-from transformers import AutoModelForCausalLM, AutoTokenizer
-
-
 import torch
+from transformers import AutoModelForCausalLM, AutoTokenizer
 
 quantized_model_dir = "OPEA/DeepSeek-R1-int2-gptq-sym-inc"