Upload Qwen3ForCausalLM (commit 9da3ea0, verified)

File                              Size        Commit message
-                                 1.57 kB     Upload tokenizer
-                                 9.66 kB     Upload README.md with huggingface_hub
-                                 707 Bytes   Upload tokenizer
-                                 4.12 kB     Upload tokenizer
-                                 4 kB        Upload Qwen3ForCausalLM
-                                 117 Bytes   Upload Qwen3ForCausalLM
-                                 1.67 MB     Upload tokenizer
pytorch_model-00001-of-00003.bin  4.99 GB     Upload Qwen3ForCausalLM
    Detected Pickle imports (16):
    - "torchao.dtypes.affine_quantized_tensor.AffineQuantizedTensor",
    - "torch._utils._rebuild_wrapper_subclass",
    - "torch.int8",
    - "torch.CharStorage",
    - "torchao.quantization.linear_activation_quantized_tensor.LinearActivationQuantizedTensor",
    - "torch.bfloat16",
    - "torch._utils._rebuild_tensor_v2",
    - "collections.OrderedDict",
    - "torch.serialization._get_layout",
    - "torchao.dtypes.uintx.q_dq_layout.QDQTensorImpl",
    - "torchao.dtypes.uintx.q_dq_layout.QDQLayout",
    - "torch.device",
    - "torch._tensor._rebuild_from_type_v2",
    - "torchao.quantization.quant_primitives.ZeroPointDomain",
    - "torch.BFloat16Storage",
    - "torchao.quantization.quant_api._int8_asymm_per_token_quant"
pytorch_model-00002-of-00003.bin  3.85 GB     Upload Qwen3ForCausalLM
    Detected Pickle imports (16):
    - "torchao.dtypes.affine_quantized_tensor.AffineQuantizedTensor",
    - "torch._utils._rebuild_wrapper_subclass",
    - "torch.int8",
    - "torch.CharStorage",
    - "torchao.quantization.linear_activation_quantized_tensor.LinearActivationQuantizedTensor",
    - "torch.bfloat16",
    - "torch.serialization._get_layout",
    - "torch._utils._rebuild_tensor_v2",
    - "torchao.dtypes.uintx.q_dq_layout.QDQTensorImpl",
    - "collections.OrderedDict",
    - "torchao.dtypes.uintx.q_dq_layout.QDQLayout",
    - "torch.device",
    - "torch._tensor._rebuild_from_type_v2",
    - "torchao.quantization.quant_primitives.ZeroPointDomain",
    - "torch.BFloat16Storage",
    - "torchao.quantization.quant_api._int8_asymm_per_token_quant"
    (these classes must be importable when the shards are deserialized; see the loading sketch after this listing)
-                                 1.24 GB     Upload Qwen3ForCausalLM
-                                 32.9 kB     Upload Qwen3ForCausalLM
-                                 616 Bytes   Upload tokenizer
-                                 11.4 MB     Upload tokenizer
-                                 5.41 kB     Upload tokenizer
-                                 2.78 MB     Upload tokenizer
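The pickle scan on the two pytorch_model-*.bin shards lists torchao tensor subclasses (AffineQuantizedTensor, LinearActivationQuantizedTensor, the QDQLayout tensor implementation, and the _int8_asymm_per_token_quant activation quantization function) next to torch.int8 and torch.bfloat16 storages, which suggests the weights were saved in a torchao int8-quantized format rather than as plain bfloat16 tensors. Below is a minimal loading sketch, not taken from this repo's model card: the repo id "org/Qwen3-int8-torchao" is a placeholder, and it assumes transformers, accelerate, and a torchao version compatible with the one used for quantization are installed so that pickle can resolve the classes listed above.

```python
# Minimal loading sketch (placeholder repo id, not this repo's documented usage).
# Requirements assumed: transformers, accelerate, and a compatible torchao install,
# since unpickling the .bin shards fails if the torchao classes cannot be imported.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "org/Qwen3-int8-torchao"  # placeholder: replace with the actual repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # the scan shows bfloat16 storage alongside int8 data
    device_map="auto",           # requires accelerate
)

prompt = "Give me a short introduction to large language models."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Because the shards are pickled .bin files rather than safetensors (the reason the scanner flags their imports at all), loading them executes pickle deserialization, so they should only be loaded from a source you trust.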