SD 3.5-large-tensorrt FP8 is not working

#5
by richdaler04 - opened

When running demo_txt2img_sd35.py with the FP8 engine, I get:

"fp8 quantization only supported for SDXL, SD1.5, SD2.1 and FLUX pipeline"

I also got the same error.

Did you manage to get it running?

The instructions on HuggingFace won't work as written. The TensorRT GitHub repo has been updated; use the 10.13 release instead of 10.11:

https://github.com/NVIDIA/TensorRT/blob/release/10.13/demo/Diffusion/README.md
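
For anyone following along, here is a rough sketch of the steps against the 10.13 branch. The exact argument names (--version, --fp8) and the example prompt are assumptions on my part, so double-check them against the linked README before running:

```sh
# Check out the demo from the 10.13 release branch rather than 10.11
git clone -b release/10.13 --single-branch https://github.com/NVIDIA/TensorRT.git
cd TensorRT/demo/Diffusion
pip install -r requirements.txt

# Run the SD 3.5 text-to-image demo with FP8 quantization enabled.
# NOTE: --version and --fp8 are assumed flag names; confirm against the README above.
python3 demo_txt2img_sd35.py "a photo of Mt. Fuji during cherry blossom season" \
    --version=3.5-large --fp8
```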
