FP8 quantized version of AuraFlow v0.3
All linear weights of the flow transformer were simply cast to torch.float8_e4m3fn, except for t_embedder, final_linear, and modF, which remain in their original precision.
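A minimal sketch of that casting step, assuming `transformer` is the already-loaded AuraFlow flow-transformer module. The excluded names (t_embedder, final_linear, modF) come from the description above; the helper name and loading code are illustrative, not the exact script used for this repo.

```python
import torch

# Modules listed above as excluded from the fp8 cast.
EXCLUDED = ("t_embedder", "final_linear", "modF")

def cast_linear_weights_to_fp8(transformer: torch.nn.Module) -> torch.nn.Module:
    """Cast every nn.Linear weight to float8_e4m3fn except the excluded modules."""
    for name, module in transformer.named_modules():
        if not isinstance(module, torch.nn.Linear):
            continue
        if any(part in EXCLUDED for part in name.split(".")):
            continue  # keep the sensitive modules in their original dtype
        # float8_e4m3fn (requires PyTorch >= 2.1) is a storage dtype:
        # weights are upcast (or consumed by an fp8-aware kernel) at
        # inference time.
        module.weight.data = module.weight.data.to(torch.float8_e4m3fn)
    return transformer
```

Because float8_e4m3fn here is only a storage format, the weights are typically upcast to bfloat16 or float16 when loading for inference.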
Base model: fal/AuraFlow-v0.3