🐷 Pig architecture (GGUF): LLaMA and T5-series encoders, plus CLIP-L and CLIP-G

  • T5 text encoder built on a Google base model
  • LLaMA encoder built on a Meta base model
  • pig architecture from Connector
  • at least 50% faster than the safetensors version
  • saves up to 50% of memory as well; good for older machines
  • compatible with all models, whether safetensors or GGUF
  • tested on pig-1k, 1k-aura, 1k-turbo, cosmos, etc.; works fine
  • upgrade your node for pig🐷 encoder support
  • drag one of the example images below into your browser to load its example workflow
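The memory savings claimed above are easy to estimate from the bit width alone: a 695M-parameter encoder stored at 8 bits per weight needs roughly half the bytes of a 16-bit safetensors checkpoint. A minimal sketch of that arithmetic (real GGUF files are slightly larger because of metadata and per-block quantization scales, which this ignores):

```python
def approx_size_mb(n_params: int, bits_per_weight: float) -> float:
    """Rough on-disk/in-memory size of a weight tensor, ignoring
    format overhead such as GGUF metadata and quantization scales."""
    return n_params * bits_per_weight / 8 / 1024 / 1024

N = 695_000_000  # parameter count from the model card

fp16 = approx_size_mb(N, 16)  # 16-bit safetensors baseline
q8 = approx_size_mb(N, 8)     # 8-bit GGUF quantization
q4 = approx_size_mb(N, 4)     # 4-bit GGUF quantization

print(f"fp16 ≈ {fp16:.0f} MB")
print(f"Q8   ≈ {q8:.0f} MB ({1 - q8 / fp16:.0%} smaller)")
print(f"Q4   ≈ {q4:.0f} MB ({1 - q4 / fp16:.0%} smaller)")
```

The 8-bit file halves the footprint and the 4-bit file quarters it, which is where the "save memory up to 50%" figure comes from relative to fp16; lower bit widths trade additional quality for additional savings.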
Example prompts:
  • close-up portrait of anime pig
  • close-up portrait of pig
Downloads last month: 3,841
Format: GGUF
Model size: 695M params
Architecture: pig

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit, 32-bit
