SolidRusT Networks
Company • Verified
AI & ML interests: Self-hosting, Open Source and AI Unalignment

7B-parameter models, quantized with AWQ.
These models are selected for their compatibility with small 12GB-memory GPUs.
- solidrust/Starling-LM-7B-beta-AWQ (Text Generation)
- solidrust/dolphin-2.8-mistral-7b-v02-AWQ (Text Generation)
- solidrust/Hermes-2-Pro-Mistral-7B-AWQ (Text Generation)
- solidrust/dolphin-2.8-experiment26-7b-AWQ (Text Generation)
Mixture of experts, 2 x 7B.
These models are selected for their compatibility with two small 12GB GPUs, or one medium 24GB GPU.
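The GPU-compatibility claims above follow from simple weight-size arithmetic. Below is a rough sketch of that estimate; the 4-bit weight width and the 1.2x runtime overhead factor are assumptions for illustration (actual usage also depends on context length and KV-cache size), and `quantized_vram_gb` is a hypothetical helper, not part of any library.

```python
def quantized_vram_gb(n_params_billion: float,
                      bits_per_weight: float = 4.0,
                      overhead_factor: float = 1.2) -> float:
    """Rough VRAM estimate: quantized weight bytes plus a runtime overhead margin.

    Assumes 4-bit weights (typical for AWQ/EXL2-style quantization) and a
    1.2x factor for activations and miscellaneous buffers - both assumptions.
    """
    weight_gb = n_params_billion * 1e9 * bits_per_weight / 8 / 1e9
    return weight_gb * overhead_factor

# A single 7B model at ~4 bits: about 4.2 GB, comfortably under 12 GB.
print(round(quantized_vram_gb(7), 1))   # 4.2
# A 2 x 7B mixture of experts: about 8.4 GB, fitting one 24 GB GPU
# or split across two 12 GB GPUs.
print(round(quantized_vram_gb(14), 1))  # 8.4
```

The margin left over on a 12GB card is what the KV cache grows into at inference time, which is why these estimates only sketch a lower bound.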
EXL2 models, with git branches for the various quantization bit widths.
Mixture of experts, 3 x 7B.