Active filters: amd
dahara1/llama-translate-gguf • Updated • 1.11k • 15
dahara1/llama3-8b-amd-npu
dahara1/llama3.1-8b-Instruct-amd-npu
Tech-Meld/gpus-everywhere • Text-to-Image • Updated • 5 • 1
dahara1/ALMA-Ja-V3-amd-npu
dahara1/llama-translate-amd-npu • Translation • Updated • 3
amd/Llama-2-7b-hf-awq-g128-int4-asym-bf16-onnx-ryzen-strix • Text Generation • Updated • 75
amd/Llama2-7b-chat-awq-g128-int4-asym-bf16-onnx-ryzen-strix • Text Generation • Updated • 125
amd/Llama-3-8B-awq-g128-int4-asym-bf16-onnx-ryzen-strix • Text Generation • Updated • 66 • 1
amd/Llama-3.1-8B-awq-g128-int4-asym-bf16-onnx-ryzen-strix • Text Generation • Updated • 70 • 2
amd/Phi-3.5-mini-instruct-awq-g128-int4-asym-bf16-onnx-ryzen-strix • Text Generation • Updated • 821 • 1
amd/Phi-3-mini-4k-instruct-awq-g128-int4-asym-bf16-onnx-ryzen-strix • Text Generation • Updated • 52 • 1
uday610/Llama2-7b-chat-awq-g128-int4-asym-fp32-onnx-ryzen-strix-hybrid • Text Generation • Updated
amd/Phi-3-mini-4k-instruct-awq-g128-int4-asym-fp16-onnx-hybrid • Text Generation • Updated • 22
amd/Phi-3.5-mini-instruct-awq-g128-int4-asym-fp16-onnx-hybrid • Text Generation • Updated • 14
amd/Llama-2-7b-hf-awq-g128-int4-asym-fp16-onnx-hybrid • Text Generation • Updated • 10
amd/Llama-2-7b-chat-hf-awq-g128-int4-asym-fp16-onnx-hybrid • Text Generation • Updated • 25
amd/Llama-3-8B-awq-g128-int4-asym-fp16-onnx-hybrid • Text Generation • Updated • 23