Full name: TobDeBer
AI & ML interests: Diffusion, Causality, LLM, LMM (Large Music Model), Quantization, AI Context Databases
Recent Activity
published a Space 3 days ago: TobDeBer/gguf-local-server
updated a Space 3 days ago: TobDeBer/gguf-local-server
new activity 9 days ago in unsloth/DeepSeek-R1-GGUF: Accuracy of the dynamic quants compared to usual quants?
Organizations: None yet
TobDeBer's activity
Accuracy of the dynamic quants compared to usual quants? (19 replies), #21 opened 16 days ago by inputout
Saving to q5_k_m GGUF (4 replies), #1 opened 11 days ago by sasha1234567
8bits quantization (5 replies), #20 opened 17 days ago by ramkumarkoppu
Is there a model removing non-shared MoE experts? (4 replies), #17 opened 23 days ago by ghostplant
Over 2 tok/sec agg backed by NVMe SSD on 96GB RAM + 24GB VRAM AM5 rig with llama.cpp (9 replies), #13 opened 25 days ago by ubergarm
Quantizer Tool (2 replies), #14 opened 24 days ago by TobDeBer
Where did the BF16 come from? (8 replies), #10 opened 25 days ago by gshpychka
vram+ram (4 replies), #7 opened 4 months ago by sdyy
11b instruct gguf? (3 replies), #1 opened 5 months ago by celsowm
Why is it called 1.0b? #4 opened 5 months ago by TobDeBer
Apply for community grant: Personal project (gpu), #1 opened 5 months ago by TobDeBer
torch and llama.cpp integration (3 replies), #1 opened 5 months ago by TobDeBer
Fine control for Turbo and Lightning models (1 reply), #1 opened 6 months ago by TobDeBer
Please provide ggml variants for local execution (1 reply), #3 opened 7 months ago by TobDeBer
Please host on zero, #2 opened 7 months ago by TobDeBer
RuntimeError: cutlassF: no kernel found to launch! (12 replies), #11 opened about 1 year ago by mayonaisu
Better example code? (9 replies), #13 opened about 1 year ago by Softology
Lower RAM requirements, #10 opened about 1 year ago by TobDeBer
add negative prompt, #3 opened about 1 year ago by TobDeBer