J22
AI & ML interests
None yet
Recent Activity
- 2 days ago · perplexity-ai/r1-1776: 🚩 Report: Ethical issue(s)
- 5 days ago · watt-ai/watt-tool-70B: Vllm
- 16 days ago · tencent/Hunyuan-7B-Instruct: is rope_theta and max_pos_emb correct?
Organizations
None yet
J22's activity
- 🚩 Report: Ethical issue(s) (6) · #176 opened 2 days ago by lzh7522
- Vllm (1) · #2 opened 26 days ago by TitanomTechnologies
- is rope_theta and max_pos_emb correct? · #4 opened 16 days ago by J22
- is `config.json` correct? · #4 opened about 1 month ago by J22
- Quick start with chatllm.cpp · #4 opened about 1 month ago by J22
- Upload tokenizer.json (1) · #1 opened 4 months ago by J22
- a horrible function in `modeling_mobilellm.py` (1) · #5 opened 4 months ago by J22
- Run this on CPU · #6 opened 5 months ago by J22
- Run on CPU (1) · #13 opened 6 months ago by J22
- need gguf (19) · #4 opened 6 months ago by windkkk
- Best practice for tool calling with meta-llama/Meta-Llama-3.1-8B-Instruct (1) · #33 opened 7 months ago by zzclynn
- Run this on CPU and use tool calling (1) · #38 opened 7 months ago by J22
- My alternative quantizations. (5) · #5 opened 8 months ago by ZeroWw
- Tool calling is supported by ChatLLM.cpp · #36 opened 8 months ago by J22
- can't say hello (1) · #9 opened 9 months ago by J22
- no system message? (8) · #14 opened 9 months ago by mclassHF2023
- "small" is so different from "mini" and "medium" (1) · #8 opened 9 months ago by J22
- how to set context in multi-turn QA? (6) · #14 opened 10 months ago by J22
- clarification on the usage of `short_factor` and `long_factor`? (1) · #49 opened 10 months ago by J22
- Continue the discussion: `long_factor` and `short_factor` (2) · #32 opened 10 months ago by J22