default prompt vs business-related prompts
1
#31 opened 3 days ago by andrewdeeplearning
update config on right class
#30 opened 24 days ago by michaelfeil
Question about MTEB benchmark settings: 'max_seq_length' 😭
1
#25 opened 2 months ago by george31
Different tokenizer silently being loaded based on `trust_remote_code`
#24 opened 3 months ago by DarkLight1337
Is it possible to get the sparse embedding?
3
#23 opened 3 months ago by weiminw
How to change the embedding dimension?
1
#19 opened 4 months ago by storm2008
The MTEB metrics computed with eval_mteb.py differ greatly from those shown on the Leaderboard; why?
2
#16 opened 5 months ago by YangGuang30
Customized Further Fine-Tuning by Users
#15 opened 5 months ago by fwj
Model keeps cache of generation in Transformers (fixed using torch.no_grad())
1
#14 opened 5 months ago by Pietroferr
gte-Qwen2-1.5B-instruct outputs NaN during half-precision inference
2
#13 opened 5 months ago by Erin
Qwen 2.5 1.5B retrain?
4
#12 opened 5 months ago by tomaarsen
MTEB evaluation speed issue
2
#10 opened 6 months ago by xiaopli11
Support for xFormers and FlashAttention
1
#9 opened 7 months ago by le723z
ONNX.data
#8 opened 7 months ago by Saugatkafley
Fine-tuning
#5 opened 7 months ago by deleted
Sequence classification
1
#3 opened 8 months ago by prudant
MTEB score for French
3
#2 opened 8 months ago by abhamadi
"Bidirectional attention"
2
#1 opened 8 months ago by olivierdehaene