id
stringlengths
9
104
author
stringlengths
3
36
task_category
stringclasses
32 values
tags
listlengths
1
4.05k
created_time
timestamp[ns, tz=UTC]date
2022-03-02 23:29:04
2025-03-18 02:34:30
last_modified
stringdate
2021-02-13 00:06:56
2025-03-18 09:30:19
downloads
int64
0
15.6M
likes
int64
0
4.86k
README
stringlengths
44
1.01M
matched_bigbio_names
listlengths
1
8
aimarsg/prueba
aimarsg
token-classification
[ "transformers", "pytorch", "tensorboard", "roberta", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-03-25T16:59:45Z
2023-03-25T17:46:21+00:00
14
0
--- license: apache-2.0 metrics: - precision - recall - f1 - accuracy tags: - generated_from_trainer model-index: - name: prueba results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # prueba This model is a fine-tuned version of [PlanTL-GOB-ES/bsc-bio-ehr-es-pharmaconer](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es-pharmaconer) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1440 - Precision: 0.6923 - Recall: 0.6096 - F1: 0.6483 - Accuracy: 0.9719 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2.5e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 32 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 29 | 0.3513 | 0.0 | 0.0 | 0.0 | 0.9259 | | No log | 2.0 | 58 | 0.2696 | 0.0 | 0.0 | 0.0 | 0.9259 | | No log | 3.0 | 87 | 0.2879 | 0.0 | 0.0 | 0.0 | 0.9259 | | No log | 4.0 | 116 | 0.2318 | 0.0714 | 0.0080 | 0.0143 | 0.9361 | | No log | 5.0 | 145 | 0.2055 | 0.2222 | 0.0558 | 0.0892 | 0.9376 | | No log | 6.0 | 174 | 0.2076 | 0.3793 | 0.0876 | 0.1424 | 0.9464 | | No log | 7.0 | 203 | 0.1630 | 0.4831 | 0.2271 | 0.3089 | 0.9525 | | No log | 8.0 | 232 | 0.1529 | 0.5515 | 0.3625 | 0.4375 | 0.9573 | | No log | 9.0 | 261 | 0.1519 | 0.5972 | 0.3426 | 0.4354 | 0.9603 | | No log | 10.0 | 290 | 0.1399 | 0.6272 | 0.4223 | 0.5048 | 0.9639 | | No log | 11.0 | 319 | 0.1412 | 0.6096 | 0.4542 | 0.5205 | 0.9641 | | No log | 12.0 | 348 | 0.1320 | 0.5969 | 0.4661 | 0.5235 | 0.9646 | | No log | 13.0 | 377 | 0.1311 | 0.6515 | 0.5139 | 0.5746 | 0.9671 | | No log | 14.0 | 406 | 0.1300 | 0.6329 | 0.5219 | 0.5721 | 0.9656 | | No log | 15.0 | 435 | 0.1346 | 0.6345 | 0.4980 | 0.5580 | 0.9672 | | No log | 16.0 | 464 | 0.1361 | 0.6329 | 0.5219 | 0.5721 | 0.9669 | | No log | 17.0 | 493 | 0.1312 | 0.6532 | 0.5777 | 0.6131 | 0.9689 | | 0.1181 | 18.0 | 522 | 0.1327 | 0.6756 | 0.6056 | 0.6387 | 0.9694 | | 0.1181 | 19.0 | 551 | 0.1495 | 0.7234 | 0.5418 | 0.6196 | 0.9704 | | 0.1181 | 20.0 | 580 | 0.1328 | 0.6872 | 0.5777 | 0.6277 | 0.9707 | | 0.1181 | 21.0 | 609 | 0.1363 | 0.6667 | 0.6215 | 0.6433 | 0.9710 | | 0.1181 | 22.0 | 638 | 0.1392 | 0.6884 | 0.5896 | 0.6352 | 0.9712 | | 0.1181 | 23.0 | 667 | 0.1377 | 0.6437 | 0.6335 | 0.6386 | 0.9704 | | 0.1181 | 24.0 | 696 | 0.1434 | 0.6504 | 0.5857 | 0.6164 | 0.9697 | | 0.1181 | 25.0 | 725 | 0.1418 | 0.6944 | 0.5976 | 0.6424 | 0.9710 | | 0.1181 | 26.0 | 754 | 0.1426 | 0.6739 | 0.6175 | 0.6445 | 0.9715 | | 0.1181 | 27.0 | 783 | 0.1447 | 0.7085 | 0.6295 | 0.6667 | 0.9734 | | 0.1181 | 28.0 | 812 | 0.1432 | 0.6903 | 0.6215 | 0.6541 | 0.9727 | | 0.1181 | 29.0 | 841 | 0.1421 | 0.7162 | 0.6335 | 0.6723 | 0.9729 | | 0.1181 | 30.0 | 870 | 0.1431 | 0.6875 | 0.6135 | 0.6484 | 0.9720 | | 0.1181 | 31.0 | 899 | 0.1431 | 0.6844 | 0.6135 | 0.6471 | 0.9717 | | 0.1181 | 32.0 | 928 | 0.1440 | 0.6923 | 0.6096 | 0.6483 | 0.9719 | ### Framework versions - Transformers 4.27.3 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
[ "PHARMACONER" ]
daijin219/MLMA_lab9_task2
daijin219
token-classification
[ "transformers", "pytorch", "tensorboard", "gpt2", "token-classification", "generated_from_trainer", "dataset:ncbi_disease", "license:mit", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2023-04-15T12:19:47Z
2023-04-15T14:33:32+00:00
14
0
--- datasets: - ncbi_disease license: mit metrics: - precision - recall - f1 - accuracy tags: - generated_from_trainer model-index: - name: MLMA_lab9_task2 results: - task: type: token-classification name: Token Classification dataset: name: ncbi_disease type: ncbi_disease config: ncbi_disease split: validation args: ncbi_disease metrics: - type: precision value: 0.015873015873015872 name: Precision - type: recall value: 0.14866581956797967 name: Recall - type: f1 value: 0.028683500858053445 name: F1 - type: accuracy value: 0.6365342039100904 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # MLMA_lab9_task2 This model is a fine-tuned version of [microsoft/biogpt](https://huggingface.co/microsoft/biogpt) on the ncbi_disease dataset. It achieves the following results on the evaluation set: - Loss: 1.2509 - Precision: 0.0159 - Recall: 0.1487 - F1: 0.0287 - Accuracy: 0.6365 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 1.153 | 1.0 | 680 | 1.0671 | 0.0122 | 0.1258 | 0.0223 | 0.5452 | | 1.02 | 2.0 | 1360 | 1.0418 | 0.0098 | 0.0203 | 0.0132 | 0.6791 | | 0.9552 | 3.0 | 2040 | 1.0269 | 0.0135 | 0.1677 | 0.0250 | 0.5282 | | 0.926 | 4.0 | 2720 | 1.0390 | 0.0143 | 0.0940 | 0.0248 | 0.6686 | | 0.9156 | 5.0 | 3400 | 1.0200 | 0.0135 | 0.2046 | 0.0253 | 0.4679 | | 0.8791 | 6.0 | 4080 | 1.0543 | 0.0131 | 0.2745 | 0.0250 | 0.3149 | | 0.8672 | 7.0 | 4760 | 1.0545 | 0.0141 | 0.2732 | 0.0267 | 0.3471 | | 0.8627 | 8.0 | 5440 | 1.0734 | 0.0145 | 0.0826 | 0.0246 | 0.7220 | | 0.8375 | 9.0 | 6120 | 1.1068 | 0.0156 | 0.1410 | 0.0281 | 0.6451 | | 0.8235 | 10.0 | 6800 | 1.0796 | 0.0158 | 0.1537 | 0.0286 | 0.6210 | | 0.8157 | 11.0 | 7480 | 1.1476 | 0.0143 | 0.1690 | 0.0263 | 0.5737 | | 0.7957 | 12.0 | 8160 | 1.1369 | 0.0143 | 0.1525 | 0.0262 | 0.6155 | | 0.7937 | 13.0 | 8840 | 1.2014 | 0.0151 | 0.1741 | 0.0278 | 0.5808 | | 0.7765 | 14.0 | 9520 | 1.2249 | 0.0160 | 0.1449 | 0.0289 | 0.6443 | | 0.7661 | 15.0 | 10200 | 1.2509 | 0.0159 | 0.1487 | 0.0287 | 0.6365 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.11.0 - Tokenizers 0.13.3
[ "NCBI DISEASE" ]
IIC/BETO_Galen-cantemist
IIC
text-classification
[ "transformers", "pytorch", "safetensors", "bert", "text-classification", "biomedical", "clinical", "eHR", "spanish", "BETO_Galen", "es", "dataset:PlanTL-GOB-ES/cantemist-ner", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-06-19T15:31:09Z
2024-11-25T10:40:57+00:00
14
0
--- datasets: - PlanTL-GOB-ES/cantemist-ner language: es license: mit metrics: - f1 tags: - biomedical - clinical - eHR - spanish - BETO_Galen widget: - text: El diagnóstico definitivo de nuestro paciente fue de un Adenocarcinoma de pulmón cT2a cN3 cM1a Estadio IV (por una única lesión pulmonar contralateral) PD-L1 90%, EGFR negativo, ALK negativo y ROS-1 negativo. - text: Durante el ingreso se realiza una TC, observándose un nódulo pulmonar en el LII y una masa renal derecha indeterminada. Se realiza punción biopsia del nódulo pulmonar, con hallazgos altamente sospechosos de carcinoma. - text: Trombosis paraneoplásica con sospecha de hepatocarcinoma por imagen, sobre hígado cirrótico, en paciente con índice Child-Pugh B. model-index: - name: IIC/BETO_Galen-cantemist results: - task: type: token-classification dataset: name: cantemist-ner type: PlanTL-GOB-ES/cantemist-ner metrics: - type: f1 value: 0.802 name: f1 --- # BETO_Galen-cantemist This model is a finetuned version of BETO_Galen for the cantemist dataset used in a benchmark in the paper `A comparative analysis of Spanish Clinical encoder-based models on NER and classification tasks`. The model has a F1 of 0.802 Please refer to the [original publication](https://doi.org/10.1093/jamia/ocae054) for more information. ## Parameters used | parameter | Value | |-------------------------|:-----:| | batch size | 16 | | learning rate | 3e05 | | classifier dropout | 0.1 | | warmup ratio | 0 | | warmup steps | 0 | | weight decay | 0 | | optimizer | AdamW | | epochs | 10 | | early stopping patience | 3 | ## BibTeX entry and citation info ```bibtext @article{10.1093/jamia/ocae054, author = {García Subies, Guillem and Barbero Jiménez, Álvaro and Martínez Fernández, Paloma}, title = {A comparative analysis of Spanish Clinical encoder-based models on NER and classification tasks}, journal = {Journal of the American Medical Informatics Association}, volume = {31}, number = {9}, pages = {2137-2146}, year = {2024}, month = {03}, issn = {1527-974X}, doi = {10.1093/jamia/ocae054}, url = {https://doi.org/10.1093/jamia/ocae054}, } ```
[ "CANTEMIST" ]
IIC/bsc-bio-ehr-es-pharmaconer
IIC
token-classification
[ "transformers", "pytorch", "safetensors", "roberta", "text-classification", "biomedical", "clinical", "spanish", "bsc-bio-ehr-es", "token-classification", "es", "dataset:PlanTL-GOB-ES/pharmaconer", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-06-21T16:11:44Z
2024-11-25T10:41:23+00:00
14
0
--- datasets: - PlanTL-GOB-ES/pharmaconer language: es license: apache-2.0 metrics: - f1 pipeline_tag: token-classification tags: - biomedical - clinical - spanish - bsc-bio-ehr-es widget: - text: Se realizó estudio analítico destacando incremento de niveles de PTH y vitamina D (103,7 pg/ml y 272 ng/ml, respectivamente), atribuidos al exceso de suplementación de vitamina D. - text: ' Por el hallazgo de múltiples fracturas por estrés, se procedió a estudio en nuestras consultas, realizándose análisis con función renal, calcio sérico y urinario, calcio iónico, magnesio y PTH, que fueron normales.' - text: Se solicitó una analítica que incluía hemograma, bioquímica, anticuerpos antinucleares (ANA) y serologías, examen de orina, así como biopsia de la lesión. Los resultados fueron normales, con ANA, anti-Sm, anti-RNP, anti-SSA, anti-SSB, anti-Jo1 y anti-Scl70 negativos. model-index: - name: IIC/bsc-bio-ehr-es-pharmaconer results: - task: type: token-classification dataset: name: pharmaconer type: PlanTL-GOB-ES/pharmaconer split: test metrics: - type: f1 value: 0.904 name: f1 --- # bsc-bio-ehr-es-pharmaconer This model is a finetuned version of bsc-bio-ehr-es for the pharmaconer dataset used in a benchmark in the paper `A comparative analysis of Spanish Clinical encoder-based models on NER and classification tasks`. The model has a F1 of 0.904 Please refer to the [original publication](https://doi.org/10.1093/jamia/ocae054) for more information. ## Parameters used | parameter | Value | |-------------------------|:-----:| | batch size | 16 | | learning rate | 4e-05 | | classifier dropout | 0.1 | | warmup ratio | 0 | | warmup steps | 0 | | weight decay | 0 | | optimizer | AdamW | | epochs | 10 | | early stopping patience | 3 | ## BibTeX entry and citation info ```bibtext @article{10.1093/jamia/ocae054, author = {García Subies, Guillem and Barbero Jiménez, Álvaro and Martínez Fernández, Paloma}, title = {A comparative analysis of Spanish Clinical encoder-based models on NER and classification tasks}, journal = {Journal of the American Medical Informatics Association}, volume = {31}, number = {9}, pages = {2137-2146}, year = {2024}, month = {03}, issn = {1527-974X}, doi = {10.1093/jamia/ocae054}, url = {https://doi.org/10.1093/jamia/ocae054}, } ```
[ "PHARMACONER" ]
Jumtra/calm-7b-tune-ep4
Jumtra
text-generation
[ "transformers", "pytorch", "gpt_neox", "text-generation", "ja", "lm", "nlp", "dataset:kunishou/databricks-dolly-15k-ja", "dataset:kunishou/hh-rlhf-49k-ja", "dataset:kunishou/cnn-dailymail-27k-ja", "dataset:Jumtra/oasst1_ja", "dataset:Jumtra/jglue_jnli", "dataset:Jumtra/jglue_jsquad", "dataset:Jumtra/jglue_jsquads_with_input", "license:cc-by-sa-4.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
2023-06-25T09:01:35Z
2023-07-03T07:09:11+00:00
14
1
--- datasets: - kunishou/databricks-dolly-15k-ja - kunishou/hh-rlhf-49k-ja - kunishou/cnn-dailymail-27k-ja - Jumtra/oasst1_ja - Jumtra/jglue_jnli - Jumtra/jglue_jsquad - Jumtra/jglue_jsquads_with_input language: - ja license: cc-by-sa-4.0 tags: - ja - gpt_neox - text-generation - lm - nlp inference: false --- # open-calm-7b このモデルは、MosaicMLのllm-foundryリポジトリを使用して[cyberagent/open-calm-7b](https://huggingface.co/cyberagent/open-calm-7b)をファインチューニングしたモデルです。 ## Model Date June 28, 2023 ## Model License cc-by-sa-4.0 ## 評価 [Jumtra/test_data_100QA](https://huggingface.co/datasets/Jumtra/test_data_100QA)を用いてモデルの正答率を評価した また、学習時のvalidateデータに対してのPerplexityを記載した。 | model name | 正答率 | Perplexity | | ---- | ---- | ---- | | [Jumtra/rinna-3.6b-tune-ep5](https://huggingface.co/Jumtra/rinna-3.6b-tune-ep5)| 40/100 | 8.105 | | [Jumtra/rinna-v1-tune-ep1](https://huggingface.co/Jumtra/rinna-v1-tune-ep1) | 42/100 | 7.458 | | [Jumtra/rinna-v1-tune-ep3](https://huggingface.co/Jumtra/rinna-v1-tune-ep3) | 41/100 | 7.034 | | [Jumtra/calm-7b-tune-ep4](https://huggingface.co/Jumtra/calm-7b-tune-ep4) | 40/100 | 9.766 | | [Jumtra/calm-v3-ep1](https://huggingface.co/Jumtra/calm-v3-ep1) | 35/100 | 9.305 | | [Jumtra/calm-v3-ep3](https://huggingface.co/Jumtra/calm-v3-ep3) | 37/100 | 13.276 | 以下のプロンプトを用いた ```python INSTRUCTION_KEY = "### 入力:" RESPONSE_KEY = "### 回答:" INTRO_BLURB = "以下はタスクを説明する指示と文脈のある文章が含まれた入力です。要求を適切に満たす回答を生成しなさい。" JP_PROMPT_FOR_GENERATION_FORMAT = """{intro} {instruction_key} {instruction} {response_key} """.format( intro=INTRO_BLURB, instruction_key=INSTRUCTION_KEY, instruction="{instruction}", response_key=RESPONSE_KEY, ) ```
[ "BLURB" ]
Panchovix/WizardLM-Uncensored-SuperCOT-StoryTelling-30b-SuperHOT-8k-4bit-32g
Panchovix
text-generation
[ "transformers", "pytorch", "llama", "text-generation", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2023-06-26T22:50:20Z
2023-07-06T18:09:47+00:00
14
1
--- license: other --- [WizardLM-Uncensored-SuperCOT-StoryTelling-30b](https://huggingface.co/Monero/WizardLM-Uncensored-SuperCOT-StoryTelling-30b) merged with kaiokendev's [33b SuperHOT 8k LoRA](https://huggingface.co/kaiokendev/superhot-30b-8k-no-rlhf-test), quantized at 4 bit. It was created with GPTQ-for-LLaMA with group size 32 and act order true as parameters, to get the maximum perplexity vs FP16 model. I HIGHLY suggest to use exllama, to evade some VRAM issues. Use compress_pos_emb = 4 for any context up to 8192 context. If you have 2x24 GB VRAM GPUs cards, to not get Out of Memory errors at 8192 context, use: gpu_split: 9,21
[ "MONERO" ]
Leogrin/eleuther-pythia1b-hh-dpo
Leogrin
text-generation
[ "transformers", "pytorch", "gpt_neox", "text-generation", "causal-lm", "pythia", "en", "dataset:Anthropic/hh-rlhf", "arxiv:2305.18290", "arxiv:2101.00027", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2023-07-27T14:35:26Z
2023-07-27T18:21:11+00:00
14
1
--- datasets: - Anthropic/hh-rlhf language: - en license: apache-2.0 tags: - pytorch - causal-lm - pythia --- # Infos Pythia-1b supervised finetuned with Anthropic-hh-rlhf dataset for 1 epoch (sft-model), before DPO [(paper)](https://arxiv.org/abs/2305.18290) with same dataset for 1 epoch. [wandb log](https://wandb.ai/pythia_dpo/Pythia_DPO_new/runs/jk09pzqb) See [Pythia-1b](https://huggingface.co/EleutherAI/pythia-1b) for model details [(paper)](https://arxiv.org/abs/2101.00027). # Benchmark raw results: Results for the base model are taken from the [Pythia paper](https://arxiv.org/abs/2101.00027). ## Zero shot | Task | 1B_base | 1B_sft | 1B_dpo | |------------------|----------------|----------------|-----------------| | Lambada (OpenAI) | 0.562 ± 0.007 | 0.563 ± 0.007 | 0.5575 ± 0.0069 | | PIQA | 0.707 ± 0.011 | 0.711 ± 0.011 | 0.7122 ± 0.0106 | | WinoGrande | 0.537 ± 0.014 | 0.534 ± 0.014 | 0.5525 ± 0.0140 | | WSC | 0.365 ± 0.047 | 0.365 ± 0.047 | 0.3654 ± 0.0474 | | ARC - Easy | 0.569 ± 0.010 | 0.583 ± 0.010 | 0.5901 ± 0.0101 | | ARC - Challenge | 0.244 ± 0.013 | 0.248 ± 0.013 | 0.2611 ± 0.0128 | | SciQ | 0.840 ± 0.012 | 0.847 ± 0.011 | 0.8530 ± 0.0112 | | LogiQA | 0.223 ± 0.016 | N/A | N/A | ## Five shot | Task | 1B_base | 1B_sft | 1B_dpo | |------------------|----------------|----------------|-----------------| | Lambada (OpenAI) | 0.507 ± 0.007 | 0.4722 ± 0.007 | 0.4669 ± 0.0070 | | PIQA | 0.705 ± 0.011 | 0.7165 ± 0.0105| 0.7138 ± 0.0105 | | WinoGrande | 0.532 ± 0.014 | 0.5343 ± 0.014 | 0.5525 ± 0.0140 | | WSC | 0.365 ± 0.047 | 0.5000 ± 0.0493| 0.5577 ± 0.0489 | | ARC - Easy | 0.594 ± 0.010 | 0.6010 ± 0.010 | 0.6170 ± 0.0100 | | ARC - Challenge | 0.259 ± 0.013 | 0.2679 ± 0.0129| 0.2833 ± 0.0132 | | SciQ | 0.920 ± 0.009 | 0.9100 ± 0.0091| 0.9020 ± 0.0094 | | LogiQA | 0.227 ± 0.016 | N/A | N/A |
[ "SCIQ" ]
usvsnsp/pythia-2.8b-sft
usvsnsp
text-generation
[ "transformers", "pytorch", "gpt_neox", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2023-08-08T17:30:13Z
2023-08-23T16:17:44+00:00
14
0
--- {} --- wandb run: https://wandb.ai/eleutherai/pythia-rlhf/runs/0c0pmvz8 | Task |Version|Filter| Metric |Value | |Stderr| |-------------|-------|------|--------|-----:|---|-----:| |arc_challenge|Yaml |none |acc |0.2961|± |0.0133| | | |none |acc_norm|0.3285|± |0.0137| |arc_easy |Yaml |none |acc |0.6452|± |0.0098| | | |none |acc_norm|0.5678|± |0.0102| |logiqa |Yaml |none |acc |0.2151|± |0.0161| | | |none |acc_norm|0.2857|± |0.0177| |piqa |Yaml |none |acc |0.7508|± |0.0101| | | |none |acc_norm|0.7503|± |0.0101| |sciq |Yaml |none |acc |0.8820|± |0.0102| | | |none |acc_norm|0.8140|± |0.0123| |winogrande |Yaml |none |acc |0.6038|± |0.0137|
[ "SCIQ" ]
VuongQuoc/longformer_sciq
VuongQuoc
multiple-choice
[ "transformers", "pytorch", "longformer", "multiple-choice", "generated_from_trainer", "endpoints_compatible", "region:us" ]
2023-08-29T02:34:13Z
2023-09-02T11:06:18+00:00
14
0
--- tags: - generated_from_trainer model-index: - name: longformer_sciq results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # longformer_sciq This model is a fine-tuned version of [VuongQuoc/longformer_sciq](https://huggingface.co/VuongQuoc/longformer_sciq) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5326 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 2 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.2 - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.1931 | 0.2 | 20 | 0.7457 | | 0.7677 | 0.4 | 40 | 0.7063 | | 1.0391 | 0.6 | 60 | 0.6745 | | 1.2915 | 0.8 | 80 | 0.6316 | | 1.1399 | 1.0 | 100 | 0.6652 | | 0.9975 | 1.2 | 120 | 0.6134 | | 0.9232 | 1.4 | 140 | 0.5561 | | 0.8026 | 1.6 | 160 | 0.5422 | | 0.7188 | 1.8 | 180 | 0.5370 | | 0.7272 | 2.0 | 200 | 0.5326 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.0+cpu - Datasets 2.1.0 - Tokenizers 0.13.3
[ "SCIQ" ]
mmenendezg/xlm-roberta-base-pharmaconer
mmenendezg
token-classification
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:pharmaconer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-08-31T23:49:56Z
2023-09-04T17:32:10+00:00
14
0
--- base_model: xlm-roberta-base datasets: - pharmaconer license: mit tags: - generated_from_trainer model-index: - name: xlm-roberta-base-pharmaconer results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-pharmaconer This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the pharmaconer dataset. It achieves the following results on the evaluation set: - Loss: 0.0243 - F1 Score: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0458 | 1.0 | 1017 | 0.0344 | 1.0 | | 0.0225 | 2.0 | 2034 | 0.0254 | 1.0 | | 0.0114 | 3.0 | 3051 | 0.0203 | 1.0 | | 0.0065 | 4.0 | 4068 | 0.0216 | 1.0 | | 0.0035 | 5.0 | 5085 | 0.0243 | 1.0 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.0.1 - Datasets 2.14.4 - Tokenizers 0.13.3
[ "PHARMACONER" ]
medspaner/xlm-roberta-large-spanish-trials-cases-temp-ent
medspaner
token-classification
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-10-02T14:59:41Z
2024-10-01T06:33:48+00:00
14
0
--- license: cc-by-nc-4.0 metrics: - precision - recall - f1 - accuracy tags: - generated_from_trainer widget: - text: Edad ≥ 18 años (en todos los centros), o edad ≥12 y <18 años con peso igual o superior a 40kg - text: Estudio realizado en un hospital desde julio de 2010 hasta diciembre de 2011 (18 meses) - text: Pacientes que hayan recibido bifosfonatos diarios, semanales o mensuales durante al menos 3 años. - text: 50 g (40 g la noche anterior y 10 g por la mañana) de L-glutamina model-index: - name: xlm-roberta-large-spanish-trials-cases-temp-ents results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-large-spanish-trials-cases-medic-attr This named entity recognition model detects temporal expressions (TIMEX) according to the [TimeML scheme](https://en.wikipedia.org/wiki/ISO-TimeML) ([Pustejovsky et al. 2005](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.85.5610&rep=rep1&type=pdf)), in addition to Age entities: - Age: e.g. *18 años* - Date: e.g. *2022*, *26 de noviembre* - Duration: e.g. *3 horas* - Frequency: e.g. *semanal* - Time: e.g. *noche* The model achieves the following results on the test set (results are averaged over 5 evaluation rounds): - Precision: 0.906 (±0.006) - Recall: 0.901 (±0.006) - F1: 0.904 (±0.004) - Accuracy: 0.996 (±0.001) ## Model description This model adapts the pre-trained model [xlm-roberta-large-spanish-clinical](https://huggingface.co/llange/xlm-roberta-large-spanish-clinical), presented in [Lange et al. (2022)](https://academic.oup.com/bioinformatics/article/38/12/3267/6575884). It is fine-tuned to conduct medical named entity recognition on texts about in Spanish. The model is fine-tuned on the [CT-EBM-ES corpus (Campillos-Llanos et al. 2021)](https://bmcmedinformdecismak.biomedcentral.com/articles/10.1186/s12911-021-01395-z) and 100 clinical cases with Creative Commons License. If you use this model, please, cite as follows: ``` @article{campillosetal2024,         title = {{Hybrid tool for semantic annotation and concept extraction of medical texts in Spanish}},         author = {Campillos-Llanos, Leonardo and Valverde-Mateos, Ana and Capllonch-Carri{\'o}n, Adri{\'a}n},         journal = {BMC Bioinformatics}, year={2024}, publisher={BioMed Central} } ``` ## Intended uses & limitations **Disclosure**: *This model is under development and needs to be improved. It should not be used for medical decision making without human assistance and supervision* This model is intended for a generalist purpose, and may have bias and/or any other undesirable distortions. Third parties who deploy or provide systems and/or services using any of these models (or using systems based on these models) should note that it is their responsibility to mitigate the risks arising from their use. Third parties, in any event, need to comply with applicable regulations, including regulations concerning the use of artificial intelligence. The owner or creator of the models will in no event be liable for any results arising from the use made by third parties of these models. **Descargo de responsabilidad**: *Esta herramienta se encuentra en desarrollo y no debe ser empleada para la toma de decisiones médicas* La finalidad de este modelo es generalista, y se advierte que puede tener sesgos y/u otro tipo de distorsiones indeseables. Terceras partes que desplieguen o proporcionen sistemas y/o servicios usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) han tener presente que es su responsabilidad abordar y minimizar los riesgos derivados de su uso. Las terceras partes, en cualquier circunstancia, deben cumplir con la normativa aplicable, incluyendo la normativa que concierne al uso de la inteligencia artificial. El propietario o creador de los modelos de ningún modo será responsable de los resultados derivados del uso que las terceras partes hagan de estos modelos. ## Training and evaluation data The data used for fine-tuning are the [Clinical Trials for Evidence-Based-Medicine in Spanish corpus](http://www.lllf.uam.es/ESP/nlpdata/wp2/). It is a collection of 1200 texts about clinical trials studies and clinical trials announcements: - 500 abstracts from journals published under a Creative Commons license, e.g. available in PubMed or the Scientific Electronic Library Online (SciELO) - 700 clinical trials announcements published in the European Clinical Trials Register and Repositorio Español de Estudios Clínicos If you use the CT-EBM-ES resource, please, cite as follows: ``` @article{campillosetal-midm2021,         title = {A clinical trials corpus annotated with UMLS© entities to enhance the access to Evidence-Based Medicine},         author = {Campillos-Llanos, Leonardo and Valverde-Mateos, Ana and Capllonch-Carri{\'o}n, Adri{\'a}n and Moreno-Sandoval, Antonio},         journal = {BMC Medical Informatics and Decision Making},         volume={21}, number={1}, pages={1--19}, year={2021}, publisher={BioMed Central} } ``` To fine-tune the model, we also used 100 clinical cases with Creative Commons licence. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: we used different seeds for 5 evaluation rounds, and uploaded the model with the best results - optimizer: Adam - num_epochs: average 14.8 epochs (±2.39); trained with early stopping if no improvement after 5 epochs (early stopping patience: 5) ### Training results (test set; average and standard deviation of 5 rounds with different seeds) | Precision | Recall | F1 | Accuracy | |:--------------:|:--------------:|:--------------:|:--------------:| | 0.906 (±0.006) | 0.901 (±0.006) | 0.904 (±0.004) | 0.996 (±0.001) | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.2+cu113 - Datasets 1.18.4 - Tokenizers 0.11.6
[ "SCIELO" ]
medspaner/xlm-roberta-large-spanish-trials-cases-medic-attr
medspaner
token-classification
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-10-02T15:37:33Z
2024-10-01T06:34:07+00:00
14
0
--- license: cc-by-nc-4.0 metrics: - precision - recall - f1 - accuracy tags: - generated_from_trainer widget: - text: Azitromicina en suspensión oral, 10 mg/kg una vez al día durante siete días - text: A un grupo se le administró Ciprofloxacino 200 mg bid EV y al otro Cefazolina 1 g tid IV model-index: - name: xlm-roberta-large-spanish-trials-cases-medic-attr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-large-spanish-trials-cases-medic-attr This named entity recognition model detects medication-related information: - Contraindication: e.g. *contraindicación a **aspirina*** - Dose, strength or concentration: e.g. *14 mg*, *100.000 UI* - Form: e.g. *tabletas*, *comprimidos* - Route: e.g. *vía oral*, *i.v.* The model achieves the following results on the test set (results are averaged over 5 evaluation rounds): - Precision: 0.895 (±0.015) - Recall: 0.879 (±0.011) - F1: 0.887 (±0.012) - Accuracy: 0.997 (±0.001) ## Model description This model adapts the pre-trained model [xlm-roberta-large-spanish-clinical](https://huggingface.co/llange/xlm-roberta-large-spanish-clinical), presented in [Lange et al. (2022)](https://academic.oup.com/bioinformatics/article/38/12/3267/6575884). It is fine-tuned to conduct medical named entity recognition on texts about in Spanish. The model is fine-tuned on the [CT-EBM-ES corpus (Campillos-Llanos et al. 2021)](https://bmcmedinformdecismak.biomedcentral.com/articles/10.1186/s12911-021-01395-z) and 100 clinical cases with Creative Commons License. If you use this model, please, cite as follows: ``` @article{campillosetal2024,         title = {{Hybrid tool for semantic annotation and concept extraction of medical texts in Spanish}},         author = {Campillos-Llanos, Leonardo and Valverde-Mateos, Ana and Capllonch-Carri{\'o}n, Adri{\'a}n},         journal = {BMC Bioinformatics}, year={2024}, publisher={BioMed Central} } ``` ## Intended uses & limitations **Disclosure**: *This model is under development and needs to be improved. It should not be used for medical decision making without human assistance and supervision* This model is intended for a generalist purpose, and may have bias and/or any other undesirable distortions. Third parties who deploy or provide systems and/or services using any of these models (or using systems based on these models) should note that it is their responsibility to mitigate the risks arising from their use. Third parties, in any event, need to comply with applicable regulations, including regulations concerning the use of artificial intelligence. The owner or creator of the models will in no event be liable for any results arising from the use made by third parties of these models. **Descargo de responsabilidad**: *Esta herramienta se encuentra en desarrollo y no debe ser empleada para la toma de decisiones médicas* La finalidad de este modelo es generalista, y se advierte que puede tener sesgos y/u otro tipo de distorsiones indeseables. Terceras partes que desplieguen o proporcionen sistemas y/o servicios usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) han tener presente que es su responsabilidad abordar y minimizar los riesgos derivados de su uso. Las terceras partes, en cualquier circunstancia, deben cumplir con la normativa aplicable, incluyendo la normativa que concierne al uso de la inteligencia artificial. El propietario o creador de los modelos de ningún modo será responsable de los resultados derivados del uso que las terceras partes hagan de estos modelos. ## Training and evaluation data The data used for fine-tuning are the [Clinical Trials for Evidence-Based-Medicine in Spanish corpus](http://www.lllf.uam.es/ESP/nlpdata/wp2/). It is a collection of 1200 texts about clinical trials studies and clinical trials announcements: - 500 abstracts from journals published under a Creative Commons license, e.g. available in PubMed or the Scientific Electronic Library Online (SciELO) - 700 clinical trials announcements published in the European Clinical Trials Register and Repositorio Español de Estudios Clínicos If you use the CT-EBM-ES resource, please, cite as follows: ``` @article{campillosetal-midm2021,         title = {A clinical trials corpus annotated with UMLS© entities to enhance the access to Evidence-Based Medicine},         author = {Campillos-Llanos, Leonardo and Valverde-Mateos, Ana and Capllonch-Carri{\'o}n, Adri{\'a}n and Moreno-Sandoval, Antonio},         journal = {BMC Medical Informatics and Decision Making},         volume={21}, number={1}, pages={1--19}, year={2021}, publisher={BioMed Central} } ``` To fine-tune the model, we also used 100 clinical cases with Creative Commons licences and 265 text samples extracted from Summaries of Product Characteristics available at the [Spanish Drug Information Center (CIMA)](https://cima.aemps.es). ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: we used different seeds for 5 evaluation rounds, and uploaded the model with the best results - optimizer: Adam - num_epochs: average 14.8 epochs (±8.53); trained with early stopping if no improvement after 5 epochs (early stopping patience: 5) ### Training results (test set; average and standard deviation of 5 rounds with different seeds) | Precision | Recall | F1 | Accuracy | |:--------------:|:--------------:|:--------------:|:--------------:| | 0.895 (±0.015) | 0.879 (±0.011) | 0.887 (±0.012) | 0.997 (±0.001) | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.2+cu113 - Datasets 1.18.4 - Tokenizers 0.11.6
[ "SCIELO" ]
medspaner/roberta-es-clinical-trials-cases-medic-attr
medspaner
token-classification
[ "transformers", "pytorch", "roberta", "token-classification", "generated_from_trainer", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-10-03T09:08:20Z
2024-10-01T06:26:42+00:00
14
0
--- license: cc-by-nc-4.0 metrics: - precision - recall - f1 - accuracy tags: - generated_from_trainer widget: - text: Azitromicina en suspensión oral, 10 mg/kg una vez al día durante siete días - text: A un grupo se le administró Ciprofloxacino 200 mg bid EV y al otro Cefazolina 1 g tid IV - text: Administración de una solución de mantenimiento intravenosa isotónica (NaCl al 0,9% en dextrosa al 5%) - text: Se excluyen pacientes con contraindicación a aspirina o clopidogrel model-index: - name: roberta-es-clinical-trials-cases-medic-attr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-es-clinical-trials-cases-medic-attr This named entity recognition model detects medication-related information: - Contraindication: e.g. *contraindicación a **aspirina*** - Dose, strength or concentration: e.g. *14 mg*, *100.000 UI* - Form: e.g. *tabletas*, *comprimidos* - Route: e.g. *vía oral*, *i.v.* The model achieves the following results on the test set (when trained with the training and development set; results are averaged over 5 evaluation rounds): - Precision: 0.856 (±0.015) - Recall: 0.873 (±0.018) - F1: 0.864 (±0.007) - Accuracy: 0.996 (±0.001) ## Model description This model adapts the pre-trained model [bsc-bio-ehr-es](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es), presented in [Pio Carriño et al. (2022)](https://aclanthology.org/2022.bionlp-1.19/). It is fine-tuned to conduct temporal named entity recognition on Spanish texts about clinical trials and clinical cases. The model is fine-tuned on the [CT-EBM-ES corpus (Campillos-Llanos et al. 2021)](https://bmcmedinformdecismak.biomedcentral.com/articles/10.1186/s12911-021-01395-z), 265 text samples extracted from Summaries of Product Characteristics available at the [Spanish Drug Information Center (CIMA)](https://cima.aemps.es) and 100 clinical cases with a Creative Commons license. If you use this model, please, cite as follows: ``` @article{campillosetal2024,         title = {{Hybrid tool for semantic annotation and concept extraction of medical texts in Spanish}},         author = {Campillos-Llanos, Leonardo and Valverde-Mateos, Ana and Capllonch-Carri{\'o}n, Adri{\'a}n},         journal = {BMC Bioinformatics}, year={2024}, publisher={BioMed Central} } ``` ## Intended uses & limitations **Disclosure**: *This model is under development and needs to be improved. It should not be used for medical decision making without human assistance and supervision* This model is intended for a generalist purpose, and may have bias and/or any other undesirable distortions. Third parties who deploy or provide systems and/or services using any of these models (or using systems based on these models) should note that it is their responsibility to mitigate the risks arising from their use. Third parties, in any event, need to comply with applicable regulations, including regulations concerning the use of artificial intelligence. The owner or creator of the models will in no event be liable for any results arising from the use made by third parties of these models. **Descargo de responsabilidad**: *Esta herramienta se encuentra en desarrollo y no debe ser empleada para la toma de decisiones médicas* La finalidad de este modelo es generalista, y se advierte que puede tener sesgos y/u otro tipo de distorsiones indeseables. Terceras partes que desplieguen o proporcionen sistemas y/o servicios usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) han tener presente que es su responsabilidad abordar y minimizar los riesgos derivados de su uso. Las terceras partes, en cualquier circunstancia, deben cumplir con la normativa aplicable, incluyendo la normativa que concierne al uso de la inteligencia artificial. El propietario o creador de los modelos de ningún modo será responsable de los resultados derivados del uso que las terceras partes hagan de estos modelos. ## Training and evaluation data To fine-tune the model, we used the [Clinical Trials for Evidence-Based-Medicine in Spanish (CT-EBM-SP) corpus](http://www.lllf.uam.es/ESP/nlpdata/wp2/), 265 text samples extracted from Summaries of Product Characteristics available at the [Spanish Drug Information Center (CIMA)](https://cima.aemps.es) and 100 clinical cases with Creative Commons license. The CT-EBM-SP corpus is a collection of 1200 texts about clinical trials studies and clinical trials announcements: - 500 abstracts from journals published under a Creative Commons license, e.g. available in PubMed or the Scientific Electronic Library Online (SciELO) - 700 clinical trials announcements published in the European Clinical Trials Register and Repositorio Español de Estudios Clínicos If you use the CT-EBM-ES resource, please, cite as follows: ``` @article{campillosetal-midm2021,         title = {A clinical trials corpus annotated with UMLS© entities to enhance the access to Evidence-Based Medicine},         author = {Campillos-Llanos, Leonardo and Valverde-Mateos, Ana and Capllonch-Carri{\'o}n, Adri{\'a}n and Moreno-Sandoval, Antonio},         journal = {BMC Medical Informatics and Decision Making},         volume={21}, number={1}, pages={1--19}, year={2021}, publisher={BioMed Central} } ``` ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: we used different seeds for 5 evaluation rounds, and uploaded the model with the best results - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: average 15.2 epochs (±2.39); trained with early stopping if no improvement after 5 epochs (early stopping patience: 5) ### Training results (test set; average and standard deviation of 5 rounds with different seeds) | Precision | Recall | F1 | Accuracy | |:--------------:|:--------------:|:--------------:|:--------------:| | 0.856 (±0.015) | 0.873 (±0.018) | 0.864 (±0.007) | 0.996 (±0.001) | **Results per class (test set; average and standard deviation of 5 rounds with different seeds)** | Class | Precision | Recall | F1 | Support | |:---------------:|:--------------:|:--------------:|:--------------:|:---------:| | Contraindicated | 0.731 (±0.017) | 0.818 (±0.047) | 0.772 (±0.029) | 76 | | Dose | 0.803 (±0.021) | 0.827 (±0.026) | 0.815 (±0.011) | 314 | | Form | 0.977 (±0.016) | 0.900 (±0.020) | 0.937 (±0.011) | 74 | | Route | 0.924 (±0.015) | 0.929 (±0.014) | 0.926 (±0.007) | 288 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.2+cu113 - Datasets 1.18.4 - Tokenizers 0.11.6
[ "CT-EBM-SP", "SCIELO" ]
medspaner/roberta-es-clinical-trials-cases-neg-spec
medspaner
token-classification
[ "transformers", "pytorch", "roberta", "token-classification", "generated_from_trainer", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-10-03T10:10:49Z
2024-10-01T06:25:08+00:00
14
0
--- license: cc-by-nc-4.0 metrics: - precision - recall - f1 - accuracy tags: - generated_from_trainer widget: - text: Pacientes sanos, sin ninguna enfermedad, que no tomen ningún medicamento - text: Sujetos adultos con cáncer de próstata asintomáticos y no tratados previamente - text: Probable infección por SARS-CoV-2 y sospecha de enfermedad autoinmune model-index: - name: roberta-es-clinical-trials-cases-neg-spec results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-es-clinical-trials-cases-neg-spec This named entity recognition model detects negation and speculation entities, and negated and speculated concepts: - Neg_cue: negation cue (e.g. *no*, *sin*) - Negated: negated entity or event (e.g. *sin **dolor***) - Spec_cue: speculation cue (e.g. *posiblemente*) - Speculated: speculated entity or event (e.g. *posiblemente **sobreviva***) The model achieves the following results on the test set (when trained with the training and development set; results are averaged over 5 evaluation rounds): - Precision: 0.857 (±0.003) - Recall: 0.874 (±0.001) - F1: 0.865 (±0.002) - Accuracy: 0.986 (±0.001) ## Model description This model adapts the pre-trained model [bsc-bio-ehr-es](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es), presented in [Pio Carriño et al. (2022)](https://aclanthology.org/2022.bionlp-1.19/). It is fine-tuned to conduct medical named entity recognition on Spanish texts about clinical trials and clinical cases. The model is fine-tuned on the [NUBEs corpus (Lima et al. 2020)](https://aclanthology.org/2020.lrec-1.708/), the [CT-EBM-ES corpus (Campillos-Llanos et al. 2021)](https://bmcmedinformdecismak.biomedcentral.com/articles/10.1186/s12911-021-01395-z) and 100 clinical cases with a Creative Commons license. If you use this model, please, cite as follows: ``` @article{campillosetal2024,         title = {{Hybrid tool for semantic annotation and concept extraction of medical texts in Spanish}},         author = {Campillos-Llanos, Leonardo and Valverde-Mateos, Ana and Capllonch-Carri{\'o}n, Adri{\'a}n},         journal = {BMC Bioinformatics}, year={2024}, publisher={BioMed Central} } ``` ## Intended uses & limitations **Disclosure**: *This model is under development and needs to be improved. It should not be used for medical decision making without human assistance and supervision* This model is intended for a generalist purpose, and may have bias and/or any other undesirable distortions. Third parties who deploy or provide systems and/or services using any of these models (or using systems based on these models) should note that it is their responsibility to mitigate the risks arising from their use. Third parties, in any event, need to comply with applicable regulations, including regulations concerning the use of artificial intelligence. The owner or creator of the models will in no event be liable for any results arising from the use made by third parties of these models. **Descargo de responsabilidad**: *Esta herramienta se encuentra en desarrollo y no debe ser empleada para la toma de decisiones médicas* La finalidad de este modelo es generalista, y se advierte que puede tener sesgos y/u otro tipo de distorsiones indeseables. Terceras partes que desplieguen o proporcionen sistemas y/o servicios usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) han tener presente que es su responsabilidad abordar y minimizar los riesgos derivados de su uso. Las terceras partes, en cualquier circunstancia, deben cumplir con la normativa aplicable, incluyendo la normativa que concierne al uso de la inteligencia artificial. El propietario o creador de los modelos de ningún modo será responsable de los resultados derivados del uso que las terceras partes hagan de estos modelos. ## Training and evaluation data The data used for fine-tuning are: 1) The [Negation and Uncertainty in Spanish Corpus (NUBes)](https://github.com/Vicomtech/NUBes-negation-uncertainty-biomedical-corpus) It is a collection of 29 682 sentences (518 068 tokens) from anonymised health records in Spanish, annotated with negation and uncertainty cues and their scopes. 2) The [Clinical Trials for Evidence-Based-Medicine in Spanish corpus](http://www.lllf.uam.es/ESP/nlpdata/wp2/). It is a collection of 1200 texts about clinical trials studies and clinical trials announcements: - 500 abstracts from journals published under a Creative Commons license, e.g. available in PubMed or the Scientific Electronic Library Online (SciELO) - 700 clinical trials announcements published in the European Clinical Trials Register and Repositorio Español de Estudios Clínicos 3) 100 clinical cases with a Creative Commons license If you use the CT-EBM-ES resource, please, cite as follows: ``` @article{campillosetal-midm2021,         title = {A clinical trials corpus annotated with UMLS© entities to enhance the access to Evidence-Based Medicine},         author = {Campillos-Llanos, Leonardo and Valverde-Mateos, Ana and Capllonch-Carri{\'o}n, Adri{\'a}n and Moreno-Sandoval, Antonio},         journal = {BMC Medical Informatics and Decision Making},         volume={21}, number={1}, pages={1--19}, year={2021}, publisher={BioMed Central} } ``` ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: we used different seeds for 5 evaluation rounds, and uploaded the model with the best results - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: average 14.60 epochs (±2.70); trained with early stopping if no improvement after 5 epochs (early stopping patience: 5) ### Training results (test set; average and standard deviation of 5 rounds with different seeds) | Precision | Recall | F1 | Accuracy | |:--------------:|:--------------:|:--------------:|:--------------:| | 0.857 (±0.003) | 0.874 (±0.001) | 0.865 (±0.002) | 0.986 (±0.001) | **Results per class (test set; average and standard deviation of 5 rounds with different seeds)** | Class | Precision | Recall | F1 | Support | |:-----------:|:--------------:|:--------------:|:--------------:|:---------:| | Neg_cue | 0.957 (±0.002) | 0.962 (±0.002) | 0.959 (±0.001) | 2484 | | Negated | 0.829 (±0.007) | 0.853 (±0.006) | 0.841 (±0.006) | 3132 | | Spec_cue | 0.832 (±0.004) | 0.860 (±0.010) | 0.846 (±0.005) | 756 | | Speculated | 0.718 (±0.014) | 0.731 (±0.008) | 0.724 (±0.009) | 984 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.2+cu113 - Datasets 1.18.4 - Tokenizers 0.11.6
[ "SCIELO" ]
vectoriseai/multilingual-e5-large
vectoriseai
feature-extraction
[ "sentence-transformers", "pytorch", "onnx", "safetensors", "xlm-roberta", "mteb", "Sentence Transformers", "sentence-similarity", "feature-extraction", "multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "hu", "hy", "id", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "om", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sa", "sd", "si", "sk", "sl", "so", "sq", "sr", "su", "sv", "sw", "ta", "te", "th", "tl", "tr", "ug", "uk", "ur", "uz", "vi", "xh", "yi", "zh", "arxiv:2212.03533", "arxiv:2108.08787", "arxiv:2104.08663", "arxiv:2210.07316", "license:mit", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2023-10-10T14:35:34Z
2023-10-10T15:52:06+00:00
14
0
--- language: - multilingual - af - am - ar - as - az - be - bg - bn - br - bs - ca - cs - cy - da - de - el - en - eo - es - et - eu - fa - fi - fr - fy - ga - gd - gl - gu - ha - he - hi - hr - hu - hy - id - is - it - ja - jv - ka - kk - km - kn - ko - ku - ky - la - lo - lt - lv - mg - mk - ml - mn - mr - ms - my - ne - nl - 'no' - om - or - pa - pl - ps - pt - ro - ru - sa - sd - si - sk - sl - so - sq - sr - su - sv - sw - ta - te - th - tl - tr - ug - uk - ur - uz - vi - xh - yi - zh license: mit tags: - mteb - Sentence Transformers - sentence-similarity - feature-extraction - sentence-transformers model-index: - name: multilingual-e5-large results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 79.05970149253731 - type: ap value: 43.486574390835635 - type: f1 value: 73.32700092140148 - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (de) type: mteb/amazon_counterfactual config: de split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 71.22055674518201 - type: ap value: 81.55756710830498 - type: f1 value: 69.28271787752661 - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en-ext) type: mteb/amazon_counterfactual config: en-ext split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 80.41979010494754 - type: ap value: 29.34879922376344 - type: f1 value: 67.62475449011278 - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (ja) type: mteb/amazon_counterfactual config: ja split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 77.8372591006424 - type: ap value: 26.557560591210738 - type: f1 value: 64.96619417368707 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 93.489875 - type: ap value: 90.98758636917603 - type: f1 value: 93.48554819717332 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 47.564 - type: f1 value: 46.75122173518047 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (de) type: mteb/amazon_reviews_multi config: de split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 45.400000000000006 - type: f1 value: 44.17195682400632 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (es) type: mteb/amazon_reviews_multi config: es split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 43.068 - type: f1 value: 42.38155696855596 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (fr) type: mteb/amazon_reviews_multi config: fr split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 41.89 - type: f1 value: 40.84407321682663 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (ja) type: mteb/amazon_reviews_multi config: ja split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 40.120000000000005 - type: f1 value: 39.522976223819114 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (zh) type: mteb/amazon_reviews_multi config: zh split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 38.832 - type: f1 value: 38.0392533394713 - task: type: Retrieval dataset: name: MTEB ArguAna type: arguana config: default split: test revision: None metrics: - type: map_at_1 value: 30.725 - type: map_at_10 value: 46.055 - type: map_at_100 value: 46.900999999999996 - type: map_at_1000 value: 46.911 - type: map_at_3 value: 41.548 - type: map_at_5 value: 44.297 - type: mrr_at_1 value: 31.152 - type: mrr_at_10 value: 46.231 - type: mrr_at_100 value: 47.07 - type: mrr_at_1000 value: 47.08 - type: mrr_at_3 value: 41.738 - type: mrr_at_5 value: 44.468999999999994 - type: ndcg_at_1 value: 30.725 - type: ndcg_at_10 value: 54.379999999999995 - type: ndcg_at_100 value: 58.138 - type: ndcg_at_1000 value: 58.389 - type: ndcg_at_3 value: 45.156 - type: ndcg_at_5 value: 50.123 - type: precision_at_1 value: 30.725 - type: precision_at_10 value: 8.087 - type: precision_at_100 value: 0.9769999999999999 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 18.54 - type: precision_at_5 value: 13.542000000000002 - type: recall_at_1 value: 30.725 - type: recall_at_10 value: 80.868 - type: recall_at_100 value: 97.653 - type: recall_at_1000 value: 99.57300000000001 - type: recall_at_3 value: 55.619 - type: recall_at_5 value: 67.71000000000001 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 44.30960650674069 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 38.427074197498996 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 60.28270056031872 - type: mrr value: 74.38332673789738 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 84.05942144105269 - type: cos_sim_spearman value: 82.51212105850809 - type: euclidean_pearson value: 81.95639829909122 - type: euclidean_spearman value: 82.3717564144213 - type: manhattan_pearson value: 81.79273425468256 - type: manhattan_spearman value: 82.20066817871039 - task: type: BitextMining dataset: name: MTEB BUCC (de-en) type: mteb/bucc-bitext-mining config: de-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 99.46764091858039 - type: f1 value: 99.37717466945023 - type: precision value: 99.33194154488518 - type: recall value: 99.46764091858039 - task: type: BitextMining dataset: name: MTEB BUCC (fr-en) type: mteb/bucc-bitext-mining config: fr-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 98.29407880255337 - type: f1 value: 98.11248073959938 - type: precision value: 98.02443319392472 - type: recall value: 98.29407880255337 - task: type: BitextMining dataset: name: MTEB BUCC (ru-en) type: mteb/bucc-bitext-mining config: ru-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 97.79009352268791 - type: f1 value: 97.5176076665512 - type: precision value: 97.38136473848286 - type: recall value: 97.79009352268791 - task: type: BitextMining dataset: name: MTEB BUCC (zh-en) type: mteb/bucc-bitext-mining config: zh-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 99.26276987888363 - type: f1 value: 99.20133403545726 - type: precision value: 99.17500438827453 - type: recall value: 99.26276987888363 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 84.72727272727273 - type: f1 value: 84.67672206031433 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 35.34220182511161 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 33.4987096128766 - task: type: Retrieval dataset: name: MTEB CQADupstackRetrieval type: BeIR/cqadupstack config: default split: test revision: None metrics: - type: map_at_1 value: 25.558249999999997 - type: map_at_10 value: 34.44425000000001 - type: map_at_100 value: 35.59833333333333 - type: map_at_1000 value: 35.706916666666665 - type: map_at_3 value: 31.691749999999995 - type: map_at_5 value: 33.252916666666664 - type: mrr_at_1 value: 30.252666666666666 - type: mrr_at_10 value: 38.60675 - type: mrr_at_100 value: 39.42666666666666 - type: mrr_at_1000 value: 39.48408333333334 - type: mrr_at_3 value: 36.17441666666665 - type: mrr_at_5 value: 37.56275 - type: ndcg_at_1 value: 30.252666666666666 - type: ndcg_at_10 value: 39.683 - type: ndcg_at_100 value: 44.68541666666667 - type: ndcg_at_1000 value: 46.94316666666668 - type: ndcg_at_3 value: 34.961749999999995 - type: ndcg_at_5 value: 37.215666666666664 - type: precision_at_1 value: 30.252666666666666 - type: precision_at_10 value: 6.904166666666667 - type: precision_at_100 value: 1.0989999999999995 - type: precision_at_1000 value: 0.14733333333333334 - type: precision_at_3 value: 16.037666666666667 - type: precision_at_5 value: 11.413583333333333 - type: recall_at_1 value: 25.558249999999997 - type: recall_at_10 value: 51.13341666666666 - type: recall_at_100 value: 73.08366666666667 - type: recall_at_1000 value: 88.79483333333334 - type: recall_at_3 value: 37.989083333333326 - type: recall_at_5 value: 43.787833333333325 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: climate-fever config: default split: test revision: None metrics: - type: map_at_1 value: 10.338 - type: map_at_10 value: 18.360000000000003 - type: map_at_100 value: 19.942 - type: map_at_1000 value: 20.134 - type: map_at_3 value: 15.174000000000001 - type: map_at_5 value: 16.830000000000002 - type: mrr_at_1 value: 23.257 - type: mrr_at_10 value: 33.768 - type: mrr_at_100 value: 34.707 - type: mrr_at_1000 value: 34.766000000000005 - type: mrr_at_3 value: 30.977 - type: mrr_at_5 value: 32.528 - type: ndcg_at_1 value: 23.257 - type: ndcg_at_10 value: 25.733 - type: ndcg_at_100 value: 32.288 - type: ndcg_at_1000 value: 35.992000000000004 - type: ndcg_at_3 value: 20.866 - type: ndcg_at_5 value: 22.612 - type: precision_at_1 value: 23.257 - type: precision_at_10 value: 8.124 - type: precision_at_100 value: 1.518 - type: precision_at_1000 value: 0.219 - type: precision_at_3 value: 15.679000000000002 - type: precision_at_5 value: 12.117 - type: recall_at_1 value: 10.338 - type: recall_at_10 value: 31.154 - type: recall_at_100 value: 54.161 - type: recall_at_1000 value: 75.21900000000001 - type: recall_at_3 value: 19.427 - type: recall_at_5 value: 24.214 - task: type: Retrieval dataset: name: MTEB DBPedia type: dbpedia-entity config: default split: test revision: None metrics: - type: map_at_1 value: 8.498 - type: map_at_10 value: 19.103 - type: map_at_100 value: 27.375 - type: map_at_1000 value: 28.981 - type: map_at_3 value: 13.764999999999999 - type: map_at_5 value: 15.950000000000001 - type: mrr_at_1 value: 65.5 - type: mrr_at_10 value: 74.53800000000001 - type: mrr_at_100 value: 74.71799999999999 - type: mrr_at_1000 value: 74.725 - type: mrr_at_3 value: 72.792 - type: mrr_at_5 value: 73.554 - type: ndcg_at_1 value: 53.37499999999999 - type: ndcg_at_10 value: 41.286 - type: ndcg_at_100 value: 45.972 - type: ndcg_at_1000 value: 53.123 - type: ndcg_at_3 value: 46.172999999999995 - type: ndcg_at_5 value: 43.033 - type: precision_at_1 value: 65.5 - type: precision_at_10 value: 32.725 - type: precision_at_100 value: 10.683 - type: precision_at_1000 value: 1.978 - type: precision_at_3 value: 50 - type: precision_at_5 value: 41.349999999999994 - type: recall_at_1 value: 8.498 - type: recall_at_10 value: 25.070999999999998 - type: recall_at_100 value: 52.383 - type: recall_at_1000 value: 74.91499999999999 - type: recall_at_3 value: 15.207999999999998 - type: recall_at_5 value: 18.563 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 46.5 - type: f1 value: 41.93833713984145 - task: type: Retrieval dataset: name: MTEB FEVER type: fever config: default split: test revision: None metrics: - type: map_at_1 value: 67.914 - type: map_at_10 value: 78.10000000000001 - type: map_at_100 value: 78.333 - type: map_at_1000 value: 78.346 - type: map_at_3 value: 76.626 - type: map_at_5 value: 77.627 - type: mrr_at_1 value: 72.74199999999999 - type: mrr_at_10 value: 82.414 - type: mrr_at_100 value: 82.511 - type: mrr_at_1000 value: 82.513 - type: mrr_at_3 value: 81.231 - type: mrr_at_5 value: 82.065 - type: ndcg_at_1 value: 72.74199999999999 - type: ndcg_at_10 value: 82.806 - type: ndcg_at_100 value: 83.677 - type: ndcg_at_1000 value: 83.917 - type: ndcg_at_3 value: 80.305 - type: ndcg_at_5 value: 81.843 - type: precision_at_1 value: 72.74199999999999 - type: precision_at_10 value: 10.24 - type: precision_at_100 value: 1.089 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 31.268 - type: precision_at_5 value: 19.706000000000003 - type: recall_at_1 value: 67.914 - type: recall_at_10 value: 92.889 - type: recall_at_100 value: 96.42699999999999 - type: recall_at_1000 value: 97.92 - type: recall_at_3 value: 86.21 - type: recall_at_5 value: 90.036 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: fiqa config: default split: test revision: None metrics: - type: map_at_1 value: 22.166 - type: map_at_10 value: 35.57 - type: map_at_100 value: 37.405 - type: map_at_1000 value: 37.564 - type: map_at_3 value: 30.379 - type: map_at_5 value: 33.324 - type: mrr_at_1 value: 43.519000000000005 - type: mrr_at_10 value: 51.556000000000004 - type: mrr_at_100 value: 52.344 - type: mrr_at_1000 value: 52.373999999999995 - type: mrr_at_3 value: 48.868 - type: mrr_at_5 value: 50.319 - type: ndcg_at_1 value: 43.519000000000005 - type: ndcg_at_10 value: 43.803 - type: ndcg_at_100 value: 50.468999999999994 - type: ndcg_at_1000 value: 53.111 - type: ndcg_at_3 value: 38.893 - type: ndcg_at_5 value: 40.653 - type: precision_at_1 value: 43.519000000000005 - type: precision_at_10 value: 12.253 - type: precision_at_100 value: 1.931 - type: precision_at_1000 value: 0.242 - type: precision_at_3 value: 25.617 - type: precision_at_5 value: 19.383 - type: recall_at_1 value: 22.166 - type: recall_at_10 value: 51.6 - type: recall_at_100 value: 76.574 - type: recall_at_1000 value: 92.192 - type: recall_at_3 value: 34.477999999999994 - type: recall_at_5 value: 41.835 - task: type: Retrieval dataset: name: MTEB HotpotQA type: hotpotqa config: default split: test revision: None metrics: - type: map_at_1 value: 39.041 - type: map_at_10 value: 62.961999999999996 - type: map_at_100 value: 63.79899999999999 - type: map_at_1000 value: 63.854 - type: map_at_3 value: 59.399 - type: map_at_5 value: 61.669 - type: mrr_at_1 value: 78.082 - type: mrr_at_10 value: 84.321 - type: mrr_at_100 value: 84.49600000000001 - type: mrr_at_1000 value: 84.502 - type: mrr_at_3 value: 83.421 - type: mrr_at_5 value: 83.977 - type: ndcg_at_1 value: 78.082 - type: ndcg_at_10 value: 71.229 - type: ndcg_at_100 value: 74.10900000000001 - type: ndcg_at_1000 value: 75.169 - type: ndcg_at_3 value: 66.28699999999999 - type: ndcg_at_5 value: 69.084 - type: precision_at_1 value: 78.082 - type: precision_at_10 value: 14.993 - type: precision_at_100 value: 1.7239999999999998 - type: precision_at_1000 value: 0.186 - type: precision_at_3 value: 42.737 - type: precision_at_5 value: 27.843 - type: recall_at_1 value: 39.041 - type: recall_at_10 value: 74.96300000000001 - type: recall_at_100 value: 86.199 - type: recall_at_1000 value: 93.228 - type: recall_at_3 value: 64.105 - type: recall_at_5 value: 69.608 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 90.23160000000001 - type: ap value: 85.5674856808308 - type: f1 value: 90.18033354786317 - task: type: Retrieval dataset: name: MTEB MSMARCO type: msmarco config: default split: dev revision: None metrics: - type: map_at_1 value: 24.091 - type: map_at_10 value: 36.753 - type: map_at_100 value: 37.913000000000004 - type: map_at_1000 value: 37.958999999999996 - type: map_at_3 value: 32.818999999999996 - type: map_at_5 value: 35.171 - type: mrr_at_1 value: 24.742 - type: mrr_at_10 value: 37.285000000000004 - type: mrr_at_100 value: 38.391999999999996 - type: mrr_at_1000 value: 38.431 - type: mrr_at_3 value: 33.440999999999995 - type: mrr_at_5 value: 35.75 - type: ndcg_at_1 value: 24.742 - type: ndcg_at_10 value: 43.698 - type: ndcg_at_100 value: 49.145 - type: ndcg_at_1000 value: 50.23800000000001 - type: ndcg_at_3 value: 35.769 - type: ndcg_at_5 value: 39.961999999999996 - type: precision_at_1 value: 24.742 - type: precision_at_10 value: 6.7989999999999995 - type: precision_at_100 value: 0.95 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 15.096000000000002 - type: precision_at_5 value: 11.183 - type: recall_at_1 value: 24.091 - type: recall_at_10 value: 65.068 - type: recall_at_100 value: 89.899 - type: recall_at_1000 value: 98.16 - type: recall_at_3 value: 43.68 - type: recall_at_5 value: 53.754999999999995 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 93.66621067031465 - type: f1 value: 93.49622853272142 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (de) type: mteb/mtop_domain config: de split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 91.94702733164272 - type: f1 value: 91.17043441745282 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (es) type: mteb/mtop_domain config: es split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 92.20146764509674 - type: f1 value: 91.98359080555608 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (fr) type: mteb/mtop_domain config: fr split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 88.99780770435328 - type: f1 value: 89.19746342724068 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (hi) type: mteb/mtop_domain config: hi split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 89.78486912871998 - type: f1 value: 89.24578823628642 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (th) type: mteb/mtop_domain config: th split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 88.74502712477394 - type: f1 value: 89.00297573881542 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 77.9046967624259 - type: f1 value: 59.36787125785957 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (de) type: mteb/mtop_intent config: de split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 74.5280360664976 - type: f1 value: 57.17723440888718 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (es) type: mteb/mtop_intent config: es split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 75.44029352901934 - type: f1 value: 54.052855531072964 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (fr) type: mteb/mtop_intent config: fr split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 70.5606013153774 - type: f1 value: 52.62215934386531 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (hi) type: mteb/mtop_intent config: hi split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 73.11581211903908 - type: f1 value: 52.341291845645465 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (th) type: mteb/mtop_intent config: th split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 74.28933092224233 - type: f1 value: 57.07918745504911 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (af) type: mteb/amazon_massive_intent config: af split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 62.38063214525892 - type: f1 value: 59.46463723443009 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (am) type: mteb/amazon_massive_intent config: am split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 56.06926698049766 - type: f1 value: 52.49084283283562 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ar) type: mteb/amazon_massive_intent config: ar split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 60.74983187626093 - type: f1 value: 56.960640620165904 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (az) type: mteb/amazon_massive_intent config: az split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 64.86550100874243 - type: f1 value: 62.47370548140688 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (bn) type: mteb/amazon_massive_intent config: bn split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 63.971082716879636 - type: f1 value: 61.03812421957381 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (cy) type: mteb/amazon_massive_intent config: cy split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 54.98318762609282 - type: f1 value: 51.51207916008392 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (da) type: mteb/amazon_massive_intent config: da split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.45527908540686 - type: f1 value: 66.16631905400318 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (de) type: mteb/amazon_massive_intent config: de split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.32750504371216 - type: f1 value: 66.16755288646591 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (el) type: mteb/amazon_massive_intent config: el split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.09213180901143 - type: f1 value: 66.95654394661507 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 73.75588433086752 - type: f1 value: 71.79973779656923 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (es) type: mteb/amazon_massive_intent config: es split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.49428379287154 - type: f1 value: 68.37494379215734 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (fa) type: mteb/amazon_massive_intent config: fa split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.90921318090115 - type: f1 value: 66.79517376481645 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (fi) type: mteb/amazon_massive_intent config: fi split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.12104909213181 - type: f1 value: 67.29448842879584 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (fr) type: mteb/amazon_massive_intent config: fr split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.34095494283793 - type: f1 value: 67.01134288992947 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (he) type: mteb/amazon_massive_intent config: he split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 67.61264290517822 - type: f1 value: 64.68730512660757 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (hi) type: mteb/amazon_massive_intent config: hi split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 67.79757901815738 - type: f1 value: 65.24938539425598 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (hu) type: mteb/amazon_massive_intent config: hu split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.68728984532616 - type: f1 value: 67.0487169762553 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (hy) type: mteb/amazon_massive_intent config: hy split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 62.07464694014795 - type: f1 value: 59.183532276789286 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (id) type: mteb/amazon_massive_intent config: id split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.04707464694015 - type: f1 value: 67.66829629003848 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (is) type: mteb/amazon_massive_intent config: is split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 62.42434431741762 - type: f1 value: 59.01617226544757 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (it) type: mteb/amazon_massive_intent config: it split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.53127101546738 - type: f1 value: 68.10033760906255 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ja) type: mteb/amazon_massive_intent config: ja split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 72.50504371217215 - type: f1 value: 69.74931103158923 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (jv) type: mteb/amazon_massive_intent config: jv split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 57.91190316072628 - type: f1 value: 54.05551136648796 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ka) type: mteb/amazon_massive_intent config: ka split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 51.78211163416275 - type: f1 value: 49.874888544058535 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (km) type: mteb/amazon_massive_intent config: km split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 47.017484868863484 - type: f1 value: 44.53364263352014 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (kn) type: mteb/amazon_massive_intent config: kn split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 62.16207128446537 - type: f1 value: 59.01185692320829 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ko) type: mteb/amazon_massive_intent config: ko split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.42501681237391 - type: f1 value: 67.13169450166086 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (lv) type: mteb/amazon_massive_intent config: lv split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 67.0780094149294 - type: f1 value: 64.41720167850707 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ml) type: mteb/amazon_massive_intent config: ml split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 65.57162071284466 - type: f1 value: 62.414138683804424 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (mn) type: mteb/amazon_massive_intent config: mn split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 61.71149966375252 - type: f1 value: 58.594805125087234 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ms) type: mteb/amazon_massive_intent config: ms split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 66.03900470746471 - type: f1 value: 63.87937257883887 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (my) type: mteb/amazon_massive_intent config: my split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 60.8776059179556 - type: f1 value: 57.48587618059131 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (nb) type: mteb/amazon_massive_intent config: nb split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.87895090786819 - type: f1 value: 66.8141299430347 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (nl) type: mteb/amazon_massive_intent config: nl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.45057162071285 - type: f1 value: 67.46444039673516 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (pl) type: mteb/amazon_massive_intent config: pl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.546738399462 - type: f1 value: 68.63640876702655 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (pt) type: mteb/amazon_massive_intent config: pt split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.72965702757229 - type: f1 value: 68.54119560379115 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ro) type: mteb/amazon_massive_intent config: ro split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 68.35574983187625 - type: f1 value: 65.88844917691927 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ru) type: mteb/amazon_massive_intent config: ru split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.70477471418964 - type: f1 value: 69.19665697061978 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (sl) type: mteb/amazon_massive_intent config: sl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 67.0880968392737 - type: f1 value: 64.76962317666086 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (sq) type: mteb/amazon_massive_intent config: sq split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 65.18493611297916 - type: f1 value: 62.49984559035371 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (sv) type: mteb/amazon_massive_intent config: sv split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.75857431069265 - type: f1 value: 69.20053687623418 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (sw) type: mteb/amazon_massive_intent config: sw split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 58.500336247478145 - type: f1 value: 55.2972398687929 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ta) type: mteb/amazon_massive_intent config: ta split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 62.68997982515132 - type: f1 value: 59.36848202755348 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (te) type: mteb/amazon_massive_intent config: te split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 63.01950235373235 - type: f1 value: 60.09351954625423 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (th) type: mteb/amazon_massive_intent config: th split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 68.29186281102892 - type: f1 value: 67.57860496703447 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (tl) type: mteb/amazon_massive_intent config: tl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 64.77471418964357 - type: f1 value: 61.913983147713836 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (tr) type: mteb/amazon_massive_intent config: tr split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.87222595830532 - type: f1 value: 66.03679033708141 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ur) type: mteb/amazon_massive_intent config: ur split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 64.04505716207127 - type: f1 value: 61.28569169817908 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (vi) type: mteb/amazon_massive_intent config: vi split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.38466711499663 - type: f1 value: 67.20532357036844 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (zh-CN) type: mteb/amazon_massive_intent config: zh-CN split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.12306657700067 - type: f1 value: 68.91251226588182 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (zh-TW) type: mteb/amazon_massive_intent config: zh-TW split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 66.20040349697378 - type: f1 value: 66.02657347714175 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (af) type: mteb/amazon_massive_scenario config: af split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 68.73907195696032 - type: f1 value: 66.98484521791418 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (am) type: mteb/amazon_massive_scenario config: am split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 60.58843308675185 - type: f1 value: 58.95591723092005 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ar) type: mteb/amazon_massive_scenario config: ar split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 66.22730329522528 - type: f1 value: 66.0894499712115 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (az) type: mteb/amazon_massive_scenario config: az split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 66.48285137861465 - type: f1 value: 65.21963176785157 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (bn) type: mteb/amazon_massive_scenario config: bn split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 67.74714189643578 - type: f1 value: 66.8212192745412 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (cy) type: mteb/amazon_massive_scenario config: cy split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 59.09213180901143 - type: f1 value: 56.70735546356339 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (da) type: mteb/amazon_massive_scenario config: da split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 75.05716207128448 - type: f1 value: 74.8413712365364 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (de) type: mteb/amazon_massive_scenario config: de split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.69737726967047 - type: f1 value: 74.7664341963 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (el) type: mteb/amazon_massive_scenario config: el split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.90383322125084 - type: f1 value: 73.59201554448323 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 77.51176866173503 - type: f1 value: 77.46104434577758 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (es) type: mteb/amazon_massive_scenario config: es split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.31069266980496 - type: f1 value: 74.61048660675635 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (fa) type: mteb/amazon_massive_scenario config: fa split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 72.95225285810356 - type: f1 value: 72.33160006574627 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (fi) type: mteb/amazon_massive_scenario config: fi split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.12373907195696 - type: f1 value: 73.20921012557481 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (fr) type: mteb/amazon_massive_scenario config: fr split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.86684599865501 - type: f1 value: 73.82348774610831 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (he) type: mteb/amazon_massive_scenario config: he split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.40215198386012 - type: f1 value: 71.11945183971858 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (hi) type: mteb/amazon_massive_scenario config: hi split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 72.12844653665098 - type: f1 value: 71.34450495911766 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (hu) type: mteb/amazon_massive_scenario config: hu split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.52252858103566 - type: f1 value: 73.98878711342999 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (hy) type: mteb/amazon_massive_scenario config: hy split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 64.93611297915265 - type: f1 value: 63.723200467653385 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (id) type: mteb/amazon_massive_scenario config: id split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.11903160726295 - type: f1 value: 73.82138439467096 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (is) type: mteb/amazon_massive_scenario config: is split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 67.15198386012105 - type: f1 value: 66.02172193802167 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (it) type: mteb/amazon_massive_scenario config: it split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.32414256893072 - type: f1 value: 74.30943421170574 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ja) type: mteb/amazon_massive_scenario config: ja split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 77.46805648957633 - type: f1 value: 77.62808409298209 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (jv) type: mteb/amazon_massive_scenario config: jv split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 63.318762609280434 - type: f1 value: 62.094284066075076 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ka) type: mteb/amazon_massive_scenario config: ka split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 58.34902488231338 - type: f1 value: 57.12893860987984 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (km) type: mteb/amazon_massive_scenario config: km split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 50.88433086751849 - type: f1 value: 48.2272350802058 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (kn) type: mteb/amazon_massive_scenario config: kn split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 66.4425016812374 - type: f1 value: 64.61463095996173 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ko) type: mteb/amazon_massive_scenario config: ko split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 75.04707464694015 - type: f1 value: 75.05099199098998 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (lv) type: mteb/amazon_massive_scenario config: lv split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 70.50437121721586 - type: f1 value: 69.83397721096314 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ml) type: mteb/amazon_massive_scenario config: ml split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 69.94283792871553 - type: f1 value: 68.8704663703913 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (mn) type: mteb/amazon_massive_scenario config: mn split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 64.79488903833222 - type: f1 value: 63.615424063345436 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ms) type: mteb/amazon_massive_scenario config: ms split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 69.88231338264963 - type: f1 value: 68.57892302593237 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (my) type: mteb/amazon_massive_scenario config: my split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 63.248150638870214 - type: f1 value: 61.06680605338809 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (nb) type: mteb/amazon_massive_scenario config: nb split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.84196368527236 - type: f1 value: 74.52566464968763 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (nl) type: mteb/amazon_massive_scenario config: nl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.8285137861466 - type: f1 value: 74.8853197608802 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (pl) type: mteb/amazon_massive_scenario config: pl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.13248150638869 - type: f1 value: 74.3982040999179 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (pt) type: mteb/amazon_massive_scenario config: pt split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.49024882313383 - type: f1 value: 73.82153848368573 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ro) type: mteb/amazon_massive_scenario config: ro split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.72158708809684 - type: f1 value: 71.85049433180541 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ru) type: mteb/amazon_massive_scenario config: ru split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 75.137861466039 - type: f1 value: 75.37628348188467 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sl) type: mteb/amazon_massive_scenario config: sl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.86953597848016 - type: f1 value: 71.87537624521661 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sq) type: mteb/amazon_massive_scenario config: sq split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 70.27572293207801 - type: f1 value: 68.80017302344231 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sv) type: mteb/amazon_massive_scenario config: sv split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 76.09952925353059 - type: f1 value: 76.07992707688408 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sw) type: mteb/amazon_massive_scenario config: sw split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 63.140551445864155 - type: f1 value: 61.73855010331415 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ta) type: mteb/amazon_massive_scenario config: ta split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 66.27774041694687 - type: f1 value: 64.83664868894539 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (te) type: mteb/amazon_massive_scenario config: te split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 66.69468728984533 - type: f1 value: 64.76239666920868 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (th) type: mteb/amazon_massive_scenario config: th split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.44653665097512 - type: f1 value: 73.14646052013873 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (tl) type: mteb/amazon_massive_scenario config: tl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 67.71351714862139 - type: f1 value: 66.67212180163382 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (tr) type: mteb/amazon_massive_scenario config: tr split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.9946200403497 - type: f1 value: 73.87348793725525 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ur) type: mteb/amazon_massive_scenario config: ur split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 68.15400134498992 - type: f1 value: 67.09433241421094 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (vi) type: mteb/amazon_massive_scenario config: vi split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.11365164761264 - type: f1 value: 73.59502539433753 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (zh-CN) type: mteb/amazon_massive_scenario config: zh-CN split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 76.82582380632145 - type: f1 value: 76.89992945316313 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (zh-TW) type: mteb/amazon_massive_scenario config: zh-TW split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.81237390719569 - type: f1 value: 72.36499770986265 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 31.480506569594695 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 29.71252128004552 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 31.421396787056548 - type: mrr value: 32.48155274872267 - task: type: Retrieval dataset: name: MTEB NFCorpus type: nfcorpus config: default split: test revision: None metrics: - type: map_at_1 value: 5.595 - type: map_at_10 value: 12.642000000000001 - type: map_at_100 value: 15.726 - type: map_at_1000 value: 17.061999999999998 - type: map_at_3 value: 9.125 - type: map_at_5 value: 10.866000000000001 - type: mrr_at_1 value: 43.344 - type: mrr_at_10 value: 52.227999999999994 - type: mrr_at_100 value: 52.898999999999994 - type: mrr_at_1000 value: 52.944 - type: mrr_at_3 value: 49.845 - type: mrr_at_5 value: 51.115 - type: ndcg_at_1 value: 41.949999999999996 - type: ndcg_at_10 value: 33.995 - type: ndcg_at_100 value: 30.869999999999997 - type: ndcg_at_1000 value: 39.487 - type: ndcg_at_3 value: 38.903999999999996 - type: ndcg_at_5 value: 37.236999999999995 - type: precision_at_1 value: 43.344 - type: precision_at_10 value: 25.480000000000004 - type: precision_at_100 value: 7.672 - type: precision_at_1000 value: 2.028 - type: precision_at_3 value: 36.636 - type: precision_at_5 value: 32.632 - type: recall_at_1 value: 5.595 - type: recall_at_10 value: 16.466 - type: recall_at_100 value: 31.226 - type: recall_at_1000 value: 62.778999999999996 - type: recall_at_3 value: 9.931 - type: recall_at_5 value: 12.884 - task: type: Retrieval dataset: name: MTEB NQ type: nq config: default split: test revision: None metrics: - type: map_at_1 value: 40.414 - type: map_at_10 value: 56.754000000000005 - type: map_at_100 value: 57.457 - type: map_at_1000 value: 57.477999999999994 - type: map_at_3 value: 52.873999999999995 - type: map_at_5 value: 55.175 - type: mrr_at_1 value: 45.278 - type: mrr_at_10 value: 59.192 - type: mrr_at_100 value: 59.650000000000006 - type: mrr_at_1000 value: 59.665 - type: mrr_at_3 value: 56.141 - type: mrr_at_5 value: 57.998000000000005 - type: ndcg_at_1 value: 45.278 - type: ndcg_at_10 value: 64.056 - type: ndcg_at_100 value: 66.89 - type: ndcg_at_1000 value: 67.364 - type: ndcg_at_3 value: 56.97 - type: ndcg_at_5 value: 60.719 - type: precision_at_1 value: 45.278 - type: precision_at_10 value: 9.994 - type: precision_at_100 value: 1.165 - type: precision_at_1000 value: 0.121 - type: precision_at_3 value: 25.512 - type: precision_at_5 value: 17.509 - type: recall_at_1 value: 40.414 - type: recall_at_10 value: 83.596 - type: recall_at_100 value: 95.72 - type: recall_at_1000 value: 99.24 - type: recall_at_3 value: 65.472 - type: recall_at_5 value: 74.039 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: quora config: default split: test revision: None metrics: - type: map_at_1 value: 70.352 - type: map_at_10 value: 84.369 - type: map_at_100 value: 85.02499999999999 - type: map_at_1000 value: 85.04 - type: map_at_3 value: 81.42399999999999 - type: map_at_5 value: 83.279 - type: mrr_at_1 value: 81.05 - type: mrr_at_10 value: 87.401 - type: mrr_at_100 value: 87.504 - type: mrr_at_1000 value: 87.505 - type: mrr_at_3 value: 86.443 - type: mrr_at_5 value: 87.10799999999999 - type: ndcg_at_1 value: 81.04 - type: ndcg_at_10 value: 88.181 - type: ndcg_at_100 value: 89.411 - type: ndcg_at_1000 value: 89.507 - type: ndcg_at_3 value: 85.28099999999999 - type: ndcg_at_5 value: 86.888 - type: precision_at_1 value: 81.04 - type: precision_at_10 value: 13.406 - type: precision_at_100 value: 1.5350000000000001 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 37.31 - type: precision_at_5 value: 24.54 - type: recall_at_1 value: 70.352 - type: recall_at_10 value: 95.358 - type: recall_at_100 value: 99.541 - type: recall_at_1000 value: 99.984 - type: recall_at_3 value: 87.111 - type: recall_at_5 value: 91.643 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 46.54068723291946 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 63.216287629895994 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: scidocs config: default split: test revision: None metrics: - type: map_at_1 value: 4.023000000000001 - type: map_at_10 value: 10.071 - type: map_at_100 value: 11.892 - type: map_at_1000 value: 12.196 - type: map_at_3 value: 7.234 - type: map_at_5 value: 8.613999999999999 - type: mrr_at_1 value: 19.900000000000002 - type: mrr_at_10 value: 30.516 - type: mrr_at_100 value: 31.656000000000002 - type: mrr_at_1000 value: 31.723000000000003 - type: mrr_at_3 value: 27.400000000000002 - type: mrr_at_5 value: 29.270000000000003 - type: ndcg_at_1 value: 19.900000000000002 - type: ndcg_at_10 value: 17.474 - type: ndcg_at_100 value: 25.020999999999997 - type: ndcg_at_1000 value: 30.728 - type: ndcg_at_3 value: 16.588 - type: ndcg_at_5 value: 14.498 - type: precision_at_1 value: 19.900000000000002 - type: precision_at_10 value: 9.139999999999999 - type: precision_at_100 value: 2.011 - type: precision_at_1000 value: 0.33899999999999997 - type: precision_at_3 value: 15.667 - type: precision_at_5 value: 12.839999999999998 - type: recall_at_1 value: 4.023000000000001 - type: recall_at_10 value: 18.497 - type: recall_at_100 value: 40.8 - type: recall_at_1000 value: 68.812 - type: recall_at_3 value: 9.508 - type: recall_at_5 value: 12.983 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 83.967008785134 - type: cos_sim_spearman value: 80.23142141101837 - type: euclidean_pearson value: 81.20166064704539 - type: euclidean_spearman value: 80.18961335654585 - type: manhattan_pearson value: 81.13925443187625 - type: manhattan_spearman value: 80.07948723044424 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 86.94262461316023 - type: cos_sim_spearman value: 80.01596278563865 - type: euclidean_pearson value: 83.80799622922581 - type: euclidean_spearman value: 79.94984954947103 - type: manhattan_pearson value: 83.68473841756281 - type: manhattan_spearman value: 79.84990707951822 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 80.57346443146068 - type: cos_sim_spearman value: 81.54689837570866 - type: euclidean_pearson value: 81.10909881516007 - type: euclidean_spearman value: 81.56746243261762 - type: manhattan_pearson value: 80.87076036186582 - type: manhattan_spearman value: 81.33074987964402 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 79.54733787179849 - type: cos_sim_spearman value: 77.72202105610411 - type: euclidean_pearson value: 78.9043595478849 - type: euclidean_spearman value: 77.93422804309435 - type: manhattan_pearson value: 78.58115121621368 - type: manhattan_spearman value: 77.62508135122033 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 88.59880017237558 - type: cos_sim_spearman value: 89.31088630824758 - type: euclidean_pearson value: 88.47069261564656 - type: euclidean_spearman value: 89.33581971465233 - type: manhattan_pearson value: 88.40774264100956 - type: manhattan_spearman value: 89.28657485627835 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 84.08055117917084 - type: cos_sim_spearman value: 85.78491813080304 - type: euclidean_pearson value: 84.99329155500392 - type: euclidean_spearman value: 85.76728064677287 - type: manhattan_pearson value: 84.87947428989587 - type: manhattan_spearman value: 85.62429454917464 - task: type: STS dataset: name: MTEB STS17 (ko-ko) type: mteb/sts17-crosslingual-sts config: ko-ko split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 82.14190939287384 - type: cos_sim_spearman value: 82.27331573306041 - type: euclidean_pearson value: 81.891896953716 - type: euclidean_spearman value: 82.37695542955998 - type: manhattan_pearson value: 81.73123869460504 - type: manhattan_spearman value: 82.19989168441421 - task: type: STS dataset: name: MTEB STS17 (ar-ar) type: mteb/sts17-crosslingual-sts config: ar-ar split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 76.84695301843362 - type: cos_sim_spearman value: 77.87790986014461 - type: euclidean_pearson value: 76.91981583106315 - type: euclidean_spearman value: 77.88154772749589 - type: manhattan_pearson value: 76.94953277451093 - type: manhattan_spearman value: 77.80499230728604 - task: type: STS dataset: name: MTEB STS17 (en-ar) type: mteb/sts17-crosslingual-sts config: en-ar split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 75.44657840482016 - type: cos_sim_spearman value: 75.05531095119674 - type: euclidean_pearson value: 75.88161755829299 - type: euclidean_spearman value: 74.73176238219332 - type: manhattan_pearson value: 75.63984765635362 - type: manhattan_spearman value: 74.86476440770737 - task: type: STS dataset: name: MTEB STS17 (en-de) type: mteb/sts17-crosslingual-sts config: en-de split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 85.64700140524133 - type: cos_sim_spearman value: 86.16014210425672 - type: euclidean_pearson value: 86.49086860843221 - type: euclidean_spearman value: 86.09729326815614 - type: manhattan_pearson value: 86.43406265125513 - type: manhattan_spearman value: 86.17740150939994 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 87.91170098764921 - type: cos_sim_spearman value: 88.12437004058931 - type: euclidean_pearson value: 88.81828254494437 - type: euclidean_spearman value: 88.14831794572122 - type: manhattan_pearson value: 88.93442183448961 - type: manhattan_spearman value: 88.15254630778304 - task: type: STS dataset: name: MTEB STS17 (en-tr) type: mteb/sts17-crosslingual-sts config: en-tr split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 72.91390577997292 - type: cos_sim_spearman value: 71.22979457536074 - type: euclidean_pearson value: 74.40314008106749 - type: euclidean_spearman value: 72.54972136083246 - type: manhattan_pearson value: 73.85687539530218 - type: manhattan_spearman value: 72.09500771742637 - task: type: STS dataset: name: MTEB STS17 (es-en) type: mteb/sts17-crosslingual-sts config: es-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 80.9301067983089 - type: cos_sim_spearman value: 80.74989828346473 - type: euclidean_pearson value: 81.36781301814257 - type: euclidean_spearman value: 80.9448819964426 - type: manhattan_pearson value: 81.0351322685609 - type: manhattan_spearman value: 80.70192121844177 - task: type: STS dataset: name: MTEB STS17 (es-es) type: mteb/sts17-crosslingual-sts config: es-es split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 87.13820465980005 - type: cos_sim_spearman value: 86.73532498758757 - type: euclidean_pearson value: 87.21329451846637 - type: euclidean_spearman value: 86.57863198601002 - type: manhattan_pearson value: 87.06973713818554 - type: manhattan_spearman value: 86.47534918791499 - task: type: STS dataset: name: MTEB STS17 (fr-en) type: mteb/sts17-crosslingual-sts config: fr-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 85.48720108904415 - type: cos_sim_spearman value: 85.62221757068387 - type: euclidean_pearson value: 86.1010129512749 - type: euclidean_spearman value: 85.86580966509942 - type: manhattan_pearson value: 86.26800938808971 - type: manhattan_spearman value: 85.88902721678429 - task: type: STS dataset: name: MTEB STS17 (it-en) type: mteb/sts17-crosslingual-sts config: it-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 83.98021347333516 - type: cos_sim_spearman value: 84.53806553803501 - type: euclidean_pearson value: 84.61483347248364 - type: euclidean_spearman value: 85.14191408011702 - type: manhattan_pearson value: 84.75297588825967 - type: manhattan_spearman value: 85.33176753669242 - task: type: STS dataset: name: MTEB STS17 (nl-en) type: mteb/sts17-crosslingual-sts config: nl-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 84.51856644893233 - type: cos_sim_spearman value: 85.27510748506413 - type: euclidean_pearson value: 85.09886861540977 - type: euclidean_spearman value: 85.62579245860887 - type: manhattan_pearson value: 84.93017860464607 - type: manhattan_spearman value: 85.5063988898453 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 62.581573200584195 - type: cos_sim_spearman value: 63.05503590247928 - type: euclidean_pearson value: 63.652564812602094 - type: euclidean_spearman value: 62.64811520876156 - type: manhattan_pearson value: 63.506842893061076 - type: manhattan_spearman value: 62.51289573046917 - task: type: STS dataset: name: MTEB STS22 (de) type: mteb/sts22-crosslingual-sts config: de split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 48.2248801729127 - type: cos_sim_spearman value: 56.5936604678561 - type: euclidean_pearson value: 43.98149464089 - type: euclidean_spearman value: 56.108561882423615 - type: manhattan_pearson value: 43.86880305903564 - type: manhattan_spearman value: 56.04671150510166 - task: type: STS dataset: name: MTEB STS22 (es) type: mteb/sts22-crosslingual-sts config: es split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 55.17564527009831 - type: cos_sim_spearman value: 64.57978560979488 - type: euclidean_pearson value: 58.8818330154583 - type: euclidean_spearman value: 64.99214839071281 - type: manhattan_pearson value: 58.72671436121381 - type: manhattan_spearman value: 65.10713416616109 - task: type: STS dataset: name: MTEB STS22 (pl) type: mteb/sts22-crosslingual-sts config: pl split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 26.772131864023297 - type: cos_sim_spearman value: 34.68200792408681 - type: euclidean_pearson value: 16.68082419005441 - type: euclidean_spearman value: 34.83099932652166 - type: manhattan_pearson value: 16.52605949659529 - type: manhattan_spearman value: 34.82075801399475 - task: type: STS dataset: name: MTEB STS22 (tr) type: mteb/sts22-crosslingual-sts config: tr split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 54.42415189043831 - type: cos_sim_spearman value: 63.54594264576758 - type: euclidean_pearson value: 57.36577498297745 - type: euclidean_spearman value: 63.111466379158074 - type: manhattan_pearson value: 57.584543715873885 - type: manhattan_spearman value: 63.22361054139183 - task: type: STS dataset: name: MTEB STS22 (ar) type: mteb/sts22-crosslingual-sts config: ar split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 47.55216762405518 - type: cos_sim_spearman value: 56.98670142896412 - type: euclidean_pearson value: 50.15318757562699 - type: euclidean_spearman value: 56.524941926541906 - type: manhattan_pearson value: 49.955618528674904 - type: manhattan_spearman value: 56.37102209240117 - task: type: STS dataset: name: MTEB STS22 (ru) type: mteb/sts22-crosslingual-sts config: ru split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 49.20540980338571 - type: cos_sim_spearman value: 59.9009453504406 - type: euclidean_pearson value: 49.557749853620535 - type: euclidean_spearman value: 59.76631621172456 - type: manhattan_pearson value: 49.62340591181147 - type: manhattan_spearman value: 59.94224880322436 - task: type: STS dataset: name: MTEB STS22 (zh) type: mteb/sts22-crosslingual-sts config: zh split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 51.508169956576985 - type: cos_sim_spearman value: 66.82461565306046 - type: euclidean_pearson value: 56.2274426480083 - type: euclidean_spearman value: 66.6775323848333 - type: manhattan_pearson value: 55.98277796300661 - type: manhattan_spearman value: 66.63669848497175 - task: type: STS dataset: name: MTEB STS22 (fr) type: mteb/sts22-crosslingual-sts config: fr split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 72.86478788045507 - type: cos_sim_spearman value: 76.7946552053193 - type: euclidean_pearson value: 75.01598530490269 - type: euclidean_spearman value: 76.83618917858281 - type: manhattan_pearson value: 74.68337628304332 - type: manhattan_spearman value: 76.57480204017773 - task: type: STS dataset: name: MTEB STS22 (de-en) type: mteb/sts22-crosslingual-sts config: de-en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 55.922619099401984 - type: cos_sim_spearman value: 56.599362477240774 - type: euclidean_pearson value: 56.68307052369783 - type: euclidean_spearman value: 54.28760436777401 - type: manhattan_pearson value: 56.67763566500681 - type: manhattan_spearman value: 53.94619541711359 - task: type: STS dataset: name: MTEB STS22 (es-en) type: mteb/sts22-crosslingual-sts config: es-en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 66.74357206710913 - type: cos_sim_spearman value: 72.5208244925311 - type: euclidean_pearson value: 67.49254562186032 - type: euclidean_spearman value: 72.02469076238683 - type: manhattan_pearson value: 67.45251772238085 - type: manhattan_spearman value: 72.05538819984538 - task: type: STS dataset: name: MTEB STS22 (it) type: mteb/sts22-crosslingual-sts config: it split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 71.25734330033191 - type: cos_sim_spearman value: 76.98349083946823 - type: euclidean_pearson value: 73.71642838667736 - type: euclidean_spearman value: 77.01715504651384 - type: manhattan_pearson value: 73.61712711868105 - type: manhattan_spearman value: 77.01392571153896 - task: type: STS dataset: name: MTEB STS22 (pl-en) type: mteb/sts22-crosslingual-sts config: pl-en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 63.18215462781212 - type: cos_sim_spearman value: 65.54373266117607 - type: euclidean_pearson value: 64.54126095439005 - type: euclidean_spearman value: 65.30410369102711 - type: manhattan_pearson value: 63.50332221148234 - type: manhattan_spearman value: 64.3455878104313 - task: type: STS dataset: name: MTEB STS22 (zh-en) type: mteb/sts22-crosslingual-sts config: zh-en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 62.30509221440029 - type: cos_sim_spearman value: 65.99582704642478 - type: euclidean_pearson value: 63.43818859884195 - type: euclidean_spearman value: 66.83172582815764 - type: manhattan_pearson value: 63.055779168508764 - type: manhattan_spearman value: 65.49585020501449 - task: type: STS dataset: name: MTEB STS22 (es-it) type: mteb/sts22-crosslingual-sts config: es-it split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 59.587830825340404 - type: cos_sim_spearman value: 68.93467614588089 - type: euclidean_pearson value: 62.3073527367404 - type: euclidean_spearman value: 69.69758171553175 - type: manhattan_pearson value: 61.9074580815789 - type: manhattan_spearman value: 69.57696375597865 - task: type: STS dataset: name: MTEB STS22 (de-fr) type: mteb/sts22-crosslingual-sts config: de-fr split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 57.143220125577066 - type: cos_sim_spearman value: 67.78857859159226 - type: euclidean_pearson value: 55.58225107923733 - type: euclidean_spearman value: 67.80662907184563 - type: manhattan_pearson value: 56.24953502726514 - type: manhattan_spearman value: 67.98262125431616 - task: type: STS dataset: name: MTEB STS22 (de-pl) type: mteb/sts22-crosslingual-sts config: de-pl split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 21.826928900322066 - type: cos_sim_spearman value: 49.578506634400405 - type: euclidean_pearson value: 27.939890138843214 - type: euclidean_spearman value: 52.71950519136242 - type: manhattan_pearson value: 26.39878683847546 - type: manhattan_spearman value: 47.54609580342499 - task: type: STS dataset: name: MTEB STS22 (fr-pl) type: mteb/sts22-crosslingual-sts config: fr-pl split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 57.27603854632001 - type: cos_sim_spearman value: 50.709255283710995 - type: euclidean_pearson value: 59.5419024445929 - type: euclidean_spearman value: 50.709255283710995 - type: manhattan_pearson value: 59.03256832438492 - type: manhattan_spearman value: 61.97797868009122 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 85.00757054859712 - type: cos_sim_spearman value: 87.29283629622222 - type: euclidean_pearson value: 86.54824171775536 - type: euclidean_spearman value: 87.24364730491402 - type: manhattan_pearson value: 86.5062156915074 - type: manhattan_spearman value: 87.15052170378574 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 82.03549357197389 - type: mrr value: 95.05437645143527 - task: type: Retrieval dataset: name: MTEB SciFact type: scifact config: default split: test revision: None metrics: - type: map_at_1 value: 57.260999999999996 - type: map_at_10 value: 66.259 - type: map_at_100 value: 66.884 - type: map_at_1000 value: 66.912 - type: map_at_3 value: 63.685 - type: map_at_5 value: 65.35499999999999 - type: mrr_at_1 value: 60.333000000000006 - type: mrr_at_10 value: 67.5 - type: mrr_at_100 value: 68.013 - type: mrr_at_1000 value: 68.038 - type: mrr_at_3 value: 65.61099999999999 - type: mrr_at_5 value: 66.861 - type: ndcg_at_1 value: 60.333000000000006 - type: ndcg_at_10 value: 70.41 - type: ndcg_at_100 value: 73.10600000000001 - type: ndcg_at_1000 value: 73.846 - type: ndcg_at_3 value: 66.133 - type: ndcg_at_5 value: 68.499 - type: precision_at_1 value: 60.333000000000006 - type: precision_at_10 value: 9.232999999999999 - type: precision_at_100 value: 1.0630000000000002 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 25.667 - type: precision_at_5 value: 17.067 - type: recall_at_1 value: 57.260999999999996 - type: recall_at_10 value: 81.94399999999999 - type: recall_at_100 value: 93.867 - type: recall_at_1000 value: 99.667 - type: recall_at_3 value: 70.339 - type: recall_at_5 value: 76.25 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.74356435643564 - type: cos_sim_ap value: 93.13411948212683 - type: cos_sim_f1 value: 86.80521991300147 - type: cos_sim_precision value: 84.00374181478017 - type: cos_sim_recall value: 89.8 - type: dot_accuracy value: 99.67920792079208 - type: dot_ap value: 89.27277565444479 - type: dot_f1 value: 83.9276990718124 - type: dot_precision value: 82.04393505253104 - type: dot_recall value: 85.9 - type: euclidean_accuracy value: 99.74257425742574 - type: euclidean_ap value: 93.17993008259062 - type: euclidean_f1 value: 86.69396110542476 - type: euclidean_precision value: 88.78406708595388 - type: euclidean_recall value: 84.7 - type: manhattan_accuracy value: 99.74257425742574 - type: manhattan_ap value: 93.14413755550099 - type: manhattan_f1 value: 86.82483594144371 - type: manhattan_precision value: 87.66564729867483 - type: manhattan_recall value: 86 - type: max_accuracy value: 99.74356435643564 - type: max_ap value: 93.17993008259062 - type: max_f1 value: 86.82483594144371 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 57.525863806168566 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 32.68850574423839 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 49.71580650644033 - type: mrr value: 50.50971903913081 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 29.152190498799484 - type: cos_sim_spearman value: 29.686180371952727 - type: dot_pearson value: 27.248664793816342 - type: dot_spearman value: 28.37748983721745 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: trec-covid config: default split: test revision: None metrics: - type: map_at_1 value: 0.20400000000000001 - type: map_at_10 value: 1.6209999999999998 - type: map_at_100 value: 9.690999999999999 - type: map_at_1000 value: 23.733 - type: map_at_3 value: 0.575 - type: map_at_5 value: 0.885 - type: mrr_at_1 value: 78 - type: mrr_at_10 value: 86.56700000000001 - type: mrr_at_100 value: 86.56700000000001 - type: mrr_at_1000 value: 86.56700000000001 - type: mrr_at_3 value: 85.667 - type: mrr_at_5 value: 86.56700000000001 - type: ndcg_at_1 value: 76 - type: ndcg_at_10 value: 71.326 - type: ndcg_at_100 value: 54.208999999999996 - type: ndcg_at_1000 value: 49.252 - type: ndcg_at_3 value: 74.235 - type: ndcg_at_5 value: 73.833 - type: precision_at_1 value: 78 - type: precision_at_10 value: 74.8 - type: precision_at_100 value: 55.50000000000001 - type: precision_at_1000 value: 21.836 - type: precision_at_3 value: 78 - type: precision_at_5 value: 78 - type: recall_at_1 value: 0.20400000000000001 - type: recall_at_10 value: 1.894 - type: recall_at_100 value: 13.245999999999999 - type: recall_at_1000 value: 46.373 - type: recall_at_3 value: 0.613 - type: recall_at_5 value: 0.991 - task: type: BitextMining dataset: name: MTEB Tatoeba (sqi-eng) type: mteb/tatoeba-bitext-mining config: sqi-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.89999999999999 - type: f1 value: 94.69999999999999 - type: precision value: 94.11666666666667 - type: recall value: 95.89999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (fry-eng) type: mteb/tatoeba-bitext-mining config: fry-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 68.20809248554913 - type: f1 value: 63.431048720066066 - type: precision value: 61.69143958161298 - type: recall value: 68.20809248554913 - task: type: BitextMining dataset: name: MTEB Tatoeba (kur-eng) type: mteb/tatoeba-bitext-mining config: kur-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 71.21951219512195 - type: f1 value: 66.82926829268293 - type: precision value: 65.1260162601626 - type: recall value: 71.21951219512195 - task: type: BitextMining dataset: name: MTEB Tatoeba (tur-eng) type: mteb/tatoeba-bitext-mining config: tur-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.2 - type: f1 value: 96.26666666666667 - type: precision value: 95.8 - type: recall value: 97.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (deu-eng) type: mteb/tatoeba-bitext-mining config: deu-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 99.3 - type: f1 value: 99.06666666666666 - type: precision value: 98.95 - type: recall value: 99.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (nld-eng) type: mteb/tatoeba-bitext-mining config: nld-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.39999999999999 - type: f1 value: 96.63333333333333 - type: precision value: 96.26666666666668 - type: recall value: 97.39999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (ron-eng) type: mteb/tatoeba-bitext-mining config: ron-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96 - type: f1 value: 94.86666666666666 - type: precision value: 94.31666666666668 - type: recall value: 96 - task: type: BitextMining dataset: name: MTEB Tatoeba (ang-eng) type: mteb/tatoeba-bitext-mining config: ang-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 47.01492537313433 - type: f1 value: 40.178867566927266 - type: precision value: 38.179295828549556 - type: recall value: 47.01492537313433 - task: type: BitextMining dataset: name: MTEB Tatoeba (ido-eng) type: mteb/tatoeba-bitext-mining config: ido-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 86.5 - type: f1 value: 83.62537480063796 - type: precision value: 82.44555555555554 - type: recall value: 86.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (jav-eng) type: mteb/tatoeba-bitext-mining config: jav-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 80.48780487804879 - type: f1 value: 75.45644599303138 - type: precision value: 73.37398373983739 - type: recall value: 80.48780487804879 - task: type: BitextMining dataset: name: MTEB Tatoeba (isl-eng) type: mteb/tatoeba-bitext-mining config: isl-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.7 - type: f1 value: 91.95666666666666 - type: precision value: 91.125 - type: recall value: 93.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (slv-eng) type: mteb/tatoeba-bitext-mining config: slv-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 91.73754556500607 - type: f1 value: 89.65168084244632 - type: precision value: 88.73025516403402 - type: recall value: 91.73754556500607 - task: type: BitextMining dataset: name: MTEB Tatoeba (cym-eng) type: mteb/tatoeba-bitext-mining config: cym-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 81.04347826086956 - type: f1 value: 76.2128364389234 - type: precision value: 74.2 - type: recall value: 81.04347826086956 - task: type: BitextMining dataset: name: MTEB Tatoeba (kaz-eng) type: mteb/tatoeba-bitext-mining config: kaz-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 83.65217391304348 - type: f1 value: 79.4376811594203 - type: precision value: 77.65797101449274 - type: recall value: 83.65217391304348 - task: type: BitextMining dataset: name: MTEB Tatoeba (est-eng) type: mteb/tatoeba-bitext-mining config: est-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 87.5 - type: f1 value: 85.02690476190476 - type: precision value: 83.96261904761904 - type: recall value: 87.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (heb-eng) type: mteb/tatoeba-bitext-mining config: heb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 89.3 - type: f1 value: 86.52333333333333 - type: precision value: 85.22833333333332 - type: recall value: 89.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (gla-eng) type: mteb/tatoeba-bitext-mining config: gla-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 65.01809408926418 - type: f1 value: 59.00594446432805 - type: precision value: 56.827215807915444 - type: recall value: 65.01809408926418 - task: type: BitextMining dataset: name: MTEB Tatoeba (mar-eng) type: mteb/tatoeba-bitext-mining config: mar-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 91.2 - type: f1 value: 88.58 - type: precision value: 87.33333333333334 - type: recall value: 91.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (lat-eng) type: mteb/tatoeba-bitext-mining config: lat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 59.199999999999996 - type: f1 value: 53.299166276284915 - type: precision value: 51.3383908045977 - type: recall value: 59.199999999999996 - task: type: BitextMining dataset: name: MTEB Tatoeba (bel-eng) type: mteb/tatoeba-bitext-mining config: bel-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.2 - type: f1 value: 91.2 - type: precision value: 90.25 - type: recall value: 93.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (pms-eng) type: mteb/tatoeba-bitext-mining config: pms-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 64.76190476190476 - type: f1 value: 59.867110667110666 - type: precision value: 58.07390192653351 - type: recall value: 64.76190476190476 - task: type: BitextMining dataset: name: MTEB Tatoeba (gle-eng) type: mteb/tatoeba-bitext-mining config: gle-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 76.2 - type: f1 value: 71.48147546897547 - type: precision value: 69.65409090909091 - type: recall value: 76.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (pes-eng) type: mteb/tatoeba-bitext-mining config: pes-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.8 - type: f1 value: 92.14 - type: precision value: 91.35833333333333 - type: recall value: 93.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (nob-eng) type: mteb/tatoeba-bitext-mining config: nob-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.89999999999999 - type: f1 value: 97.2 - type: precision value: 96.85000000000001 - type: recall value: 97.89999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (bul-eng) type: mteb/tatoeba-bitext-mining config: bul-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.6 - type: f1 value: 92.93333333333334 - type: precision value: 92.13333333333333 - type: recall value: 94.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (cbk-eng) type: mteb/tatoeba-bitext-mining config: cbk-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 74.1 - type: f1 value: 69.14817460317461 - type: precision value: 67.2515873015873 - type: recall value: 74.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (hun-eng) type: mteb/tatoeba-bitext-mining config: hun-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.19999999999999 - type: f1 value: 94.01333333333335 - type: precision value: 93.46666666666667 - type: recall value: 95.19999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (uig-eng) type: mteb/tatoeba-bitext-mining config: uig-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 76.9 - type: f1 value: 72.07523809523809 - type: precision value: 70.19777777777779 - type: recall value: 76.9 - task: type: BitextMining dataset: name: MTEB Tatoeba (rus-eng) type: mteb/tatoeba-bitext-mining config: rus-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.1 - type: f1 value: 92.31666666666666 - type: precision value: 91.43333333333332 - type: recall value: 94.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (spa-eng) type: mteb/tatoeba-bitext-mining config: spa-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.8 - type: f1 value: 97.1 - type: precision value: 96.76666666666668 - type: recall value: 97.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (hye-eng) type: mteb/tatoeba-bitext-mining config: hye-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.85714285714286 - type: f1 value: 90.92093441150045 - type: precision value: 90.00449236298293 - type: recall value: 92.85714285714286 - task: type: BitextMining dataset: name: MTEB Tatoeba (tel-eng) type: mteb/tatoeba-bitext-mining config: tel-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.16239316239316 - type: f1 value: 91.33903133903132 - type: precision value: 90.56267806267806 - type: recall value: 93.16239316239316 - task: type: BitextMining dataset: name: MTEB Tatoeba (afr-eng) type: mteb/tatoeba-bitext-mining config: afr-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.4 - type: f1 value: 90.25666666666666 - type: precision value: 89.25833333333334 - type: recall value: 92.4 - task: type: BitextMining dataset: name: MTEB Tatoeba (mon-eng) type: mteb/tatoeba-bitext-mining config: mon-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 90.22727272727272 - type: f1 value: 87.53030303030303 - type: precision value: 86.37121212121211 - type: recall value: 90.22727272727272 - task: type: BitextMining dataset: name: MTEB Tatoeba (arz-eng) type: mteb/tatoeba-bitext-mining config: arz-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 79.03563941299791 - type: f1 value: 74.7349505840072 - type: precision value: 72.9035639412998 - type: recall value: 79.03563941299791 - task: type: BitextMining dataset: name: MTEB Tatoeba (hrv-eng) type: mteb/tatoeba-bitext-mining config: hrv-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97 - type: f1 value: 96.15 - type: precision value: 95.76666666666668 - type: recall value: 97 - task: type: BitextMining dataset: name: MTEB Tatoeba (nov-eng) type: mteb/tatoeba-bitext-mining config: nov-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 76.26459143968872 - type: f1 value: 71.55642023346303 - type: precision value: 69.7544932369835 - type: recall value: 76.26459143968872 - task: type: BitextMining dataset: name: MTEB Tatoeba (gsw-eng) type: mteb/tatoeba-bitext-mining config: gsw-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 58.119658119658126 - type: f1 value: 51.65242165242165 - type: precision value: 49.41768108434775 - type: recall value: 58.119658119658126 - task: type: BitextMining dataset: name: MTEB Tatoeba (nds-eng) type: mteb/tatoeba-bitext-mining config: nds-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 74.3 - type: f1 value: 69.52055555555555 - type: precision value: 67.7574938949939 - type: recall value: 74.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (ukr-eng) type: mteb/tatoeba-bitext-mining config: ukr-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.8 - type: f1 value: 93.31666666666666 - type: precision value: 92.60000000000001 - type: recall value: 94.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (uzb-eng) type: mteb/tatoeba-bitext-mining config: uzb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 76.63551401869158 - type: f1 value: 72.35202492211837 - type: precision value: 70.60358255451713 - type: recall value: 76.63551401869158 - task: type: BitextMining dataset: name: MTEB Tatoeba (lit-eng) type: mteb/tatoeba-bitext-mining config: lit-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 90.4 - type: f1 value: 88.4811111111111 - type: precision value: 87.7452380952381 - type: recall value: 90.4 - task: type: BitextMining dataset: name: MTEB Tatoeba (ina-eng) type: mteb/tatoeba-bitext-mining config: ina-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95 - type: f1 value: 93.60666666666667 - type: precision value: 92.975 - type: recall value: 95 - task: type: BitextMining dataset: name: MTEB Tatoeba (lfn-eng) type: mteb/tatoeba-bitext-mining config: lfn-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 67.2 - type: f1 value: 63.01595782872099 - type: precision value: 61.596587301587306 - type: recall value: 67.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (zsm-eng) type: mteb/tatoeba-bitext-mining config: zsm-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.7 - type: f1 value: 94.52999999999999 - type: precision value: 94 - type: recall value: 95.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (ita-eng) type: mteb/tatoeba-bitext-mining config: ita-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.6 - type: f1 value: 93.28999999999999 - type: precision value: 92.675 - type: recall value: 94.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (cmn-eng) type: mteb/tatoeba-bitext-mining config: cmn-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.39999999999999 - type: f1 value: 95.28333333333333 - type: precision value: 94.75 - type: recall value: 96.39999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (lvs-eng) type: mteb/tatoeba-bitext-mining config: lvs-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 91.9 - type: f1 value: 89.83 - type: precision value: 88.92 - type: recall value: 91.9 - task: type: BitextMining dataset: name: MTEB Tatoeba (glg-eng) type: mteb/tatoeba-bitext-mining config: glg-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.69999999999999 - type: f1 value: 93.34222222222223 - type: precision value: 92.75416666666668 - type: recall value: 94.69999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (ceb-eng) type: mteb/tatoeba-bitext-mining config: ceb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 60.333333333333336 - type: f1 value: 55.31203703703703 - type: precision value: 53.39971108326371 - type: recall value: 60.333333333333336 - task: type: BitextMining dataset: name: MTEB Tatoeba (bre-eng) type: mteb/tatoeba-bitext-mining config: bre-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 12.9 - type: f1 value: 11.099861903031458 - type: precision value: 10.589187932631877 - type: recall value: 12.9 - task: type: BitextMining dataset: name: MTEB Tatoeba (ben-eng) type: mteb/tatoeba-bitext-mining config: ben-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 86.7 - type: f1 value: 83.0152380952381 - type: precision value: 81.37833333333333 - type: recall value: 86.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (swg-eng) type: mteb/tatoeba-bitext-mining config: swg-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 63.39285714285714 - type: f1 value: 56.832482993197274 - type: precision value: 54.56845238095237 - type: recall value: 63.39285714285714 - task: type: BitextMining dataset: name: MTEB Tatoeba (arq-eng) type: mteb/tatoeba-bitext-mining config: arq-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 48.73765093304062 - type: f1 value: 41.555736920720456 - type: precision value: 39.06874531737319 - type: recall value: 48.73765093304062 - task: type: BitextMining dataset: name: MTEB Tatoeba (kab-eng) type: mteb/tatoeba-bitext-mining config: kab-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 41.099999999999994 - type: f1 value: 36.540165945165946 - type: precision value: 35.05175685425686 - type: recall value: 41.099999999999994 - task: type: BitextMining dataset: name: MTEB Tatoeba (fra-eng) type: mteb/tatoeba-bitext-mining config: fra-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.89999999999999 - type: f1 value: 93.42333333333333 - type: precision value: 92.75833333333333 - type: recall value: 94.89999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (por-eng) type: mteb/tatoeba-bitext-mining config: por-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.89999999999999 - type: f1 value: 93.63333333333334 - type: precision value: 93.01666666666665 - type: recall value: 94.89999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (tat-eng) type: mteb/tatoeba-bitext-mining config: tat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 77.9 - type: f1 value: 73.64833333333334 - type: precision value: 71.90282106782105 - type: recall value: 77.9 - task: type: BitextMining dataset: name: MTEB Tatoeba (oci-eng) type: mteb/tatoeba-bitext-mining config: oci-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 59.4 - type: f1 value: 54.90521367521367 - type: precision value: 53.432840025471606 - type: recall value: 59.4 - task: type: BitextMining dataset: name: MTEB Tatoeba (pol-eng) type: mteb/tatoeba-bitext-mining config: pol-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.39999999999999 - type: f1 value: 96.6 - type: precision value: 96.2 - type: recall value: 97.39999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (war-eng) type: mteb/tatoeba-bitext-mining config: war-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 67.2 - type: f1 value: 62.25926129426129 - type: precision value: 60.408376623376626 - type: recall value: 67.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (aze-eng) type: mteb/tatoeba-bitext-mining config: aze-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 90.2 - type: f1 value: 87.60666666666667 - type: precision value: 86.45277777777778 - type: recall value: 90.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (vie-eng) type: mteb/tatoeba-bitext-mining config: vie-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.7 - type: f1 value: 97 - type: precision value: 96.65 - type: recall value: 97.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (nno-eng) type: mteb/tatoeba-bitext-mining config: nno-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.2 - type: f1 value: 91.39746031746031 - type: precision value: 90.6125 - type: recall value: 93.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (cha-eng) type: mteb/tatoeba-bitext-mining config: cha-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 32.11678832116788 - type: f1 value: 27.210415386260234 - type: precision value: 26.20408990846947 - type: recall value: 32.11678832116788 - task: type: BitextMining dataset: name: MTEB Tatoeba (mhr-eng) type: mteb/tatoeba-bitext-mining config: mhr-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 8.5 - type: f1 value: 6.787319277832475 - type: precision value: 6.3452094433344435 - type: recall value: 8.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (dan-eng) type: mteb/tatoeba-bitext-mining config: dan-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.1 - type: f1 value: 95.08 - type: precision value: 94.61666666666667 - type: recall value: 96.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (ell-eng) type: mteb/tatoeba-bitext-mining config: ell-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.3 - type: f1 value: 93.88333333333333 - type: precision value: 93.18333333333332 - type: recall value: 95.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (amh-eng) type: mteb/tatoeba-bitext-mining config: amh-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 85.11904761904762 - type: f1 value: 80.69444444444444 - type: precision value: 78.72023809523809 - type: recall value: 85.11904761904762 - task: type: BitextMining dataset: name: MTEB Tatoeba (pam-eng) type: mteb/tatoeba-bitext-mining config: pam-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 11.1 - type: f1 value: 9.276381801735853 - type: precision value: 8.798174603174601 - type: recall value: 11.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (hsb-eng) type: mteb/tatoeba-bitext-mining config: hsb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 63.56107660455487 - type: f1 value: 58.70433569191332 - type: precision value: 56.896926581464015 - type: recall value: 63.56107660455487 - task: type: BitextMining dataset: name: MTEB Tatoeba (srp-eng) type: mteb/tatoeba-bitext-mining config: srp-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.69999999999999 - type: f1 value: 93.10000000000001 - type: precision value: 92.35 - type: recall value: 94.69999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (epo-eng) type: mteb/tatoeba-bitext-mining config: epo-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.8 - type: f1 value: 96.01222222222222 - type: precision value: 95.67083333333332 - type: recall value: 96.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (kzj-eng) type: mteb/tatoeba-bitext-mining config: kzj-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 9.2 - type: f1 value: 7.911555250305249 - type: precision value: 7.631246556216846 - type: recall value: 9.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (awa-eng) type: mteb/tatoeba-bitext-mining config: awa-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 77.48917748917748 - type: f1 value: 72.27375798804371 - type: precision value: 70.14430014430013 - type: recall value: 77.48917748917748 - task: type: BitextMining dataset: name: MTEB Tatoeba (fao-eng) type: mteb/tatoeba-bitext-mining config: fao-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 77.09923664122137 - type: f1 value: 72.61541257724463 - type: precision value: 70.8998380754106 - type: recall value: 77.09923664122137 - task: type: BitextMining dataset: name: MTEB Tatoeba (mal-eng) type: mteb/tatoeba-bitext-mining config: mal-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 98.2532751091703 - type: f1 value: 97.69529354682193 - type: precision value: 97.42843279961184 - type: recall value: 98.2532751091703 - task: type: BitextMining dataset: name: MTEB Tatoeba (ile-eng) type: mteb/tatoeba-bitext-mining config: ile-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 82.8 - type: f1 value: 79.14672619047619 - type: precision value: 77.59489247311828 - type: recall value: 82.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (bos-eng) type: mteb/tatoeba-bitext-mining config: bos-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.35028248587571 - type: f1 value: 92.86252354048965 - type: precision value: 92.2080979284369 - type: recall value: 94.35028248587571 - task: type: BitextMining dataset: name: MTEB Tatoeba (cor-eng) type: mteb/tatoeba-bitext-mining config: cor-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 8.5 - type: f1 value: 6.282429263935621 - type: precision value: 5.783274240739785 - type: recall value: 8.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (cat-eng) type: mteb/tatoeba-bitext-mining config: cat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.7 - type: f1 value: 91.025 - type: precision value: 90.30428571428571 - type: recall value: 92.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (eus-eng) type: mteb/tatoeba-bitext-mining config: eus-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 81 - type: f1 value: 77.8232380952381 - type: precision value: 76.60194444444444 - type: recall value: 81 - task: type: BitextMining dataset: name: MTEB Tatoeba (yue-eng) type: mteb/tatoeba-bitext-mining config: yue-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 91 - type: f1 value: 88.70857142857142 - type: precision value: 87.7 - type: recall value: 91 - task: type: BitextMining dataset: name: MTEB Tatoeba (swe-eng) type: mteb/tatoeba-bitext-mining config: swe-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.39999999999999 - type: f1 value: 95.3 - type: precision value: 94.76666666666667 - type: recall value: 96.39999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (dtp-eng) type: mteb/tatoeba-bitext-mining config: dtp-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 8.1 - type: f1 value: 7.001008218834307 - type: precision value: 6.708329562594269 - type: recall value: 8.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (kat-eng) type: mteb/tatoeba-bitext-mining config: kat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 87.1313672922252 - type: f1 value: 84.09070598748882 - type: precision value: 82.79171454104429 - type: recall value: 87.1313672922252 - task: type: BitextMining dataset: name: MTEB Tatoeba (jpn-eng) type: mteb/tatoeba-bitext-mining config: jpn-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.39999999999999 - type: f1 value: 95.28333333333333 - type: precision value: 94.73333333333332 - type: recall value: 96.39999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (csb-eng) type: mteb/tatoeba-bitext-mining config: csb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 42.29249011857708 - type: f1 value: 36.981018542283365 - type: precision value: 35.415877813576024 - type: recall value: 42.29249011857708 - task: type: BitextMining dataset: name: MTEB Tatoeba (xho-eng) type: mteb/tatoeba-bitext-mining config: xho-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 83.80281690140845 - type: f1 value: 80.86854460093896 - type: precision value: 79.60093896713614 - type: recall value: 83.80281690140845 - task: type: BitextMining dataset: name: MTEB Tatoeba (orv-eng) type: mteb/tatoeba-bitext-mining config: orv-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 45.26946107784431 - type: f1 value: 39.80235464678088 - type: precision value: 38.14342660001342 - type: recall value: 45.26946107784431 - task: type: BitextMining dataset: name: MTEB Tatoeba (ind-eng) type: mteb/tatoeba-bitext-mining config: ind-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.3 - type: f1 value: 92.9 - type: precision value: 92.26666666666668 - type: recall value: 94.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (tuk-eng) type: mteb/tatoeba-bitext-mining config: tuk-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 37.93103448275862 - type: f1 value: 33.15192743764172 - type: precision value: 31.57456528146183 - type: recall value: 37.93103448275862 - task: type: BitextMining dataset: name: MTEB Tatoeba (max-eng) type: mteb/tatoeba-bitext-mining config: max-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 69.01408450704226 - type: f1 value: 63.41549295774648 - type: precision value: 61.342778895595806 - type: recall value: 69.01408450704226 - task: type: BitextMining dataset: name: MTEB Tatoeba (swh-eng) type: mteb/tatoeba-bitext-mining config: swh-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 76.66666666666667 - type: f1 value: 71.60705960705961 - type: precision value: 69.60683760683762 - type: recall value: 76.66666666666667 - task: type: BitextMining dataset: name: MTEB Tatoeba (hin-eng) type: mteb/tatoeba-bitext-mining config: hin-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.8 - type: f1 value: 94.48333333333333 - type: precision value: 93.83333333333333 - type: recall value: 95.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (dsb-eng) type: mteb/tatoeba-bitext-mining config: dsb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 52.81837160751566 - type: f1 value: 48.435977731384824 - type: precision value: 47.11291973845539 - type: recall value: 52.81837160751566 - task: type: BitextMining dataset: name: MTEB Tatoeba (ber-eng) type: mteb/tatoeba-bitext-mining config: ber-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 44.9 - type: f1 value: 38.88962621607783 - type: precision value: 36.95936507936508 - type: recall value: 44.9 - task: type: BitextMining dataset: name: MTEB Tatoeba (tam-eng) type: mteb/tatoeba-bitext-mining config: tam-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 90.55374592833876 - type: f1 value: 88.22553125484721 - type: precision value: 87.26927252985884 - type: recall value: 90.55374592833876 - task: type: BitextMining dataset: name: MTEB Tatoeba (slk-eng) type: mteb/tatoeba-bitext-mining config: slk-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.6 - type: f1 value: 93.13333333333333 - type: precision value: 92.45333333333333 - type: recall value: 94.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (tgl-eng) type: mteb/tatoeba-bitext-mining config: tgl-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.7 - type: f1 value: 91.99666666666667 - type: precision value: 91.26666666666668 - type: recall value: 93.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (ast-eng) type: mteb/tatoeba-bitext-mining config: ast-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 85.03937007874016 - type: f1 value: 81.75853018372703 - type: precision value: 80.34120734908137 - type: recall value: 85.03937007874016 - task: type: BitextMining dataset: name: MTEB Tatoeba (mkd-eng) type: mteb/tatoeba-bitext-mining config: mkd-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 88.3 - type: f1 value: 85.5 - type: precision value: 84.25833333333334 - type: recall value: 88.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (khm-eng) type: mteb/tatoeba-bitext-mining config: khm-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 65.51246537396122 - type: f1 value: 60.02297410192148 - type: precision value: 58.133467727289236 - type: recall value: 65.51246537396122 - task: type: BitextMining dataset: name: MTEB Tatoeba (ces-eng) type: mteb/tatoeba-bitext-mining config: ces-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96 - type: f1 value: 94.89 - type: precision value: 94.39166666666667 - type: recall value: 96 - task: type: BitextMining dataset: name: MTEB Tatoeba (tzl-eng) type: mteb/tatoeba-bitext-mining config: tzl-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 57.692307692307686 - type: f1 value: 53.162393162393165 - type: precision value: 51.70673076923077 - type: recall value: 57.692307692307686 - task: type: BitextMining dataset: name: MTEB Tatoeba (urd-eng) type: mteb/tatoeba-bitext-mining config: urd-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 91.60000000000001 - type: f1 value: 89.21190476190475 - type: precision value: 88.08666666666667 - type: recall value: 91.60000000000001 - task: type: BitextMining dataset: name: MTEB Tatoeba (ara-eng) type: mteb/tatoeba-bitext-mining config: ara-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 88 - type: f1 value: 85.47 - type: precision value: 84.43266233766234 - type: recall value: 88 - task: type: BitextMining dataset: name: MTEB Tatoeba (kor-eng) type: mteb/tatoeba-bitext-mining config: kor-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.7 - type: f1 value: 90.64999999999999 - type: precision value: 89.68333333333332 - type: recall value: 92.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (yid-eng) type: mteb/tatoeba-bitext-mining config: yid-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 80.30660377358491 - type: f1 value: 76.33044137466307 - type: precision value: 74.78970125786164 - type: recall value: 80.30660377358491 - task: type: BitextMining dataset: name: MTEB Tatoeba (fin-eng) type: mteb/tatoeba-bitext-mining config: fin-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.39999999999999 - type: f1 value: 95.44 - type: precision value: 94.99166666666666 - type: recall value: 96.39999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (tha-eng) type: mteb/tatoeba-bitext-mining config: tha-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.53284671532847 - type: f1 value: 95.37712895377129 - type: precision value: 94.7992700729927 - type: recall value: 96.53284671532847 - task: type: BitextMining dataset: name: MTEB Tatoeba (wuu-eng) type: mteb/tatoeba-bitext-mining config: wuu-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 89 - type: f1 value: 86.23190476190476 - type: precision value: 85.035 - type: recall value: 89 - task: type: Retrieval dataset: name: MTEB Touche2020 type: webis-touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 2.585 - type: map_at_10 value: 9.012 - type: map_at_100 value: 14.027000000000001 - type: map_at_1000 value: 15.565000000000001 - type: map_at_3 value: 5.032 - type: map_at_5 value: 6.657 - type: mrr_at_1 value: 28.571 - type: mrr_at_10 value: 45.377 - type: mrr_at_100 value: 46.119 - type: mrr_at_1000 value: 46.127 - type: mrr_at_3 value: 41.156 - type: mrr_at_5 value: 42.585 - type: ndcg_at_1 value: 27.551 - type: ndcg_at_10 value: 23.395 - type: ndcg_at_100 value: 33.342 - type: ndcg_at_1000 value: 45.523 - type: ndcg_at_3 value: 25.158 - type: ndcg_at_5 value: 23.427 - type: precision_at_1 value: 28.571 - type: precision_at_10 value: 21.429000000000002 - type: precision_at_100 value: 6.714 - type: precision_at_1000 value: 1.473 - type: precision_at_3 value: 27.211000000000002 - type: precision_at_5 value: 24.490000000000002 - type: recall_at_1 value: 2.585 - type: recall_at_10 value: 15.418999999999999 - type: recall_at_100 value: 42.485 - type: recall_at_1000 value: 79.536 - type: recall_at_3 value: 6.239999999999999 - type: recall_at_5 value: 8.996 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 71.3234 - type: ap value: 14.361688653847423 - type: f1 value: 54.819068624319044 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 61.97792869269949 - type: f1 value: 62.28965628513728 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 38.90540145385218 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 86.53513739047506 - type: cos_sim_ap value: 75.27741586677557 - type: cos_sim_f1 value: 69.18792902473774 - type: cos_sim_precision value: 67.94708725515136 - type: cos_sim_recall value: 70.47493403693932 - type: dot_accuracy value: 84.7052512368123 - type: dot_ap value: 69.36075482849378 - type: dot_f1 value: 64.44688376631296 - type: dot_precision value: 59.92288500793831 - type: dot_recall value: 69.70976253298153 - type: euclidean_accuracy value: 86.60666388508076 - type: euclidean_ap value: 75.47512772621097 - type: euclidean_f1 value: 69.413872536473 - type: euclidean_precision value: 67.39562624254472 - type: euclidean_recall value: 71.55672823218997 - type: manhattan_accuracy value: 86.52917684925792 - type: manhattan_ap value: 75.34000110496703 - type: manhattan_f1 value: 69.28489190226429 - type: manhattan_precision value: 67.24608889992551 - type: manhattan_recall value: 71.45118733509234 - type: max_accuracy value: 86.60666388508076 - type: max_ap value: 75.47512772621097 - type: max_f1 value: 69.413872536473 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 89.01695967710637 - type: cos_sim_ap value: 85.8298270742901 - type: cos_sim_f1 value: 78.46988128389272 - type: cos_sim_precision value: 74.86017897091722 - type: cos_sim_recall value: 82.44533415460425 - type: dot_accuracy value: 88.19420188613343 - type: dot_ap value: 83.82679165901324 - type: dot_f1 value: 76.55833777304208 - type: dot_precision value: 75.6884875846501 - type: dot_recall value: 77.44841392054204 - type: euclidean_accuracy value: 89.03054294252338 - type: euclidean_ap value: 85.89089555185325 - type: euclidean_f1 value: 78.62997658079624 - type: euclidean_precision value: 74.92329149232914 - type: euclidean_recall value: 82.72251308900523 - type: manhattan_accuracy value: 89.0266620095471 - type: manhattan_ap value: 85.86458997929147 - type: manhattan_f1 value: 78.50685331000291 - type: manhattan_precision value: 74.5499861534201 - type: manhattan_recall value: 82.90729904527257 - type: max_accuracy value: 89.03054294252338 - type: max_ap value: 85.89089555185325 - type: max_f1 value: 78.62997658079624 --- ## Multilingual-E5-large [Text Embeddings by Weakly-Supervised Contrastive Pre-training](https://arxiv.org/pdf/2212.03533.pdf). Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei, arXiv 2022 This model has 24 layers and the embedding size is 1024. ## Usage Below is an example to encode queries and passages from the MS-MARCO passage ranking dataset. ```python import torch.nn.functional as F from torch import Tensor from transformers import AutoTokenizer, AutoModel def average_pool(last_hidden_states: Tensor, attention_mask: Tensor) -> Tensor: last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0) return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None] # Each input text should start with "query: " or "passage: ", even for non-English texts. # For tasks other than retrieval, you can simply use the "query: " prefix. input_texts = ['query: how much protein should a female eat', 'query: 南瓜的家常做法', "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.", "passage: 1.清炒南瓜丝 原料:嫩南瓜半个 调料:葱、盐、白糖、鸡精 做法: 1、南瓜用刀薄薄的削去表面一层皮,用勺子刮去瓤 2、擦成细丝(没有擦菜板就用刀慢慢切成细丝) 3、锅烧热放油,入葱花煸出香味 4、入南瓜丝快速翻炒一分钟左右,放盐、一点白糖和鸡精调味出锅 2.香葱炒南瓜 原料:南瓜1只 调料:香葱、蒜末、橄榄油、盐 做法: 1、将南瓜去皮,切成片 2、油锅8成热后,将蒜末放入爆香 3、爆香后,将南瓜片放入,翻炒 4、在翻炒的同时,可以不时地往锅里加水,但不要太多 5、放入盐,炒匀 6、南瓜差不多软和绵了之后,就可以关火 7、撒入香葱,即可出锅"] tokenizer = AutoTokenizer.from_pretrained('intfloat/multilingual-e5-large') model = AutoModel.from_pretrained('intfloat/multilingual-e5-large') # Tokenize the input texts batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt') outputs = model(**batch_dict) embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask']) # normalize embeddings embeddings = F.normalize(embeddings, p=2, dim=1) scores = (embeddings[:2] @ embeddings[2:].T) * 100 print(scores.tolist()) ``` ## Supported Languages This model is initialized from [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) and continually trained on a mixture of multilingual datasets. It supports 100 languages from xlm-roberta, but low-resource languages may see performance degradation. ## Training Details **Initialization**: [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) **First stage**: contrastive pre-training with weak supervision | Dataset | Weak supervision | # of text pairs | |--------------------------------------------------------------------------------------------------------|---------------------------------------|-----------------| | Filtered [mC4](https://huggingface.co/datasets/mc4) | (title, page content) | 1B | | [CC News](https://huggingface.co/datasets/intfloat/multilingual_cc_news) | (title, news content) | 400M | | [NLLB](https://huggingface.co/datasets/allenai/nllb) | translation pairs | 2.4B | | [Wikipedia](https://huggingface.co/datasets/intfloat/wikipedia) | (hierarchical section title, passage) | 150M | | Filtered [Reddit](https://www.reddit.com/) | (comment, response) | 800M | | [S2ORC](https://github.com/allenai/s2orc) | (title, abstract) and citation pairs | 100M | | [Stackexchange](https://stackexchange.com/) | (question, answer) | 50M | | [xP3](https://huggingface.co/datasets/bigscience/xP3) | (input prompt, response) | 80M | | [Miscellaneous unsupervised SBERT data](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) | - | 10M | **Second stage**: supervised fine-tuning | Dataset | Language | # of text pairs | |----------------------------------------------------------------------------------------|--------------|-----------------| | [MS MARCO](https://microsoft.github.io/msmarco/) | English | 500k | | [NQ](https://github.com/facebookresearch/DPR) | English | 70k | | [Trivia QA](https://github.com/facebookresearch/DPR) | English | 60k | | [NLI from SimCSE](https://github.com/princeton-nlp/SimCSE) | English | <300k | | [ELI5](https://huggingface.co/datasets/eli5) | English | 500k | | [DuReader Retrieval](https://github.com/baidu/DuReader/tree/master/DuReader-Retrieval) | Chinese | 86k | | [KILT Fever](https://huggingface.co/datasets/kilt_tasks) | English | 70k | | [KILT HotpotQA](https://huggingface.co/datasets/kilt_tasks) | English | 70k | | [SQuAD](https://huggingface.co/datasets/squad) | English | 87k | | [Quora](https://huggingface.co/datasets/quora) | English | 150k | | [Mr. TyDi](https://huggingface.co/datasets/castorini/mr-tydi) | 11 languages | 50k | | [MIRACL](https://huggingface.co/datasets/miracl/miracl) | 16 languages | 40k | For all labeled datasets, we only use its training set for fine-tuning. For other training details, please refer to our paper at [https://arxiv.org/pdf/2212.03533.pdf](https://arxiv.org/pdf/2212.03533.pdf). ## Benchmark Results on [Mr. TyDi](https://arxiv.org/abs/2108.08787) | Model | Avg MRR@10 | | ar | bn | en | fi | id | ja | ko | ru | sw | te | th | |-----------------------|------------|-------|------| --- | --- | --- | --- | --- | --- | --- |------| --- | --- | | BM25 | 33.3 | | 36.7 | 41.3 | 15.1 | 28.8 | 38.2 | 21.7 | 28.1 | 32.9 | 39.6 | 42.4 | 41.7 | | mDPR | 16.7 | | 26.0 | 25.8 | 16.2 | 11.3 | 14.6 | 18.1 | 21.9 | 18.5 | 7.3 | 10.6 | 13.5 | | BM25 + mDPR | 41.7 | | 49.1 | 53.5 | 28.4 | 36.5 | 45.5 | 35.5 | 36.2 | 42.7 | 40.5 | 42.0 | 49.2 | | | | | multilingual-e5-small | 64.4 | | 71.5 | 66.3 | 54.5 | 57.7 | 63.2 | 55.4 | 54.3 | 60.8 | 65.4 | 89.1 | 70.1 | | multilingual-e5-base | 65.9 | | 72.3 | 65.0 | 58.5 | 60.8 | 64.9 | 56.6 | 55.8 | 62.7 | 69.0 | 86.6 | 72.7 | | multilingual-e5-large | **70.5** | | 77.5 | 73.2 | 60.8 | 66.8 | 68.5 | 62.5 | 61.6 | 65.8 | 72.7 | 90.2 | 76.2 | ## MTEB Benchmark Evaluation Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB benchmark](https://arxiv.org/abs/2210.07316). ## Support for Sentence Transformers Below is an example for usage with sentence_transformers. ```python from sentence_transformers import SentenceTransformer model = SentenceTransformer('intfloat/multilingual-e5-large') input_texts = [ 'query: how much protein should a female eat', 'query: 南瓜的家常做法', "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 i s 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or traini ng for a marathon. Check out the chart below to see how much protein you should be eating each day.", "passage: 1.清炒南瓜丝 原料:嫩南瓜半个 调料:葱、盐、白糖、鸡精 做法: 1、南瓜用刀薄薄的削去表面一层皮 ,用勺子刮去瓤 2、擦成细丝(没有擦菜板就用刀慢慢切成细丝) 3、锅烧热放油,入葱花煸出香味 4、入南瓜丝快速翻炒一分钟左右, 放盐、一点白糖和鸡精调味出锅 2.香葱炒南瓜 原料:南瓜1只 调料:香葱、蒜末、橄榄油、盐 做法: 1、将南瓜去皮,切成片 2、油 锅8成热后,将蒜末放入爆香 3、爆香后,将南瓜片放入,翻炒 4、在翻炒的同时,可以不时地往锅里加水,但不要太多 5、放入盐,炒匀 6、南瓜差不多软和绵了之后,就可以关火 7、撒入香葱,即可出锅" ] embeddings = model.encode(input_texts, normalize_embeddings=True) ``` Package requirements `pip install sentence_transformers~=2.2.2` Contributors: [michaelfeil](https://huggingface.co/michaelfeil) ## FAQ **1. Do I need to add the prefix "query: " and "passage: " to input texts?** Yes, this is how the model is trained, otherwise you will see a performance degradation. Here are some rules of thumb: - Use "query: " and "passage: " correspondingly for asymmetric tasks such as passage retrieval in open QA, ad-hoc information retrieval. - Use "query: " prefix for symmetric tasks such as semantic similarity, bitext mining, paraphrase retrieval. - Use "query: " prefix if you want to use embeddings as features, such as linear probing classification, clustering. **2. Why are my reproduced results slightly different from reported in the model card?** Different versions of `transformers` and `pytorch` could cause negligible but non-zero performance differences. **3. Why does the cosine similarity scores distribute around 0.7 to 1.0?** This is a known and expected behavior as we use a low temperature 0.01 for InfoNCE contrastive loss. For text embedding tasks like text retrieval or semantic similarity, what matters is the relative order of the scores instead of the absolute values, so this should not be an issue. ## Citation If you find our paper or models helpful, please consider cite as follows: ``` @article{wang2022text, title={Text Embeddings by Weakly-Supervised Contrastive Pre-training}, author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Jiao, Binxing and Yang, Linjun and Jiang, Daxin and Majumder, Rangan and Wei, Furu}, journal={arXiv preprint arXiv:2212.03533}, year={2022} } ``` ## Limitations Long texts will be truncated to at most 512 tokens.
[ "BIOSSES", "SCIFACT" ]
Atgenomix/icd_o_sentence_transformer_128_dim_model
Atgenomix
sentence-similarity
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2023-11-22T03:24:12Z
2023-11-22T03:24:33+00:00
14
0
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # pritamdeka/BioBERT-mnli-snli-scinli-scitail-mednli-stsb This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. It has been trained over the SNLI, MNLI, SCINLI, SCITAIL, MEDNLI and STSB datasets for providing robust sentence embeddings. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('pritamdeka/BioBERT-mnli-snli-scinli-scitail-mednli-stsb') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('pritamdeka/BioBERT-mnli-snli-scinli-scitail-mednli-stsb') model = AutoModel.from_pretrained('pritamdeka/BioBERT-mnli-snli-scinli-scitail-mednli-stsb') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 90 with parameters: ``` {'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 4, "evaluation_steps": 1000, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 36, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 100, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information --> If you use the model kindly cite the following work ``` @inproceedings{deka2022evidence, title={Evidence Extraction to Validate Medical Claims in Fake News Detection}, author={Deka, Pritam and Jurek-Loughrey, Anna and others}, booktitle={International Conference on Health Information Science}, pages={3--15}, year={2022}, organization={Springer} } ```
[ "MEDNLI", "SCITAIL" ]
ntc-ai/SDXL-LoRA-slider.clown
ntc-ai
text-to-image
[ "diffusers", "text-to-image", "stable-diffusion-xl", "lora", "template:sd-lora", "template:sdxl-lora", "sdxl-sliders", "ntcai.xyz-sliders", "concept", "en", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:mit", "region:us" ]
2023-12-12T01:48:44Z
2024-02-06T00:30:54+00:00
14
0
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 language: - en license: mit tags: - text-to-image - stable-diffusion-xl - lora - template:sd-lora - template:sdxl-lora - sdxl-sliders - ntcai.xyz-sliders - concept - diffusers thumbnail: images/clown_17_3.0.png widget: - text: clown output: url: images/clown_17_3.0.png - text: clown output: url: images/clown_19_3.0.png - text: clown output: url: images/clown_20_3.0.png - text: clown output: url: images/clown_21_3.0.png - text: clown output: url: images/clown_22_3.0.png inference: false instance_prompt: clown --- # ntcai.xyz slider - clown (SDXL LoRA) | Strength: -3 | Strength: 0 | Strength: 3 | | --- | --- | --- | | <img src="images/clown_17_-3.0.png" width=256 height=256 /> | <img src="images/clown_17_0.0.png" width=256 height=256 /> | <img src="images/clown_17_3.0.png" width=256 height=256 /> | | <img src="images/clown_19_-3.0.png" width=256 height=256 /> | <img src="images/clown_19_0.0.png" width=256 height=256 /> | <img src="images/clown_19_3.0.png" width=256 height=256 /> | | <img src="images/clown_20_-3.0.png" width=256 height=256 /> | <img src="images/clown_20_0.0.png" width=256 height=256 /> | <img src="images/clown_20_3.0.png" width=256 height=256 /> | See more at [https://sliders.ntcai.xyz/sliders/app/loras/1ae58d55-c377-4923-a71a-de934dedd16b](https://sliders.ntcai.xyz/sliders/app/loras/1ae58d55-c377-4923-a71a-de934dedd16b) ## Download Weights for this model are available in Safetensors format. ## Trigger words You can apply this LoRA with trigger words for additional effect: ``` clown ``` ## Use in diffusers ```python from diffusers import StableDiffusionXLPipeline from diffusers import EulerAncestralDiscreteScheduler import torch pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors") pipe.to("cuda") pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config) # Load the LoRA pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.clown', weight_name='clown.safetensors', adapter_name="clown") # Activate the LoRA pipe.set_adapters(["clown"], adapter_weights=[2.0]) prompt = "medieval rich kingpin sitting in a tavern, clown" negative_prompt = "nsfw" width = 512 height = 512 num_inference_steps = 10 guidance_scale = 2 image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0] image.save('result.png') ``` ## Support the Patreon If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI). By joining our Patreon, you'll gain access to an ever-growing library of over 1496+ unique and diverse LoRAs along with 14602+ slider merges, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful <strong>NTC Slider Factory</strong> LoRA creator, allowing you to craft your own custom LoRAs and merges opening up endless possibilities. Your support on Patreon will allow us to continue developing new models and tools. ## Other resources - [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs - [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
[ "CRAFT" ]
ntc-ai/SDXL-LoRA-slider.fit
ntc-ai
text-to-image
[ "diffusers", "text-to-image", "stable-diffusion-xl", "lora", "template:sd-lora", "template:sdxl-lora", "sdxl-sliders", "ntcai.xyz-sliders", "concept", "en", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:mit", "region:us" ]
2023-12-12T03:49:08Z
2024-02-06T00:31:02+00:00
14
1
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 language: - en license: mit tags: - text-to-image - stable-diffusion-xl - lora - template:sd-lora - template:sdxl-lora - sdxl-sliders - ntcai.xyz-sliders - concept - diffusers thumbnail: images/fit_17_3.0.png widget: - text: fit output: url: images/fit_17_3.0.png - text: fit output: url: images/fit_19_3.0.png - text: fit output: url: images/fit_20_3.0.png - text: fit output: url: images/fit_21_3.0.png - text: fit output: url: images/fit_22_3.0.png inference: false instance_prompt: fit --- # ntcai.xyz slider - fit (SDXL LoRA) | Strength: -3 | Strength: 0 | Strength: 3 | | --- | --- | --- | | <img src="images/fit_17_-3.0.png" width=256 height=256 /> | <img src="images/fit_17_0.0.png" width=256 height=256 /> | <img src="images/fit_17_3.0.png" width=256 height=256 /> | | <img src="images/fit_19_-3.0.png" width=256 height=256 /> | <img src="images/fit_19_0.0.png" width=256 height=256 /> | <img src="images/fit_19_3.0.png" width=256 height=256 /> | | <img src="images/fit_20_-3.0.png" width=256 height=256 /> | <img src="images/fit_20_0.0.png" width=256 height=256 /> | <img src="images/fit_20_3.0.png" width=256 height=256 /> | See more at [https://sliders.ntcai.xyz/sliders/app/loras/f1edd2a6-de00-41dd-8c05-2efb1f98926d](https://sliders.ntcai.xyz/sliders/app/loras/f1edd2a6-de00-41dd-8c05-2efb1f98926d) ## Download Weights for this model are available in Safetensors format. ## Trigger words You can apply this LoRA with trigger words for additional effect: ``` fit ``` ## Use in diffusers ```python from diffusers import StableDiffusionXLPipeline from diffusers import EulerAncestralDiscreteScheduler import torch pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors") pipe.to("cuda") pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config) # Load the LoRA pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.fit', weight_name='fit.safetensors', adapter_name="fit") # Activate the LoRA pipe.set_adapters(["fit"], adapter_weights=[2.0]) prompt = "medieval rich kingpin sitting in a tavern, fit" negative_prompt = "nsfw" width = 512 height = 512 num_inference_steps = 10 guidance_scale = 2 image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0] image.save('result.png') ``` ## Support the Patreon If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI). By joining our Patreon, you'll gain access to an ever-growing library of over 1496+ unique and diverse LoRAs along with 14602+ slider merges, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful <strong>NTC Slider Factory</strong> LoRA creator, allowing you to craft your own custom LoRAs and merges opening up endless possibilities. Your support on Patreon will allow us to continue developing new models and tools. ## Other resources - [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs - [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
[ "CRAFT" ]
TheBloke/AmberChat-GPTQ
TheBloke
text-generation
[ "transformers", "safetensors", "llama", "text-generation", "nlp", "llm", "en", "dataset:WizardLM/WizardLM_evol_instruct_V2_196k", "dataset:icybee/share_gpt_90k_v1", "base_model:LLM360/AmberChat", "base_model:quantized:LLM360/AmberChat", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "4-bit", "gptq", "region:us" ]
2023-12-12T10:16:59Z
2023-12-12T10:37:36+00:00
14
1
--- base_model: LLM360/AmberChat datasets: - WizardLM/WizardLM_evol_instruct_V2_196k - icybee/share_gpt_90k_v1 language: - en library_name: transformers license: apache-2.0 model_name: AmberChat pipeline_tag: text-generation tags: - nlp - llm inference: false model_creator: LLM360 model_type: amber prompt_template: 'A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user''s questions. USER: {prompt} ASSISTANT: ' quantized_by: TheBloke widget: - example_title: example 1 output: text: "Mounting a TV to drywall safely is crucial to ensure the longevity of your\ \ TV and the wall. Here's a step-by-step guide on how to do it safely:\n 1.\ \ Choose the right mount: Choose a mount that is compatible with your TV and\ \ the wall. Make sure the mount is rated for drywall and not just wood or concrete.\n\ \ 2. Prepare the wall: Before mounting the TV, make sure the wall is clean and\ \ free from any debris or paint. Use a stud finder to locate the studs in the\ \ wall.\n 3. Install the mount: Follow the manufacturer's instructions for mounting\ \ the TV to the wall. Use appropriate hardware and tools to secure the mount\ \ to the wall.\n 4. Level the TV: Level the TV on the mount using a spirit level\ \ or a leveling kit provided by the mount manufacturer.\n 5. Attach the TV to\ \ the mount: Attach the TV to the mount using the appropriate hardware and tools.\ \ Tighten the bolts and screws to ensure the TV is securely attached.\n 6. Connect\ \ the cables: Connect the TV cables to the appropriate ports on the back of\ \ the TV and the mount.\n 7. Test the mount: Test the mount to ensure it's secure\ \ and stable. Adjust the mount as needed to ensure the TV is level and secure.\n\ \ Mounting a TV to drywall safely is crucial to avoid damaging the wall or the\ \ TV. Follow these steps carefully and use appropriate tools and hardware to\ \ ensure a secure and stable installation." text: How do I mount a tv to drywall safely? - example_title: example 2 output: text: The adjective that can be used to describe the opposite of calm is "anxious" or "stressed." So, from happy to sad, we can say that happy is to sad as calm is to anxious or stressed. text: Happy is to sad as calm is to _. --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # AmberChat - GPTQ - Model creator: [LLM360](https://huggingface.co/LLM360) - Original model: [AmberChat](https://huggingface.co/LLM360/AmberChat) <!-- description start --> # Description This repo contains GPTQ model files for [LLM360's AmberChat](https://huggingface.co/LLM360/AmberChat). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/AmberChat-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/AmberChat-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/AmberChat-GGUF) * [LLM360's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/LLM360/AmberChat) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Vicuna ``` A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT: ``` <!-- prompt-template end --> <!-- README_GPTQ.md-compatible clients start --> ## Known compatible clients / servers GPTQ models are currently supported on Linux (NVidia/AMD) and Windows (NVidia only). macOS users: please use GGUF models. These GPTQ models are known to work in the following inference servers/webuis. - [text-generation-webui](https://github.com/oobabooga/text-generation-webui) - [KoboldAI United](https://github.com/henk717/koboldai) - [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui) - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) This may not be a complete list; if you know of others, please let me know! <!-- README_GPTQ.md-compatible clients end --> <!-- README_GPTQ.md-provided-files start --> ## Provided files, and GPTQ parameters Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers. <details> <summary>Explanation of GPTQ parameters</summary> - Bits: The bit size of the quantised model. - GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value. - Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now. - Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy. - GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s). - Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences. - ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit. </details> | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc | | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | | [main](https://huggingface.co/TheBloke/AmberChat-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 2048 | 3.90 GB | No | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. | | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/AmberChat-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 2048 | 4.28 GB | No | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. | | [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/AmberChat-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 2048 | 7.01 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. | | [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/AmberChat-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 2048 | 7.16 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. | | [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/AmberChat-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 2048 | 7.62 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. | | [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/AmberChat-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 2048 | 4.02 GB | No | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. | <!-- README_GPTQ.md-provided-files end --> <!-- README_GPTQ.md-download-from-branches start --> ## How to download, including from branches ### In text-generation-webui To download from the `main` branch, enter `TheBloke/AmberChat-GPTQ` in the "Download model" box. To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/AmberChat-GPTQ:gptq-4bit-32g-actorder_True` ### From the command line I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` To download the `main` branch to a folder called `AmberChat-GPTQ`: ```shell mkdir AmberChat-GPTQ huggingface-cli download TheBloke/AmberChat-GPTQ --local-dir AmberChat-GPTQ --local-dir-use-symlinks False ``` To download from a different branch, add the `--revision` parameter: ```shell mkdir AmberChat-GPTQ huggingface-cli download TheBloke/AmberChat-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir AmberChat-GPTQ --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model. The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`. For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell mkdir AmberChat-GPTQ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/AmberChat-GPTQ --local-dir AmberChat-GPTQ --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> ### With `git` (**not** recommended) To clone a specific branch with `git`, use a command like this: ```shell git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/AmberChat-GPTQ ``` Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.) <!-- README_GPTQ.md-download-from-branches end --> <!-- README_GPTQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui) Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/AmberChat-GPTQ`. - To download from a specific branch, enter for example `TheBloke/AmberChat-GPTQ:gptq-4bit-32g-actorder_True` - see Provided Files above for the list of branches for each option. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `AmberChat-GPTQ` 7. The model will automatically load, and is now ready for use! 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. - Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`. 9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started! <!-- README_GPTQ.md-text-generation-webui end --> <!-- README_GPTQ.md-use-from-tgi start --> ## Serving this model from Text Generation Inference (TGI) It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0` Example Docker parameters: ```shell --model-id TheBloke/AmberChat-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096 ``` Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later): ```shell pip3 install huggingface-hub ``` ```python from huggingface_hub import InferenceClient endpoint_url = "https://your-endpoint-url-here" prompt = "Tell me about AI" prompt_template=f'''A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT: ''' client = InferenceClient(endpoint_url) response = client.text_generation(prompt, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1) print(f"Model output: {response}") ``` <!-- README_GPTQ.md-use-from-tgi end --> <!-- README_GPTQ.md-use-from-python start --> ## Python code example: inference from this GPTQ model ### Install the necessary packages Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later. ```shell pip3 install --upgrade transformers optimum # If using PyTorch 2.1 + CUDA 12.x: pip3 install --upgrade auto-gptq # or, if using PyTorch 2.1 + CUDA 11.x: pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ ``` If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source: ```shell pip3 uninstall -y auto-gptq git clone https://github.com/PanQiWei/AutoGPTQ cd AutoGPTQ git checkout v0.5.1 pip3 install . ``` ### Example Python code ```python from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline model_name_or_path = "TheBloke/AmberChat-GPTQ" # To use a different branch, change revision # For example: revision="gptq-4bit-32g-actorder_True" model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto", trust_remote_code=False, revision="main") tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) prompt = "Tell me about AI" prompt_template=f'''A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT: ''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(pipe(prompt_template)[0]['generated_text']) ``` <!-- README_GPTQ.md-use-from-python end --> <!-- README_GPTQ.md-compatibility start --> ## Compatibility The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly. [ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility. For a list of clients/servers, please see "Known compatible clients / servers", above. <!-- README_GPTQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: LLM360's AmberChat # AmberChat We present AmberChat, an instruction following model finetuned from [LLM360/Amber](https://huggingface.co/LLM360/Amber). ## Model Description - **Model type:** Language model with the same architecture as LLaMA-7B - **Language(s) (NLP):** English - **License:** Apache 2.0 - **Resources for more information:** - [Metrics](https://github.com/LLM360/Analysis360) - [Fully processed Amber pretraining data](https://huggingface.co/datasets/LLM360/AmberDatasets) # Loading AmberChat ```python import torch from transformers import LlamaTokenizer, LlamaForCausalLM tokenizer = LlamaTokenizer.from_pretrained("LLM360/AmberChat") model = LlamaForCausalLM.from_pretrained("LLM360/AmberChat") #template adapated from fastchat template= "A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions.\n### Human: Got any creative ideas for a 10 year old’s birthday?\n### Assistant: Of course! Here are some creative ideas for a 10-year-old's birthday party:\n1. Treasure Hunt: Organize a treasure hunt in your backyard or nearby park. Create clues and riddles for the kids to solve, leading them to hidden treasures and surprises.\n2. Science Party: Plan a science-themed party where kids can engage in fun and interactive experiments. You can set up different stations with activities like making slime, erupting volcanoes, or creating simple chemical reactions.\n3. Outdoor Movie Night: Set up a backyard movie night with a projector and a large screen or white sheet. Create a cozy seating area with blankets and pillows, and serve popcorn and snacks while the kids enjoy a favorite movie under the stars.\n4. DIY Crafts Party: Arrange a craft party where kids can unleash their creativity. Provide a variety of craft supplies like beads, paints, and fabrics, and let them create their own unique masterpieces to take home as party favors.\n5. Sports Olympics: Host a mini Olympics event with various sports and games. Set up different stations for activities like sack races, relay races, basketball shooting, and obstacle courses. Give out medals or certificates to the participants.\n6. Cooking Party: Have a cooking-themed party where the kids can prepare their own mini pizzas, cupcakes, or cookies. Provide toppings, frosting, and decorating supplies, and let them get hands-on in the kitchen.\n7. Superhero Training Camp: Create a superhero-themed party where the kids can engage in fun training activities. Set up an obstacle course, have them design their own superhero capes or masks, and organize superhero-themed games and challenges.\n8. Outdoor Adventure: Plan an outdoor adventure party at a local park or nature reserve. Arrange activities like hiking, nature scavenger hunts, or a picnic with games. Encourage exploration and appreciation for the outdoors.\nRemember to tailor the activities to the birthday child's interests and preferences. Have a great celebration!\n### Human: {prompt}\n### Assistant:" prompt = "How do I mount a tv to drywall safely?" input_str = template.format(prompt=prompt) input_ids = tokenizer(input_str, return_tensors="pt").input_ids outputs = model.generate(input_ids, max_length=1000) print(tokenizer.batch_decode(outputs[:, input_ids.shape[1]:-1])[0].strip()) ``` Alternatively, you may use [FastChat](https://github.com/lm-sys/FastChat): ```bash python3 -m fastchat.serve.cli --model-path LLM360/AmberChat ``` # AmberChat Finetuning Details ## DataMix | Subset | Number of rows | License | | ----------- | ----------- | ----------- | | WizardLM/WizardLM_evol_instruct_V2_196k | 143k | | | icybee/share_gpt_90k_v1 | 90k | cc0-1.0 | | Total | 233k | | ## Hyperparameters | Hyperparameter | Value | | ----------- | ----------- | | Total Parameters | 6.7B | | Hidden Size | 4096 | | Intermediate Size (MLPs) | 11008 | | Number of Attention Heads | 32 | | Number of Hidden Lyaers | 32 | | RMSNorm ɛ | 1e^-6 | | Max Seq Length | 2048 | | Vocab Size | 32000 | | Training Hyperparameter | Value | | ----------- | ----------- | | learning_rate | 2e-5 | | num_train_epochs | 3 | | per_device_train_batch_size | 2 | | gradient_accumulation_steps | 16 | | warmup_ratio | 0.04 | | model_max_length | 2048 | # Evaluation | Model | MT-Bench | |------------------------------------------------------|------------------------------------------------------------| | LLM360/Amber 359 | 2.48750 | | **LLM360/AmberChat** | **5.428125** | # Citation **BibTeX:** ```bibtex @article{xxx, title={XXX}, author={XXX}, journal={XXX}, year={2023} } ```
[ "CRAFT" ]
ntc-ai/SDXL-LoRA-slider.laser-background
ntc-ai
text-to-image
[ "diffusers", "text-to-image", "stable-diffusion-xl", "lora", "template:sd-lora", "template:sdxl-lora", "sdxl-sliders", "ntcai.xyz-sliders", "concept", "en", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:mit", "region:us" ]
2023-12-14T22:28:27Z
2024-02-06T00:32:59+00:00
14
0
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 language: - en license: mit tags: - text-to-image - stable-diffusion-xl - lora - template:sd-lora - template:sdxl-lora - sdxl-sliders - ntcai.xyz-sliders - concept - diffusers thumbnail: images/laser background_17_3.0.png widget: - text: laser background output: url: images/laser background_17_3.0.png - text: laser background output: url: images/laser background_19_3.0.png - text: laser background output: url: images/laser background_20_3.0.png - text: laser background output: url: images/laser background_21_3.0.png - text: laser background output: url: images/laser background_22_3.0.png inference: false instance_prompt: laser background --- # ntcai.xyz slider - laser background (SDXL LoRA) | Strength: -3 | Strength: 0 | Strength: 3 | | --- | --- | --- | | <img src="images/laser background_17_-3.0.png" width=256 height=256 /> | <img src="images/laser background_17_0.0.png" width=256 height=256 /> | <img src="images/laser background_17_3.0.png" width=256 height=256 /> | | <img src="images/laser background_19_-3.0.png" width=256 height=256 /> | <img src="images/laser background_19_0.0.png" width=256 height=256 /> | <img src="images/laser background_19_3.0.png" width=256 height=256 /> | | <img src="images/laser background_20_-3.0.png" width=256 height=256 /> | <img src="images/laser background_20_0.0.png" width=256 height=256 /> | <img src="images/laser background_20_3.0.png" width=256 height=256 /> | See more at [https://sliders.ntcai.xyz/sliders/app/loras/f18de67a-209d-4f2f-84d1-b1a03aba2d31](https://sliders.ntcai.xyz/sliders/app/loras/f18de67a-209d-4f2f-84d1-b1a03aba2d31) ## Download Weights for this model are available in Safetensors format. ## Trigger words You can apply this LoRA with trigger words for additional effect: ``` laser background ``` ## Use in diffusers ```python from diffusers import StableDiffusionXLPipeline from diffusers import EulerAncestralDiscreteScheduler import torch pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors") pipe.to("cuda") pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config) # Load the LoRA pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.laser-background', weight_name='laser background.safetensors', adapter_name="laser background") # Activate the LoRA pipe.set_adapters(["laser background"], adapter_weights=[2.0]) prompt = "medieval rich kingpin sitting in a tavern, laser background" negative_prompt = "nsfw" width = 512 height = 512 num_inference_steps = 10 guidance_scale = 2 image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0] image.save('result.png') ``` ## Support the Patreon If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI). By joining our Patreon, you'll gain access to an ever-growing library of over 1496+ unique and diverse LoRAs along with 14602+ slider merges, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful <strong>NTC Slider Factory</strong> LoRA creator, allowing you to craft your own custom LoRAs and merges opening up endless possibilities. Your support on Patreon will allow us to continue developing new models and tools. ## Other resources - [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs - [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
[ "CRAFT" ]
ntc-ai/SDXL-LoRA-slider.apocalyptic
ntc-ai
text-to-image
[ "diffusers", "text-to-image", "stable-diffusion-xl", "lora", "template:sd-lora", "template:sdxl-lora", "sdxl-sliders", "ntcai.xyz-sliders", "concept", "en", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:mit", "region:us" ]
2023-12-26T13:49:02Z
2023-12-26T13:49:05+00:00
14
0
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 language: - en license: mit tags: - text-to-image - stable-diffusion-xl - lora - template:sd-lora - template:sdxl-lora - sdxl-sliders - ntcai.xyz-sliders - concept - diffusers thumbnail: images/evaluate/apocalyptic.../apocalyptic_17_3.0.png widget: - text: apocalyptic output: url: images/apocalyptic_17_3.0.png - text: apocalyptic output: url: images/apocalyptic_19_3.0.png - text: apocalyptic output: url: images/apocalyptic_20_3.0.png - text: apocalyptic output: url: images/apocalyptic_21_3.0.png - text: apocalyptic output: url: images/apocalyptic_22_3.0.png inference: false instance_prompt: apocalyptic --- # ntcai.xyz slider - apocalyptic (SDXL LoRA) | Strength: -3 | Strength: 0 | Strength: 3 | | --- | --- | --- | | <img src="images/apocalyptic_17_-3.0.png" width=256 height=256 /> | <img src="images/apocalyptic_17_0.0.png" width=256 height=256 /> | <img src="images/apocalyptic_17_3.0.png" width=256 height=256 /> | | <img src="images/apocalyptic_19_-3.0.png" width=256 height=256 /> | <img src="images/apocalyptic_19_0.0.png" width=256 height=256 /> | <img src="images/apocalyptic_19_3.0.png" width=256 height=256 /> | | <img src="images/apocalyptic_20_-3.0.png" width=256 height=256 /> | <img src="images/apocalyptic_20_0.0.png" width=256 height=256 /> | <img src="images/apocalyptic_20_3.0.png" width=256 height=256 /> | ## Download Weights for this model are available in Safetensors format. ## Trigger words You can apply this LoRA with trigger words for additional effect: ``` apocalyptic ``` ## Use in diffusers ```python from diffusers import StableDiffusionXLPipeline from diffusers import EulerAncestralDiscreteScheduler import torch pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors") pipe.to("cuda") pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config) # Load the LoRA pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.apocalyptic', weight_name='apocalyptic.safetensors', adapter_name="apocalyptic") # Activate the LoRA pipe.set_adapters(["apocalyptic"], adapter_weights=[2.0]) prompt = "medieval rich kingpin sitting in a tavern, apocalyptic" negative_prompt = "nsfw" width = 512 height = 512 num_inference_steps = 10 guidance_scale = 2 image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0] image.save('result.png') ``` ## Support the Patreon If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI). By joining our Patreon, you'll gain access to an ever-growing library of over 640+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities. Your support on Patreon will allow us to continue developing and refining new models. ## Other resources - [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs - [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
[ "CRAFT" ]
ntc-ai/SDXL-LoRA-slider.in-deep-meditation
ntc-ai
text-to-image
[ "diffusers", "text-to-image", "stable-diffusion-xl", "lora", "template:sd-lora", "template:sdxl-lora", "sdxl-sliders", "ntcai.xyz-sliders", "concept", "en", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:mit", "region:us" ]
2024-01-07T02:09:08Z
2024-01-07T02:09:24+00:00
14
0
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 language: - en license: mit tags: - text-to-image - stable-diffusion-xl - lora - template:sd-lora - template:sdxl-lora - sdxl-sliders - ntcai.xyz-sliders - concept - diffusers thumbnail: images/evaluate/in deep meditation.../in deep meditation_17_3.0.png widget: - text: in deep meditation output: url: images/in deep meditation_17_3.0.png - text: in deep meditation output: url: images/in deep meditation_19_3.0.png - text: in deep meditation output: url: images/in deep meditation_20_3.0.png - text: in deep meditation output: url: images/in deep meditation_21_3.0.png - text: in deep meditation output: url: images/in deep meditation_22_3.0.png inference: false instance_prompt: in deep meditation --- # ntcai.xyz slider - in deep meditation (SDXL LoRA) | Strength: -3 | Strength: 0 | Strength: 3 | | --- | --- | --- | | <img src="images/in deep meditation_17_-3.0.png" width=256 height=256 /> | <img src="images/in deep meditation_17_0.0.png" width=256 height=256 /> | <img src="images/in deep meditation_17_3.0.png" width=256 height=256 /> | | <img src="images/in deep meditation_19_-3.0.png" width=256 height=256 /> | <img src="images/in deep meditation_19_0.0.png" width=256 height=256 /> | <img src="images/in deep meditation_19_3.0.png" width=256 height=256 /> | | <img src="images/in deep meditation_20_-3.0.png" width=256 height=256 /> | <img src="images/in deep meditation_20_0.0.png" width=256 height=256 /> | <img src="images/in deep meditation_20_3.0.png" width=256 height=256 /> | ## Download Weights for this model are available in Safetensors format. ## Trigger words You can apply this LoRA with trigger words for additional effect: ``` in deep meditation ``` ## Use in diffusers ```python from diffusers import StableDiffusionXLPipeline from diffusers import EulerAncestralDiscreteScheduler import torch pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors") pipe.to("cuda") pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config) # Load the LoRA pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.in-deep-meditation', weight_name='in deep meditation.safetensors', adapter_name="in deep meditation") # Activate the LoRA pipe.set_adapters(["in deep meditation"], adapter_weights=[2.0]) prompt = "medieval rich kingpin sitting in a tavern, in deep meditation" negative_prompt = "nsfw" width = 512 height = 512 num_inference_steps = 10 guidance_scale = 2 image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0] image.save('result.png') ``` ## Support the Patreon If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI). By joining our Patreon, you'll gain access to an ever-growing library of over 910+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities. Your support on Patreon will allow us to continue developing and refining new models. ## Other resources - [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs - [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
[ "CRAFT" ]
Seokeon/V14_R384_lora_pp_bear_plushie
Seokeon
text-to-image
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:CompVis/stable-diffusion-v1-4", "base_model:adapter:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "region:us" ]
2024-01-16T10:41:28Z
2024-01-16T10:48:12+00:00
14
1
--- base_model: CompVis/stable-diffusion-v1-4 license: creativeml-openrail-m tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora instance_prompt: a photo of sks stuffed animal inference: true --- # LoRA DreamBooth - Seokeon/V14_R384_lora_pp_bear_plushie These are LoRA adaption weights for CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks stuffed animal using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png) LoRA for the text encoder was enabled: False.
[ "BEAR" ]
abazoge/DrLongformer
abazoge
fill-mask
[ "transformers", "pytorch", "tensorboard", "longformer", "fill-mask", "biomedical", "medical", "clinical", "life science", "fr", "dataset:Dr-BERT/NACHOS", "arxiv:2402.16689", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-01-17T13:43:01Z
2024-02-27T06:48:16+00:00
14
1
--- datasets: - Dr-BERT/NACHOS language: - fr library_name: transformers license: apache-2.0 tags: - biomedical - medical - clinical - life science widget: - text: Le patient est atteint d'une <mask>. --- # DrLongformer <span style="font-size:larger;">**DrLongformer**</span> is a French pretrained Longformer model based on Clinical-Longformer that was further pretrained on the NACHOS dataset (same dataset as [DrBERT](https://github.com/qanastek/DrBERT)). This model allows up to 4,096 tokens as input. DrLongformer consistently outperforms medical BERT-based models across most downstream tasks regardless of sequence length, except on NER tasks. Evaluated downstream tasks cover named entity recognition (NER), question answering (MCQA), Semantic textual similarity (STS) and text classification tasks (CLS) from [DrBenchmark](https://huggingface.co/DrBenchmark). For more details, please refer to our paper: [Adaptation of Biomedical and Clinical Pretrained Models to French Long Documents: A Comparative Study](). ### Model pretraining We explored multiple strategies for the adaptation of Longformer models to the French medical domain: - Further pretraining of English clinical Longformer on French medical data. - Converting a French medical BERT model to the Longformer architecture. - Pretraining a Longformer from scratch on French medical data. All Pretraining scripts to reproduce the experiments are available in this Github repository: [DrLongformer](https://github.com/abazoge/DrLongformer). For the `from scratch` and `further pretraining` strategies, the training scripts are the same as [DrBERT](https://github.com/qanastek/DrBERT), only the bash scripts are different and available in this repository. All models were trained on the [Jean Zay](http://www.idris.fr/jean-zay/) French supercomputer. | Model name | Corpus | Pretraining strategy | Sequence Length | Model URL | | :------: | :---: | :---: | :---: | :---: | | `DrLongformer (DrLonformer-CP)` | NACHOS 7 GB | Further pretraining of [Clinical-Longformer](https://huggingface.co/yikuan8/Clinical-Longformer) | 4096 | [HuggingFace](https://huggingface.co/abazoge/DrLongformer) | | `DrBERT-4096` | NACHOS 7 GB | Conversion of [DrBERT-7B](https://huggingface.co/Dr-BERT/DrBERT-7GB) to the Longformer architecture | 4096 | [HuggingFace](https://huggingface.co/abazoge/DrBERT-4096) | | `DrLongformer-FS (from scratch)` | NACHOS 7 GB | Pretraining from scratch | 4096 | Not available | ### Model Usage You can use DrLongformer directly from [Hugging Face's Transformers](https://github.com/huggingface/transformers): ```python # !pip install transformers from transformers import AutoTokenizer, AutoModelForMaskedLM tokenizer = AutoTokenizer.from_pretrained("abazoge/DrLongformer") model = AutoModelForMaskedLM.from_pretrained("abazoge/DrLongformer") ``` ### Citation ``` @misc{bazoge2024adaptation, title={Adaptation of Biomedical and Clinical Pretrained Models to French Long Documents: A Comparative Study}, author={Adrien Bazoge and Emmanuel Morin and Beatrice Daille and Pierre-Antoine Gourraud}, year={2024}, eprint={2402.16689}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
[ "MEDICAL DATA" ]
ntc-ai/SDXL-LoRA-slider.back-to-the-future-film-still
ntc-ai
text-to-image
[ "diffusers", "text-to-image", "stable-diffusion-xl", "lora", "template:sd-lora", "template:sdxl-lora", "sdxl-sliders", "ntcai.xyz-sliders", "concept", "en", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:mit", "region:us" ]
2024-01-24T01:26:06Z
2024-01-24T01:26:09+00:00
14
0
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 language: - en license: mit tags: - text-to-image - stable-diffusion-xl - lora - template:sd-lora - template:sdxl-lora - sdxl-sliders - ntcai.xyz-sliders - concept - diffusers thumbnail: images/evaluate/back to the future film still.../back to the future film still_17_3.0.png widget: - text: back to the future film still output: url: images/back to the future film still_17_3.0.png - text: back to the future film still output: url: images/back to the future film still_19_3.0.png - text: back to the future film still output: url: images/back to the future film still_20_3.0.png - text: back to the future film still output: url: images/back to the future film still_21_3.0.png - text: back to the future film still output: url: images/back to the future film still_22_3.0.png inference: false instance_prompt: back to the future film still --- # ntcai.xyz slider - back to the future film still (SDXL LoRA) | Strength: -3 | Strength: 0 | Strength: 3 | | --- | --- | --- | | <img src="images/back to the future film still_17_-3.0.png" width=256 height=256 /> | <img src="images/back to the future film still_17_0.0.png" width=256 height=256 /> | <img src="images/back to the future film still_17_3.0.png" width=256 height=256 /> | | <img src="images/back to the future film still_19_-3.0.png" width=256 height=256 /> | <img src="images/back to the future film still_19_0.0.png" width=256 height=256 /> | <img src="images/back to the future film still_19_3.0.png" width=256 height=256 /> | | <img src="images/back to the future film still_20_-3.0.png" width=256 height=256 /> | <img src="images/back to the future film still_20_0.0.png" width=256 height=256 /> | <img src="images/back to the future film still_20_3.0.png" width=256 height=256 /> | ## Download Weights for this model are available in Safetensors format. ## Trigger words You can apply this LoRA with trigger words for additional effect: ``` back to the future film still ``` ## Use in diffusers ```python from diffusers import StableDiffusionXLPipeline from diffusers import EulerAncestralDiscreteScheduler import torch pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors") pipe.to("cuda") pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config) # Load the LoRA pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.back-to-the-future-film-still', weight_name='back to the future film still.safetensors', adapter_name="back to the future film still") # Activate the LoRA pipe.set_adapters(["back to the future film still"], adapter_weights=[2.0]) prompt = "medieval rich kingpin sitting in a tavern, back to the future film still" negative_prompt = "nsfw" width = 512 height = 512 num_inference_steps = 10 guidance_scale = 2 image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0] image.save('result.png') ``` ## Support the Patreon If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI). By joining our Patreon, you'll gain access to an ever-growing library of over 1140+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities. Your support on Patreon will allow us to continue developing and refining new models. ## Other resources - [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs - [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
[ "CRAFT" ]
ntc-ai/SDXL-LoRA-slider.dynamic-anatomy
ntc-ai
text-to-image
[ "diffusers", "text-to-image", "stable-diffusion-xl", "lora", "template:sd-lora", "template:sdxl-lora", "sdxl-sliders", "ntcai.xyz-sliders", "concept", "en", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:mit", "region:us" ]
2024-01-24T19:26:47Z
2024-01-24T19:26:50+00:00
14
1
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 language: - en license: mit tags: - text-to-image - stable-diffusion-xl - lora - template:sd-lora - template:sdxl-lora - sdxl-sliders - ntcai.xyz-sliders - concept - diffusers thumbnail: images/evaluate/dynamic anatomy.../dynamic anatomy_17_3.0.png widget: - text: dynamic anatomy output: url: images/dynamic anatomy_17_3.0.png - text: dynamic anatomy output: url: images/dynamic anatomy_19_3.0.png - text: dynamic anatomy output: url: images/dynamic anatomy_20_3.0.png - text: dynamic anatomy output: url: images/dynamic anatomy_21_3.0.png - text: dynamic anatomy output: url: images/dynamic anatomy_22_3.0.png inference: false instance_prompt: dynamic anatomy --- # ntcai.xyz slider - dynamic anatomy (SDXL LoRA) | Strength: -3 | Strength: 0 | Strength: 3 | | --- | --- | --- | | <img src="images/dynamic anatomy_17_-3.0.png" width=256 height=256 /> | <img src="images/dynamic anatomy_17_0.0.png" width=256 height=256 /> | <img src="images/dynamic anatomy_17_3.0.png" width=256 height=256 /> | | <img src="images/dynamic anatomy_19_-3.0.png" width=256 height=256 /> | <img src="images/dynamic anatomy_19_0.0.png" width=256 height=256 /> | <img src="images/dynamic anatomy_19_3.0.png" width=256 height=256 /> | | <img src="images/dynamic anatomy_20_-3.0.png" width=256 height=256 /> | <img src="images/dynamic anatomy_20_0.0.png" width=256 height=256 /> | <img src="images/dynamic anatomy_20_3.0.png" width=256 height=256 /> | ## Download Weights for this model are available in Safetensors format. ## Trigger words You can apply this LoRA with trigger words for additional effect: ``` dynamic anatomy ``` ## Use in diffusers ```python from diffusers import StableDiffusionXLPipeline from diffusers import EulerAncestralDiscreteScheduler import torch pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors") pipe.to("cuda") pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config) # Load the LoRA pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.dynamic-anatomy', weight_name='dynamic anatomy.safetensors', adapter_name="dynamic anatomy") # Activate the LoRA pipe.set_adapters(["dynamic anatomy"], adapter_weights=[2.0]) prompt = "medieval rich kingpin sitting in a tavern, dynamic anatomy" negative_prompt = "nsfw" width = 512 height = 512 num_inference_steps = 10 guidance_scale = 2 image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0] image.save('result.png') ``` ## Support the Patreon If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI). By joining our Patreon, you'll gain access to an ever-growing library of over 1140+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities. Your support on Patreon will allow us to continue developing and refining new models. ## Other resources - [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs - [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
[ "CRAFT" ]
gabrielandrade2/point-to-span-estimation
gabrielandrade2
token-classification
[ "transformers", "pytorch", "bert", "token-classification", "ja", "license:gpl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-02-20T03:42:06Z
2024-02-22T07:54:51+00:00
14
0
--- language: ja license: gpl-3.0 widget: - text: 今回も意識⧫障害が出現し救急外来を受診した。 --- A model used to estimate the start and end of a Named Entity (NE) span based on a Point annotation, as used in the paper "Is boundary annotation necessary? Evaluating boundary-free approaches to improve clinical named entity annotation efficiency". Basically, the goal of this model is to convert a point annotation to a corresponding span annotation with the correct span. The model locates an identifier token (⧫) and based on its surround context estimates where the NE concept starts and ends. The model is trained to estimate the spans of diseases and symptom names in Japanese medical texts. If you want to re-train the model for a different language or domain, dataset preprocessing and training scripts are available [here](https://github.com/gabrielandrade2/Point-to-Span-estimation). ## Concepts ### Point annotation Unlike span-based paradigms, a point annotation is composed by a single position within the NE span. It is a simple and fast way to annotate NEs, but it introduces ambiguity in the information captured by the annotation. On this repository implementation, a point annotation is represented by a lozenge character (⧫). Example: ``` The patient has a history of dia⧫betes. ``` ### Span annotation A span annotation is composed by the two markings, identifying both start and end positions of the NE span. The implementation on this repository is based on the span annotation schema defined by [Yada et al. (2020)](https://aclanthology.org/2020.lrec-1.561/). Example: ``` The patient has a history of <C>diabetes</C>. ``` ## Model architecture This model was fine-tuned on top of [cl-tohoku/bert-base-japanese-char-v2] (https://huggingface.co/cl-tohoku/bert-base-japanese-char-v2). The model architecture is the same as the original BERT base model; 12 layers, 768 dimensions of hidden states, and 12 attention heads. To be executed, this model requires the following dependencies: - fugashi - unidic-lite ## Training data The model was finetuned using a dataset of Japanese medical texts (which is not available pubicly), comprised of 1027 synthetic medication history notes generated through crowd-sourcing. Ten experienced dispensing pharmacists were hired as writers to craft the corpus. Each writer was assigned one of 285 drug names and tasked with creating a ``typical'' clinical narrative. This corpus was later fully annotated for symptoms and disease names. Each annotation received a ⧫ token within its span based on a Truncated normal distribution. The model was then trained to identify this token and output a span corresponding to the surrounding concept. ## Usage The `requirements.txt` file contains all the dependencies needed to run the example code. ```python import mojimoji import numpy as np from transformers import AutoTokenizer, AutoModelForTokenClassification import iob_util #pip install git+https://github.com/gabrielandrade2/IOB-util.git model_name = "gabrielandrade2/point-to-span-estimation" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForTokenClassification.from_pretrained(model_name) # Point-annotated text text = "肥大型心⧫筋症、心房⧫細動に対してWF投与が開始となった。\ 治療経過中に非持続性心⧫室頻拍が認められたためアミオダロンが併用となった。" # Convert to zenkaku and tokenize text = mojimoji.han_to_zen(text) tokenized = tokenizer.tokenize(text) # Encode text input_ids = tokenizer.encode(text, return_tensors="pt") # Predict spans output = model(input_ids) logits = output[0].detach().cpu().numpy() tags = np.argmax(logits, axis=2)[:, :].tolist()[0] # Convert model output to IOB format id2label = model.config.id2label tags = [id2label[t] for t in tags] # Convert input_ids back to chars tokens = [tokenizer.convert_ids_to_tokens(t) for t in input_ids][0] # Remove model special tokens (CLS, SEP, PAD) tags = [y for x, y in zip(tokens, tags) if x not in ['[CLS]', '[SEP]', '[PAD]']] tokens = [x for x in tokens if x not in ['[CLS]', '[SEP]', '[PAD]']] # Convert from IOB to XML tag format xml_text = iob_util.convert_iob_to_xml(tokens, tags) xml_text = xml_text.replace('⧫', '') print(xml_text) ``` ### Output ```xml <C>肥大型心筋症</C>、<C>心房細動</C>に対してWF投与が開始となった。治療経過中に<C>非持続性心室頻拍</C>が認められたためアミオダロンが併用となった。 ```
[ "CRAFT" ]
FreedomIntelligence/Apollo-0.5B-GGUF
FreedomIntelligence
null
[ "gguf", "arxiv:2403.03640", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
2024-03-22T11:14:39Z
2024-04-23T09:31:25+00:00
14
1
--- license: apache-2.0 --- # Multilingual Medicine: Model, Dataset, Benchmark, Code Covering English, Chinese, French, Hindi, Spanish, Hindi, Arabic So far <p align="center"> 👨🏻‍💻<a href="https://github.com/FreedomIntelligence/Apollo" target="_blank">Github</a> •📃 <a href="https://arxiv.org/abs/2403.03640" target="_blank">Paper</a> • 🌐 <a href="https://apollo.llmzoo.com/" target="_blank">Demo</a> • 🤗 <a href="https://huggingface.co/datasets/FreedomIntelligence/ApolloCorpus" target="_blank">ApolloCorpus</a> • 🤗 <a href="https://huggingface.co/datasets/FreedomIntelligence/XMedbench" target="_blank">XMedBench</a> <br> <a href="./README_zh.md"> 中文 </a> | <a href="./README.md"> English </p> ![Apollo](assets/apollo_medium_final.png) ## 🌈 Update * **[2024.03.07]** [Paper](https://arxiv.org/abs/2403.03640) released. * **[2024.02.12]** <a href="https://huggingface.co/datasets/FreedomIntelligence/ApolloCorpus" target="_blank">ApolloCorpus</a> and <a href="https://huggingface.co/datasets/FreedomIntelligence/XMedbench" target="_blank">XMedBench</a> is published!🎉 * **[2024.01.23]** Apollo repo is published!🎉 ## Results 🤗<a href="https://huggingface.co/FreedomIntelligence/Apollo-0.5B" target="_blank">Apollo-0.5B</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-1.8B" target="_blank">Apollo-1.8B</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-2B" target="_blank">Apollo-2B</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-6B" target="_blank">Apollo-6B</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-7B" target="_blank">Apollo-7B</a> 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-34B" target="_blank">Apollo-34B</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-72B" target="_blank">Apollo-72B</a> 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-0.5B-GGUF" target="_blank">Apollo-0.5B-GGUF</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-2B-GGUF" target="_blank">Apollo-2B-GGUF</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-6B-GGUF" target="_blank">Apollo-6B-GGUF</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-7B-GGUF" target="_blank">Apollo-7B-GGUF</a> ![Apollo](assets/result.png) ## Dataset & Evaluation - Dataset 🤗 <a href="https://huggingface.co/datasets/FreedomIntelligence/ApolloCorpus" target="_blank">ApolloCorpus</a> <details><summary>Click to expand</summary> ![Apollo](assets/dataset.png) - [Zip File](https://huggingface.co/datasets/FreedomIntelligence/ApolloCorpus/blob/main/ApolloCorpus.zip) - [Data category](https://huggingface.co/datasets/FreedomIntelligence/ApolloCorpus/tree/main/train) - Pretrain: - data item: - json_name: {data_source}_{language}_{data_type}.json - data_type: medicalBook, medicalGuideline, medicalPaper, medicalWeb(from online forum), medicalWiki - language: en(English), zh(chinese), es(spanish), fr(french), hi(Hindi) - data_type: qa(generated qa from text) - data_type==text: list of string ``` [ "string1", "string2", ... ] ``` - data_type==qa: list of qa pairs(list of string) ``` [ [ "q1", "a1", "q2", "a2", ... ], ... ] ``` - SFT: - json_name: {data_source}_{language}.json - data_type: code, general, math, medicalExam, medicalPatient - data item: list of qa pairs(list of string) ``` [ [ "q1", "a1", "q2", "a2", ... ], ... ] ``` </details> - Evaluation 🤗 <a href="https://huggingface.co/datasets/FreedomIntelligence/XMedbench" target="_blank">XMedBench</a> <details><summary>Click to expand</summary> - EN: - [MedQA-USMLE](https://huggingface.co/datasets/GBaker/MedQA-USMLE-4-options) - [MedMCQA](https://huggingface.co/datasets/medmcqa/viewer/default/test) - [PubMedQA](https://huggingface.co/datasets/pubmed_qa): Because the results fluctuated too much, they were not used in the paper. - [MMLU-Medical](https://huggingface.co/datasets/cais/mmlu) - Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine - ZH: - [MedQA-MCMLE](https://huggingface.co/datasets/bigbio/med_qa/viewer/med_qa_zh_4options_bigbio_qa/test) - [CMB-single](https://huggingface.co/datasets/FreedomIntelligence/CMB): Not used in the paper - Randomly sample 2,000 multiple-choice questions with single answer. - [CMMLU-Medical](https://huggingface.co/datasets/haonan-li/cmmlu) - Anatomy, Clinical_knowledge, College_medicine, Genetics, Nutrition, Traditional_chinese_medicine, Virology - [CExam](https://github.com/williamliujl/CMExam): Not used in the paper - Randomly sample 2,000 multiple-choice questions - ES: [Head_qa](https://huggingface.co/datasets/head_qa) - FR: [Frenchmedmcqa](https://github.com/qanastek/FrenchMedMCQA) - HI: [MMLU_HI](https://huggingface.co/datasets/FreedomIntelligence/MMLU_Arabic) - Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine - AR: [MMLU_Ara](https://huggingface.co/datasets/FreedomIntelligence/MMLU_Hindi) - Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine </details> ## Results reproduction <details><summary>Click to expand</summary> **Waiting for Update** </details> ## Citation Please use the following citation if you intend to use our dataset for training or evaluation: ``` @misc{wang2024apollo, title={Apollo: Lightweight Multilingual Medical LLMs towards Democratizing Medical AI to 6B People}, author={Xidong Wang and Nuo Chen and Junyin Chen and Yan Hu and Yidong Wang and Xiangbo Wu and Anningzhe Gao and Xiang Wan and Haizhou Li and Benyou Wang}, year={2024}, eprint={2403.03640}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
[ "HEAD-QA", "MEDQA", "PUBMEDQA" ]
bartowski/Einstein-v6-7B-exl2
bartowski
text-generation
[ "axolotl", "generated_from_trainer", "Mistral", "instruct", "finetune", "chatml", "gpt4", "synthetic data", "science", "physics", "chemistry", "biology", "math", "text-generation", "en", "dataset:allenai/ai2_arc", "dataset:camel-ai/physics", "dataset:camel-ai/chemistry", "dataset:camel-ai/biology", "dataset:camel-ai/math", "dataset:metaeval/reclor", "dataset:openbookqa", "dataset:mandyyyyii/scibench", "dataset:derek-thomas/ScienceQA", "dataset:TIGER-Lab/ScienceEval", "dataset:jondurbin/airoboros-3.2", "dataset:LDJnr/Capybara", "dataset:Cot-Alpaca-GPT4-From-OpenHermes-2.5", "dataset:STEM-AI-mtl/Electrical-engineering", "dataset:knowrohit07/saraswati-stem", "dataset:sablo/oasst2_curated", "dataset:lmsys/lmsys-chat-1m", "dataset:TIGER-Lab/MathInstruct", "dataset:bigbio/med_qa", "dataset:meta-math/MetaMathQA-40K", "dataset:piqa", "dataset:scibench", "dataset:sciq", "dataset:Open-Orca/SlimOrca", "dataset:migtissera/Synthia-v1.3", "dataset:allenai/WildChat", "dataset:microsoft/orca-math-word-problems-200k", "dataset:openchat/openchat_sharegpt4_dataset", "dataset:teknium/GPTeacher-General-Instruct", "dataset:m-a-p/CodeFeedback-Filtered-Instruction", "dataset:totally-not-an-llm/EverythingLM-data-V3", "dataset:HuggingFaceH4/no_robots", "dataset:OpenAssistant/oasst_top1_2023-08-25", "dataset:WizardLM/WizardLM_evol_instruct_70k", "base_model:mistral-community/Mistral-7B-v0.2", "base_model:finetune:mistral-community/Mistral-7B-v0.2", "license:other", "region:us" ]
2024-04-07T23:29:21Z
2024-04-07T23:29:21+00:00
14
2
--- base_model: alpindale/Mistral-7B-v0.2-hf datasets: - allenai/ai2_arc - camel-ai/physics - camel-ai/chemistry - camel-ai/biology - camel-ai/math - metaeval/reclor - openbookqa - mandyyyyii/scibench - derek-thomas/ScienceQA - TIGER-Lab/ScienceEval - jondurbin/airoboros-3.2 - LDJnr/Capybara - Cot-Alpaca-GPT4-From-OpenHermes-2.5 - STEM-AI-mtl/Electrical-engineering - knowrohit07/saraswati-stem - sablo/oasst2_curated - lmsys/lmsys-chat-1m - TIGER-Lab/MathInstruct - bigbio/med_qa - meta-math/MetaMathQA-40K - openbookqa - piqa - metaeval/reclor - derek-thomas/ScienceQA - scibench - sciq - Open-Orca/SlimOrca - migtissera/Synthia-v1.3 - TIGER-Lab/ScienceEval - allenai/WildChat - microsoft/orca-math-word-problems-200k - openchat/openchat_sharegpt4_dataset - teknium/GPTeacher-General-Instruct - m-a-p/CodeFeedback-Filtered-Instruction - totally-not-an-llm/EverythingLM-data-V3 - HuggingFaceH4/no_robots - OpenAssistant/oasst_top1_2023-08-25 - WizardLM/WizardLM_evol_instruct_70k language: - en license: other pipeline_tag: text-generation tags: - axolotl - generated_from_trainer - Mistral - instruct - finetune - chatml - gpt4 - synthetic data - science - physics - chemistry - biology - math quantized_by: bartowski --- ## Exllama v2 Quantizations of Einstein-v6-7B Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.18">turboderp's ExLlamaV2 v0.0.18</a> for quantization. <b>The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)</b> Each branch contains an individual bits per weight, with the main one containing only the meaurement.json for further conversions. Original model: https://huggingface.co/Weyaxi/Einstein-v6-7B | Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description | | ----- | ---- | ------- | ------ | ------ | ------ | ------------ | | [8_0](https://huggingface.co/bartowski/Einstein-v6-7B-exl2/tree/8_0) | 8.0 | 8.0 | 8.4 GB | 9.8 GB | 11.8 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. | | [6_5](https://huggingface.co/bartowski/Einstein-v6-7B-exl2/tree/6_5) | 6.5 | 8.0 | 7.2 GB | 8.6 GB | 10.6 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. | | [5_0](https://huggingface.co/bartowski/Einstein-v6-7B-exl2/tree/5_0) | 5.0 | 6.0 | 6.0 GB | 7.4 GB | 9.4 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. | | [4_25](https://huggingface.co/bartowski/Einstein-v6-7B-exl2/tree/4_25) | 4.25 | 6.0 | 5.3 GB | 6.7 GB | 8.7 GB | GPTQ equivalent bits per weight, slightly higher quality. | | [3_5](https://huggingface.co/bartowski/Einstein-v6-7B-exl2/tree/3_5) | 3.5 | 6.0 | 4.7 GB | 6.1 GB | 8.1 GB | Lower quality, only use if you have to. | ## Download instructions With git: ```shell git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/Einstein-v6-7B-exl2 Einstein-v6-7B-exl2-6_5 ``` With huggingface hub (credit to TheBloke for instructions): ```shell pip3 install huggingface-hub ``` To download the `main` (only useful if you only care about measurement.json) branch to a folder called `Einstein-v6-7B-exl2`: ```shell mkdir Einstein-v6-7B-exl2 huggingface-cli download bartowski/Einstein-v6-7B-exl2 --local-dir Einstein-v6-7B-exl2 --local-dir-use-symlinks False ``` To download from a different branch, add the `--revision` parameter: Linux: ```shell mkdir Einstein-v6-7B-exl2-6_5 huggingface-cli download bartowski/Einstein-v6-7B-exl2 --revision 6_5 --local-dir Einstein-v6-7B-exl2-6_5 --local-dir-use-symlinks False ``` Windows (which apparently doesn't like _ in folders sometimes?): ```shell mkdir Einstein-v6-7B-exl2-6.5 huggingface-cli download bartowski/Einstein-v6-7B-exl2 --revision 6_5 --local-dir Einstein-v6-7B-exl2-6.5 --local-dir-use-symlinks False ``` Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
[ "SCIQ" ]
srikanthmalla/hkunlp-instructor-xl
srikanthmalla
sentence-similarity
[ "sentence-transformers", "pytorch", "t5", "text-embedding", "embeddings", "information-retrieval", "beir", "text-classification", "language-model", "text-clustering", "text-semantic-similarity", "text-evaluation", "prompt-retrieval", "text-reranking", "feature-extraction", "sentence-similarity", "transformers", "English", "Sentence Similarity", "natural_questions", "ms_marco", "fever", "hotpot_qa", "mteb", "en", "arxiv:2212.09741", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "region:us" ]
2024-04-10T21:02:43Z
2024-04-10T21:02:44+00:00
14
0
--- language: en license: apache-2.0 pipeline_tag: sentence-similarity tags: - text-embedding - embeddings - information-retrieval - beir - text-classification - language-model - text-clustering - text-semantic-similarity - text-evaluation - prompt-retrieval - text-reranking - sentence-transformers - feature-extraction - sentence-similarity - transformers - t5 - English - Sentence Similarity - natural_questions - ms_marco - fever - hotpot_qa - mteb inference: false model-index: - name: final_xl_results results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 85.08955223880596 - type: ap value: 52.66066378722476 - type: f1 value: 79.63340218960269 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 86.542 - type: ap value: 81.92695193008987 - type: f1 value: 86.51466132573681 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 42.964 - type: f1 value: 41.43146249774862 - task: type: Retrieval dataset: name: MTEB ArguAna type: arguana config: default split: test revision: None metrics: - type: map_at_1 value: 29.872 - type: map_at_10 value: 46.342 - type: map_at_100 value: 47.152 - type: map_at_1000 value: 47.154 - type: map_at_3 value: 41.216 - type: map_at_5 value: 44.035999999999994 - type: mrr_at_1 value: 30.939 - type: mrr_at_10 value: 46.756 - type: mrr_at_100 value: 47.573 - type: mrr_at_1000 value: 47.575 - type: mrr_at_3 value: 41.548 - type: mrr_at_5 value: 44.425 - type: ndcg_at_1 value: 29.872 - type: ndcg_at_10 value: 55.65 - type: ndcg_at_100 value: 58.88099999999999 - type: ndcg_at_1000 value: 58.951 - type: ndcg_at_3 value: 45.0 - type: ndcg_at_5 value: 50.09 - type: precision_at_1 value: 29.872 - type: precision_at_10 value: 8.549 - type: precision_at_100 value: 0.991 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 18.658 - type: precision_at_5 value: 13.669999999999998 - type: recall_at_1 value: 29.872 - type: recall_at_10 value: 85.491 - type: recall_at_100 value: 99.075 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 55.974000000000004 - type: recall_at_5 value: 68.35 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 42.452729850641276 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 32.21141846480423 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 65.34710928952622 - type: mrr value: 77.61124301983028 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_spearman value: 84.15312230525639 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 82.66233766233766 - type: f1 value: 82.04175284777669 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 37.36697339826455 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 30.551241447593092 - task: type: Retrieval dataset: name: MTEB CQADupstackAndroidRetrieval type: BeIR/cqadupstack config: default split: test revision: None metrics: - type: map_at_1 value: 36.797000000000004 - type: map_at_10 value: 48.46 - type: map_at_100 value: 49.968 - type: map_at_1000 value: 50.080000000000005 - type: map_at_3 value: 44.71 - type: map_at_5 value: 46.592 - type: mrr_at_1 value: 45.494 - type: mrr_at_10 value: 54.747 - type: mrr_at_100 value: 55.43599999999999 - type: mrr_at_1000 value: 55.464999999999996 - type: mrr_at_3 value: 52.361000000000004 - type: mrr_at_5 value: 53.727000000000004 - type: ndcg_at_1 value: 45.494 - type: ndcg_at_10 value: 54.989 - type: ndcg_at_100 value: 60.096000000000004 - type: ndcg_at_1000 value: 61.58 - type: ndcg_at_3 value: 49.977 - type: ndcg_at_5 value: 51.964999999999996 - type: precision_at_1 value: 45.494 - type: precision_at_10 value: 10.558 - type: precision_at_100 value: 1.6049999999999998 - type: precision_at_1000 value: 0.203 - type: precision_at_3 value: 23.796 - type: precision_at_5 value: 16.881 - type: recall_at_1 value: 36.797000000000004 - type: recall_at_10 value: 66.83 - type: recall_at_100 value: 88.34100000000001 - type: recall_at_1000 value: 97.202 - type: recall_at_3 value: 51.961999999999996 - type: recall_at_5 value: 57.940000000000005 - type: map_at_1 value: 32.597 - type: map_at_10 value: 43.424 - type: map_at_100 value: 44.78 - type: map_at_1000 value: 44.913 - type: map_at_3 value: 40.315 - type: map_at_5 value: 41.987 - type: mrr_at_1 value: 40.382 - type: mrr_at_10 value: 49.219 - type: mrr_at_100 value: 49.895 - type: mrr_at_1000 value: 49.936 - type: mrr_at_3 value: 46.996 - type: mrr_at_5 value: 48.231 - type: ndcg_at_1 value: 40.382 - type: ndcg_at_10 value: 49.318 - type: ndcg_at_100 value: 53.839999999999996 - type: ndcg_at_1000 value: 55.82899999999999 - type: ndcg_at_3 value: 44.914 - type: ndcg_at_5 value: 46.798 - type: precision_at_1 value: 40.382 - type: precision_at_10 value: 9.274000000000001 - type: precision_at_100 value: 1.497 - type: precision_at_1000 value: 0.198 - type: precision_at_3 value: 21.592 - type: precision_at_5 value: 15.159 - type: recall_at_1 value: 32.597 - type: recall_at_10 value: 59.882000000000005 - type: recall_at_100 value: 78.446 - type: recall_at_1000 value: 90.88000000000001 - type: recall_at_3 value: 46.9 - type: recall_at_5 value: 52.222 - type: map_at_1 value: 43.8 - type: map_at_10 value: 57.293000000000006 - type: map_at_100 value: 58.321 - type: map_at_1000 value: 58.361 - type: map_at_3 value: 53.839999999999996 - type: map_at_5 value: 55.838 - type: mrr_at_1 value: 49.592000000000006 - type: mrr_at_10 value: 60.643 - type: mrr_at_100 value: 61.23499999999999 - type: mrr_at_1000 value: 61.251999999999995 - type: mrr_at_3 value: 58.265 - type: mrr_at_5 value: 59.717 - type: ndcg_at_1 value: 49.592000000000006 - type: ndcg_at_10 value: 63.364 - type: ndcg_at_100 value: 67.167 - type: ndcg_at_1000 value: 67.867 - type: ndcg_at_3 value: 57.912 - type: ndcg_at_5 value: 60.697 - type: precision_at_1 value: 49.592000000000006 - type: precision_at_10 value: 10.088 - type: precision_at_100 value: 1.2930000000000001 - type: precision_at_1000 value: 0.13899999999999998 - type: precision_at_3 value: 25.789 - type: precision_at_5 value: 17.541999999999998 - type: recall_at_1 value: 43.8 - type: recall_at_10 value: 77.635 - type: recall_at_100 value: 93.748 - type: recall_at_1000 value: 98.468 - type: recall_at_3 value: 63.223 - type: recall_at_5 value: 70.122 - type: map_at_1 value: 27.721 - type: map_at_10 value: 35.626999999999995 - type: map_at_100 value: 36.719 - type: map_at_1000 value: 36.8 - type: map_at_3 value: 32.781 - type: map_at_5 value: 34.333999999999996 - type: mrr_at_1 value: 29.604999999999997 - type: mrr_at_10 value: 37.564 - type: mrr_at_100 value: 38.505 - type: mrr_at_1000 value: 38.565 - type: mrr_at_3 value: 34.727000000000004 - type: mrr_at_5 value: 36.207 - type: ndcg_at_1 value: 29.604999999999997 - type: ndcg_at_10 value: 40.575 - type: ndcg_at_100 value: 45.613 - type: ndcg_at_1000 value: 47.676 - type: ndcg_at_3 value: 34.811 - type: ndcg_at_5 value: 37.491 - type: precision_at_1 value: 29.604999999999997 - type: precision_at_10 value: 6.1690000000000005 - type: precision_at_100 value: 0.906 - type: precision_at_1000 value: 0.11199999999999999 - type: precision_at_3 value: 14.237 - type: precision_at_5 value: 10.056 - type: recall_at_1 value: 27.721 - type: recall_at_10 value: 54.041 - type: recall_at_100 value: 76.62299999999999 - type: recall_at_1000 value: 92.134 - type: recall_at_3 value: 38.582 - type: recall_at_5 value: 44.989000000000004 - type: map_at_1 value: 16.553 - type: map_at_10 value: 25.384 - type: map_at_100 value: 26.655 - type: map_at_1000 value: 26.778000000000002 - type: map_at_3 value: 22.733 - type: map_at_5 value: 24.119 - type: mrr_at_1 value: 20.149 - type: mrr_at_10 value: 29.705 - type: mrr_at_100 value: 30.672 - type: mrr_at_1000 value: 30.737 - type: mrr_at_3 value: 27.032 - type: mrr_at_5 value: 28.369 - type: ndcg_at_1 value: 20.149 - type: ndcg_at_10 value: 30.843999999999998 - type: ndcg_at_100 value: 36.716 - type: ndcg_at_1000 value: 39.495000000000005 - type: ndcg_at_3 value: 25.918999999999997 - type: ndcg_at_5 value: 27.992 - type: precision_at_1 value: 20.149 - type: precision_at_10 value: 5.858 - type: precision_at_100 value: 1.009 - type: precision_at_1000 value: 0.13799999999999998 - type: precision_at_3 value: 12.645000000000001 - type: precision_at_5 value: 9.179 - type: recall_at_1 value: 16.553 - type: recall_at_10 value: 43.136 - type: recall_at_100 value: 68.562 - type: recall_at_1000 value: 88.208 - type: recall_at_3 value: 29.493000000000002 - type: recall_at_5 value: 34.751 - type: map_at_1 value: 28.000999999999998 - type: map_at_10 value: 39.004 - type: map_at_100 value: 40.461999999999996 - type: map_at_1000 value: 40.566 - type: map_at_3 value: 35.805 - type: map_at_5 value: 37.672 - type: mrr_at_1 value: 33.782000000000004 - type: mrr_at_10 value: 44.702 - type: mrr_at_100 value: 45.528 - type: mrr_at_1000 value: 45.576 - type: mrr_at_3 value: 42.14 - type: mrr_at_5 value: 43.651 - type: ndcg_at_1 value: 33.782000000000004 - type: ndcg_at_10 value: 45.275999999999996 - type: ndcg_at_100 value: 50.888 - type: ndcg_at_1000 value: 52.879 - type: ndcg_at_3 value: 40.191 - type: ndcg_at_5 value: 42.731 - type: precision_at_1 value: 33.782000000000004 - type: precision_at_10 value: 8.200000000000001 - type: precision_at_100 value: 1.287 - type: precision_at_1000 value: 0.16199999999999998 - type: precision_at_3 value: 19.185 - type: precision_at_5 value: 13.667000000000002 - type: recall_at_1 value: 28.000999999999998 - type: recall_at_10 value: 58.131 - type: recall_at_100 value: 80.869 - type: recall_at_1000 value: 93.931 - type: recall_at_3 value: 44.161 - type: recall_at_5 value: 50.592000000000006 - type: map_at_1 value: 28.047 - type: map_at_10 value: 38.596000000000004 - type: map_at_100 value: 40.116 - type: map_at_1000 value: 40.232 - type: map_at_3 value: 35.205 - type: map_at_5 value: 37.076 - type: mrr_at_1 value: 34.932 - type: mrr_at_10 value: 44.496 - type: mrr_at_100 value: 45.47 - type: mrr_at_1000 value: 45.519999999999996 - type: mrr_at_3 value: 41.743 - type: mrr_at_5 value: 43.352000000000004 - type: ndcg_at_1 value: 34.932 - type: ndcg_at_10 value: 44.901 - type: ndcg_at_100 value: 50.788999999999994 - type: ndcg_at_1000 value: 52.867 - type: ndcg_at_3 value: 39.449 - type: ndcg_at_5 value: 41.929 - type: precision_at_1 value: 34.932 - type: precision_at_10 value: 8.311 - type: precision_at_100 value: 1.3050000000000002 - type: precision_at_1000 value: 0.166 - type: precision_at_3 value: 18.836 - type: precision_at_5 value: 13.447000000000001 - type: recall_at_1 value: 28.047 - type: recall_at_10 value: 57.717 - type: recall_at_100 value: 82.182 - type: recall_at_1000 value: 95.82000000000001 - type: recall_at_3 value: 42.448 - type: recall_at_5 value: 49.071 - type: map_at_1 value: 27.861250000000005 - type: map_at_10 value: 37.529583333333335 - type: map_at_100 value: 38.7915 - type: map_at_1000 value: 38.90558333333335 - type: map_at_3 value: 34.57333333333333 - type: map_at_5 value: 36.187166666666656 - type: mrr_at_1 value: 32.88291666666666 - type: mrr_at_10 value: 41.79750000000001 - type: mrr_at_100 value: 42.63183333333333 - type: mrr_at_1000 value: 42.68483333333333 - type: mrr_at_3 value: 39.313750000000006 - type: mrr_at_5 value: 40.70483333333333 - type: ndcg_at_1 value: 32.88291666666666 - type: ndcg_at_10 value: 43.09408333333333 - type: ndcg_at_100 value: 48.22158333333333 - type: ndcg_at_1000 value: 50.358000000000004 - type: ndcg_at_3 value: 38.129583333333336 - type: ndcg_at_5 value: 40.39266666666666 - type: precision_at_1 value: 32.88291666666666 - type: precision_at_10 value: 7.5584999999999996 - type: precision_at_100 value: 1.1903333333333332 - type: precision_at_1000 value: 0.15658333333333332 - type: precision_at_3 value: 17.495916666666666 - type: precision_at_5 value: 12.373833333333332 - type: recall_at_1 value: 27.861250000000005 - type: recall_at_10 value: 55.215916666666665 - type: recall_at_100 value: 77.392 - type: recall_at_1000 value: 92.04908333333334 - type: recall_at_3 value: 41.37475 - type: recall_at_5 value: 47.22908333333333 - type: map_at_1 value: 25.064999999999998 - type: map_at_10 value: 31.635999999999996 - type: map_at_100 value: 32.596000000000004 - type: map_at_1000 value: 32.695 - type: map_at_3 value: 29.612 - type: map_at_5 value: 30.768 - type: mrr_at_1 value: 28.528 - type: mrr_at_10 value: 34.717 - type: mrr_at_100 value: 35.558 - type: mrr_at_1000 value: 35.626000000000005 - type: mrr_at_3 value: 32.745000000000005 - type: mrr_at_5 value: 33.819 - type: ndcg_at_1 value: 28.528 - type: ndcg_at_10 value: 35.647 - type: ndcg_at_100 value: 40.207 - type: ndcg_at_1000 value: 42.695 - type: ndcg_at_3 value: 31.878 - type: ndcg_at_5 value: 33.634 - type: precision_at_1 value: 28.528 - type: precision_at_10 value: 5.46 - type: precision_at_100 value: 0.84 - type: precision_at_1000 value: 0.11399999999999999 - type: precision_at_3 value: 13.547999999999998 - type: precision_at_5 value: 9.325 - type: recall_at_1 value: 25.064999999999998 - type: recall_at_10 value: 45.096000000000004 - type: recall_at_100 value: 65.658 - type: recall_at_1000 value: 84.128 - type: recall_at_3 value: 34.337 - type: recall_at_5 value: 38.849000000000004 - type: map_at_1 value: 17.276 - type: map_at_10 value: 24.535 - type: map_at_100 value: 25.655 - type: map_at_1000 value: 25.782 - type: map_at_3 value: 22.228 - type: map_at_5 value: 23.612 - type: mrr_at_1 value: 21.266 - type: mrr_at_10 value: 28.474 - type: mrr_at_100 value: 29.398000000000003 - type: mrr_at_1000 value: 29.482000000000003 - type: mrr_at_3 value: 26.245 - type: mrr_at_5 value: 27.624 - type: ndcg_at_1 value: 21.266 - type: ndcg_at_10 value: 29.087000000000003 - type: ndcg_at_100 value: 34.374 - type: ndcg_at_1000 value: 37.433 - type: ndcg_at_3 value: 25.040000000000003 - type: ndcg_at_5 value: 27.116 - type: precision_at_1 value: 21.266 - type: precision_at_10 value: 5.258 - type: precision_at_100 value: 0.9299999999999999 - type: precision_at_1000 value: 0.13699999999999998 - type: precision_at_3 value: 11.849 - type: precision_at_5 value: 8.699 - type: recall_at_1 value: 17.276 - type: recall_at_10 value: 38.928000000000004 - type: recall_at_100 value: 62.529 - type: recall_at_1000 value: 84.44800000000001 - type: recall_at_3 value: 27.554000000000002 - type: recall_at_5 value: 32.915 - type: map_at_1 value: 27.297 - type: map_at_10 value: 36.957 - type: map_at_100 value: 38.252 - type: map_at_1000 value: 38.356 - type: map_at_3 value: 34.121 - type: map_at_5 value: 35.782000000000004 - type: mrr_at_1 value: 32.275999999999996 - type: mrr_at_10 value: 41.198 - type: mrr_at_100 value: 42.131 - type: mrr_at_1000 value: 42.186 - type: mrr_at_3 value: 38.557 - type: mrr_at_5 value: 40.12 - type: ndcg_at_1 value: 32.275999999999996 - type: ndcg_at_10 value: 42.516 - type: ndcg_at_100 value: 48.15 - type: ndcg_at_1000 value: 50.344 - type: ndcg_at_3 value: 37.423 - type: ndcg_at_5 value: 39.919 - type: precision_at_1 value: 32.275999999999996 - type: precision_at_10 value: 7.155 - type: precision_at_100 value: 1.123 - type: precision_at_1000 value: 0.14200000000000002 - type: precision_at_3 value: 17.163999999999998 - type: precision_at_5 value: 12.127 - type: recall_at_1 value: 27.297 - type: recall_at_10 value: 55.238 - type: recall_at_100 value: 79.2 - type: recall_at_1000 value: 94.258 - type: recall_at_3 value: 41.327000000000005 - type: recall_at_5 value: 47.588 - type: map_at_1 value: 29.142000000000003 - type: map_at_10 value: 38.769 - type: map_at_100 value: 40.292 - type: map_at_1000 value: 40.510000000000005 - type: map_at_3 value: 35.39 - type: map_at_5 value: 37.009 - type: mrr_at_1 value: 34.19 - type: mrr_at_10 value: 43.418 - type: mrr_at_100 value: 44.132 - type: mrr_at_1000 value: 44.175 - type: mrr_at_3 value: 40.547 - type: mrr_at_5 value: 42.088 - type: ndcg_at_1 value: 34.19 - type: ndcg_at_10 value: 45.14 - type: ndcg_at_100 value: 50.364 - type: ndcg_at_1000 value: 52.481 - type: ndcg_at_3 value: 39.466 - type: ndcg_at_5 value: 41.772 - type: precision_at_1 value: 34.19 - type: precision_at_10 value: 8.715 - type: precision_at_100 value: 1.6150000000000002 - type: precision_at_1000 value: 0.247 - type: precision_at_3 value: 18.248 - type: precision_at_5 value: 13.161999999999999 - type: recall_at_1 value: 29.142000000000003 - type: recall_at_10 value: 57.577999999999996 - type: recall_at_100 value: 81.428 - type: recall_at_1000 value: 94.017 - type: recall_at_3 value: 41.402 - type: recall_at_5 value: 47.695 - type: map_at_1 value: 22.039 - type: map_at_10 value: 30.669999999999998 - type: map_at_100 value: 31.682 - type: map_at_1000 value: 31.794 - type: map_at_3 value: 28.139999999999997 - type: map_at_5 value: 29.457 - type: mrr_at_1 value: 24.399 - type: mrr_at_10 value: 32.687 - type: mrr_at_100 value: 33.622 - type: mrr_at_1000 value: 33.698 - type: mrr_at_3 value: 30.407 - type: mrr_at_5 value: 31.552999999999997 - type: ndcg_at_1 value: 24.399 - type: ndcg_at_10 value: 35.472 - type: ndcg_at_100 value: 40.455000000000005 - type: ndcg_at_1000 value: 43.15 - type: ndcg_at_3 value: 30.575000000000003 - type: ndcg_at_5 value: 32.668 - type: precision_at_1 value: 24.399 - type: precision_at_10 value: 5.656 - type: precision_at_100 value: 0.874 - type: precision_at_1000 value: 0.121 - type: precision_at_3 value: 13.062000000000001 - type: precision_at_5 value: 9.242 - type: recall_at_1 value: 22.039 - type: recall_at_10 value: 48.379 - type: recall_at_100 value: 71.11800000000001 - type: recall_at_1000 value: 91.095 - type: recall_at_3 value: 35.108 - type: recall_at_5 value: 40.015 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: climate-fever config: default split: test revision: None metrics: - type: map_at_1 value: 10.144 - type: map_at_10 value: 18.238 - type: map_at_100 value: 20.143 - type: map_at_1000 value: 20.346 - type: map_at_3 value: 14.809 - type: map_at_5 value: 16.567999999999998 - type: mrr_at_1 value: 22.671 - type: mrr_at_10 value: 34.906 - type: mrr_at_100 value: 35.858000000000004 - type: mrr_at_1000 value: 35.898 - type: mrr_at_3 value: 31.238 - type: mrr_at_5 value: 33.342 - type: ndcg_at_1 value: 22.671 - type: ndcg_at_10 value: 26.540000000000003 - type: ndcg_at_100 value: 34.138000000000005 - type: ndcg_at_1000 value: 37.72 - type: ndcg_at_3 value: 20.766000000000002 - type: ndcg_at_5 value: 22.927 - type: precision_at_1 value: 22.671 - type: precision_at_10 value: 8.619 - type: precision_at_100 value: 1.678 - type: precision_at_1000 value: 0.23500000000000001 - type: precision_at_3 value: 15.592 - type: precision_at_5 value: 12.43 - type: recall_at_1 value: 10.144 - type: recall_at_10 value: 33.46 - type: recall_at_100 value: 59.758 - type: recall_at_1000 value: 79.704 - type: recall_at_3 value: 19.604 - type: recall_at_5 value: 25.367 - task: type: Retrieval dataset: name: MTEB DBPedia type: dbpedia-entity config: default split: test revision: None metrics: - type: map_at_1 value: 8.654 - type: map_at_10 value: 18.506 - type: map_at_100 value: 26.412999999999997 - type: map_at_1000 value: 28.13 - type: map_at_3 value: 13.379 - type: map_at_5 value: 15.529000000000002 - type: mrr_at_1 value: 66.0 - type: mrr_at_10 value: 74.13 - type: mrr_at_100 value: 74.48700000000001 - type: mrr_at_1000 value: 74.49799999999999 - type: mrr_at_3 value: 72.75 - type: mrr_at_5 value: 73.762 - type: ndcg_at_1 value: 54.50000000000001 - type: ndcg_at_10 value: 40.236 - type: ndcg_at_100 value: 44.690999999999995 - type: ndcg_at_1000 value: 52.195 - type: ndcg_at_3 value: 45.632 - type: ndcg_at_5 value: 42.952 - type: precision_at_1 value: 66.0 - type: precision_at_10 value: 31.724999999999998 - type: precision_at_100 value: 10.299999999999999 - type: precision_at_1000 value: 2.194 - type: precision_at_3 value: 48.75 - type: precision_at_5 value: 41.6 - type: recall_at_1 value: 8.654 - type: recall_at_10 value: 23.74 - type: recall_at_100 value: 50.346999999999994 - type: recall_at_1000 value: 74.376 - type: recall_at_3 value: 14.636 - type: recall_at_5 value: 18.009 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 53.245 - type: f1 value: 48.74520523753552 - task: type: Retrieval dataset: name: MTEB FEVER type: fever config: default split: test revision: None metrics: - type: map_at_1 value: 51.729 - type: map_at_10 value: 63.904 - type: map_at_100 value: 64.363 - type: map_at_1000 value: 64.38199999999999 - type: map_at_3 value: 61.393 - type: map_at_5 value: 63.02100000000001 - type: mrr_at_1 value: 55.686 - type: mrr_at_10 value: 67.804 - type: mrr_at_100 value: 68.15299999999999 - type: mrr_at_1000 value: 68.161 - type: mrr_at_3 value: 65.494 - type: mrr_at_5 value: 67.01599999999999 - type: ndcg_at_1 value: 55.686 - type: ndcg_at_10 value: 70.025 - type: ndcg_at_100 value: 72.011 - type: ndcg_at_1000 value: 72.443 - type: ndcg_at_3 value: 65.32900000000001 - type: ndcg_at_5 value: 68.05600000000001 - type: precision_at_1 value: 55.686 - type: precision_at_10 value: 9.358 - type: precision_at_100 value: 1.05 - type: precision_at_1000 value: 0.11 - type: precision_at_3 value: 26.318 - type: precision_at_5 value: 17.321 - type: recall_at_1 value: 51.729 - type: recall_at_10 value: 85.04 - type: recall_at_100 value: 93.777 - type: recall_at_1000 value: 96.824 - type: recall_at_3 value: 72.521 - type: recall_at_5 value: 79.148 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: fiqa config: default split: test revision: None metrics: - type: map_at_1 value: 23.765 - type: map_at_10 value: 39.114 - type: map_at_100 value: 40.987 - type: map_at_1000 value: 41.155 - type: map_at_3 value: 34.028000000000006 - type: map_at_5 value: 36.925000000000004 - type: mrr_at_1 value: 46.451 - type: mrr_at_10 value: 54.711 - type: mrr_at_100 value: 55.509 - type: mrr_at_1000 value: 55.535000000000004 - type: mrr_at_3 value: 52.649 - type: mrr_at_5 value: 53.729000000000006 - type: ndcg_at_1 value: 46.451 - type: ndcg_at_10 value: 46.955999999999996 - type: ndcg_at_100 value: 53.686 - type: ndcg_at_1000 value: 56.230000000000004 - type: ndcg_at_3 value: 43.374 - type: ndcg_at_5 value: 44.372 - type: precision_at_1 value: 46.451 - type: precision_at_10 value: 13.256 - type: precision_at_100 value: 2.019 - type: precision_at_1000 value: 0.247 - type: precision_at_3 value: 29.115000000000002 - type: precision_at_5 value: 21.389 - type: recall_at_1 value: 23.765 - type: recall_at_10 value: 53.452999999999996 - type: recall_at_100 value: 78.828 - type: recall_at_1000 value: 93.938 - type: recall_at_3 value: 39.023 - type: recall_at_5 value: 45.18 - task: type: Retrieval dataset: name: MTEB HotpotQA type: hotpotqa config: default split: test revision: None metrics: - type: map_at_1 value: 31.918000000000003 - type: map_at_10 value: 46.741 - type: map_at_100 value: 47.762 - type: map_at_1000 value: 47.849000000000004 - type: map_at_3 value: 43.578 - type: map_at_5 value: 45.395 - type: mrr_at_1 value: 63.834999999999994 - type: mrr_at_10 value: 71.312 - type: mrr_at_100 value: 71.695 - type: mrr_at_1000 value: 71.714 - type: mrr_at_3 value: 69.82000000000001 - type: mrr_at_5 value: 70.726 - type: ndcg_at_1 value: 63.834999999999994 - type: ndcg_at_10 value: 55.879999999999995 - type: ndcg_at_100 value: 59.723000000000006 - type: ndcg_at_1000 value: 61.49400000000001 - type: ndcg_at_3 value: 50.964 - type: ndcg_at_5 value: 53.47 - type: precision_at_1 value: 63.834999999999994 - type: precision_at_10 value: 11.845 - type: precision_at_100 value: 1.4869999999999999 - type: precision_at_1000 value: 0.172 - type: precision_at_3 value: 32.158 - type: precision_at_5 value: 21.278 - type: recall_at_1 value: 31.918000000000003 - type: recall_at_10 value: 59.223000000000006 - type: recall_at_100 value: 74.328 - type: recall_at_1000 value: 86.05000000000001 - type: recall_at_3 value: 48.238 - type: recall_at_5 value: 53.193999999999996 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 79.7896 - type: ap value: 73.65166029460288 - type: f1 value: 79.71794693711813 - task: type: Retrieval dataset: name: MTEB MSMARCO type: msmarco config: default split: dev revision: None metrics: - type: map_at_1 value: 22.239 - type: map_at_10 value: 34.542 - type: map_at_100 value: 35.717999999999996 - type: map_at_1000 value: 35.764 - type: map_at_3 value: 30.432 - type: map_at_5 value: 32.81 - type: mrr_at_1 value: 22.908 - type: mrr_at_10 value: 35.127 - type: mrr_at_100 value: 36.238 - type: mrr_at_1000 value: 36.278 - type: mrr_at_3 value: 31.076999999999998 - type: mrr_at_5 value: 33.419 - type: ndcg_at_1 value: 22.908 - type: ndcg_at_10 value: 41.607 - type: ndcg_at_100 value: 47.28 - type: ndcg_at_1000 value: 48.414 - type: ndcg_at_3 value: 33.253 - type: ndcg_at_5 value: 37.486000000000004 - type: precision_at_1 value: 22.908 - type: precision_at_10 value: 6.645 - type: precision_at_100 value: 0.9490000000000001 - type: precision_at_1000 value: 0.105 - type: precision_at_3 value: 14.130999999999998 - type: precision_at_5 value: 10.616 - type: recall_at_1 value: 22.239 - type: recall_at_10 value: 63.42 - type: recall_at_100 value: 89.696 - type: recall_at_1000 value: 98.351 - type: recall_at_3 value: 40.77 - type: recall_at_5 value: 50.93 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 95.06839945280439 - type: f1 value: 94.74276398224072 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 72.25718194254446 - type: f1 value: 53.91164489161391 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.47948890383323 - type: f1 value: 69.98520247230257 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 76.46603900470748 - type: f1 value: 76.44111526065399 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 33.19106070798198 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 30.78772205248094 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 31.811231631488507 - type: mrr value: 32.98200485378021 - task: type: Retrieval dataset: name: MTEB NFCorpus type: nfcorpus config: default split: test revision: None metrics: - type: map_at_1 value: 6.9 - type: map_at_10 value: 13.703000000000001 - type: map_at_100 value: 17.251 - type: map_at_1000 value: 18.795 - type: map_at_3 value: 10.366999999999999 - type: map_at_5 value: 11.675 - type: mrr_at_1 value: 47.059 - type: mrr_at_10 value: 55.816 - type: mrr_at_100 value: 56.434 - type: mrr_at_1000 value: 56.467 - type: mrr_at_3 value: 53.973000000000006 - type: mrr_at_5 value: 55.257999999999996 - type: ndcg_at_1 value: 44.737 - type: ndcg_at_10 value: 35.997 - type: ndcg_at_100 value: 33.487 - type: ndcg_at_1000 value: 41.897 - type: ndcg_at_3 value: 41.18 - type: ndcg_at_5 value: 38.721 - type: precision_at_1 value: 46.129999999999995 - type: precision_at_10 value: 26.533 - type: precision_at_100 value: 8.706 - type: precision_at_1000 value: 2.16 - type: precision_at_3 value: 38.493 - type: precision_at_5 value: 33.189 - type: recall_at_1 value: 6.9 - type: recall_at_10 value: 17.488999999999997 - type: recall_at_100 value: 34.583000000000006 - type: recall_at_1000 value: 64.942 - type: recall_at_3 value: 11.494 - type: recall_at_5 value: 13.496 - task: type: Retrieval dataset: name: MTEB NQ type: nq config: default split: test revision: None metrics: - type: map_at_1 value: 33.028999999999996 - type: map_at_10 value: 49.307 - type: map_at_100 value: 50.205 - type: map_at_1000 value: 50.23 - type: map_at_3 value: 44.782 - type: map_at_5 value: 47.599999999999994 - type: mrr_at_1 value: 37.108999999999995 - type: mrr_at_10 value: 51.742999999999995 - type: mrr_at_100 value: 52.405 - type: mrr_at_1000 value: 52.422000000000004 - type: mrr_at_3 value: 48.087999999999994 - type: mrr_at_5 value: 50.414 - type: ndcg_at_1 value: 37.08 - type: ndcg_at_10 value: 57.236 - type: ndcg_at_100 value: 60.931999999999995 - type: ndcg_at_1000 value: 61.522 - type: ndcg_at_3 value: 48.93 - type: ndcg_at_5 value: 53.561 - type: precision_at_1 value: 37.08 - type: precision_at_10 value: 9.386 - type: precision_at_100 value: 1.1480000000000001 - type: precision_at_1000 value: 0.12 - type: precision_at_3 value: 22.258 - type: precision_at_5 value: 16.025 - type: recall_at_1 value: 33.028999999999996 - type: recall_at_10 value: 78.805 - type: recall_at_100 value: 94.643 - type: recall_at_1000 value: 99.039 - type: recall_at_3 value: 57.602 - type: recall_at_5 value: 68.253 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: quora config: default split: test revision: None metrics: - type: map_at_1 value: 71.122 - type: map_at_10 value: 85.237 - type: map_at_100 value: 85.872 - type: map_at_1000 value: 85.885 - type: map_at_3 value: 82.27499999999999 - type: map_at_5 value: 84.13199999999999 - type: mrr_at_1 value: 81.73 - type: mrr_at_10 value: 87.834 - type: mrr_at_100 value: 87.92 - type: mrr_at_1000 value: 87.921 - type: mrr_at_3 value: 86.878 - type: mrr_at_5 value: 87.512 - type: ndcg_at_1 value: 81.73 - type: ndcg_at_10 value: 88.85499999999999 - type: ndcg_at_100 value: 89.992 - type: ndcg_at_1000 value: 90.07 - type: ndcg_at_3 value: 85.997 - type: ndcg_at_5 value: 87.55199999999999 - type: precision_at_1 value: 81.73 - type: precision_at_10 value: 13.491 - type: precision_at_100 value: 1.536 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 37.623 - type: precision_at_5 value: 24.742 - type: recall_at_1 value: 71.122 - type: recall_at_10 value: 95.935 - type: recall_at_100 value: 99.657 - type: recall_at_1000 value: 99.996 - type: recall_at_3 value: 87.80799999999999 - type: recall_at_5 value: 92.161 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 63.490029238193756 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 65.13153408508836 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: scidocs config: default split: test revision: None metrics: - type: map_at_1 value: 4.202999999999999 - type: map_at_10 value: 10.174 - type: map_at_100 value: 12.138 - type: map_at_1000 value: 12.418 - type: map_at_3 value: 7.379 - type: map_at_5 value: 8.727 - type: mrr_at_1 value: 20.7 - type: mrr_at_10 value: 30.389 - type: mrr_at_100 value: 31.566 - type: mrr_at_1000 value: 31.637999999999998 - type: mrr_at_3 value: 27.133000000000003 - type: mrr_at_5 value: 29.078 - type: ndcg_at_1 value: 20.7 - type: ndcg_at_10 value: 17.355999999999998 - type: ndcg_at_100 value: 25.151 - type: ndcg_at_1000 value: 30.37 - type: ndcg_at_3 value: 16.528000000000002 - type: ndcg_at_5 value: 14.396999999999998 - type: precision_at_1 value: 20.7 - type: precision_at_10 value: 8.98 - type: precision_at_100 value: 2.015 - type: precision_at_1000 value: 0.327 - type: precision_at_3 value: 15.367 - type: precision_at_5 value: 12.559999999999999 - type: recall_at_1 value: 4.202999999999999 - type: recall_at_10 value: 18.197 - type: recall_at_100 value: 40.903 - type: recall_at_1000 value: 66.427 - type: recall_at_3 value: 9.362 - type: recall_at_5 value: 12.747 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_spearman value: 81.69890989765257 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_spearman value: 75.31953790551489 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_spearman value: 87.44050861280759 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_spearman value: 81.86922869270393 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_spearman value: 88.9399170304284 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_spearman value: 85.38015314088582 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_spearman value: 90.53653527788835 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_spearman value: 68.64526474250209 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_spearman value: 86.56156983963042 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 79.48610254648003 - type: mrr value: 94.02481505422682 - task: type: Retrieval dataset: name: MTEB SciFact type: scifact config: default split: test revision: None metrics: - type: map_at_1 value: 48.983 - type: map_at_10 value: 59.077999999999996 - type: map_at_100 value: 59.536 - type: map_at_1000 value: 59.575 - type: map_at_3 value: 55.691 - type: map_at_5 value: 57.410000000000004 - type: mrr_at_1 value: 51.666999999999994 - type: mrr_at_10 value: 60.427 - type: mrr_at_100 value: 60.763 - type: mrr_at_1000 value: 60.79900000000001 - type: mrr_at_3 value: 57.556 - type: mrr_at_5 value: 59.089000000000006 - type: ndcg_at_1 value: 51.666999999999994 - type: ndcg_at_10 value: 64.559 - type: ndcg_at_100 value: 66.58 - type: ndcg_at_1000 value: 67.64 - type: ndcg_at_3 value: 58.287 - type: ndcg_at_5 value: 61.001000000000005 - type: precision_at_1 value: 51.666999999999994 - type: precision_at_10 value: 9.067 - type: precision_at_100 value: 1.0170000000000001 - type: precision_at_1000 value: 0.11100000000000002 - type: precision_at_3 value: 23.0 - type: precision_at_5 value: 15.6 - type: recall_at_1 value: 48.983 - type: recall_at_10 value: 80.289 - type: recall_at_100 value: 89.43299999999999 - type: recall_at_1000 value: 97.667 - type: recall_at_3 value: 62.978 - type: recall_at_5 value: 69.872 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.79009900990098 - type: cos_sim_ap value: 94.94115052608419 - type: cos_sim_f1 value: 89.1260162601626 - type: cos_sim_precision value: 90.599173553719 - type: cos_sim_recall value: 87.7 - type: dot_accuracy value: 99.79009900990098 - type: dot_ap value: 94.94115052608419 - type: dot_f1 value: 89.1260162601626 - type: dot_precision value: 90.599173553719 - type: dot_recall value: 87.7 - type: euclidean_accuracy value: 99.79009900990098 - type: euclidean_ap value: 94.94115052608419 - type: euclidean_f1 value: 89.1260162601626 - type: euclidean_precision value: 90.599173553719 - type: euclidean_recall value: 87.7 - type: manhattan_accuracy value: 99.7940594059406 - type: manhattan_ap value: 94.95271414642431 - type: manhattan_f1 value: 89.24508790072387 - type: manhattan_precision value: 92.3982869379015 - type: manhattan_recall value: 86.3 - type: max_accuracy value: 99.7940594059406 - type: max_ap value: 94.95271414642431 - type: max_f1 value: 89.24508790072387 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 68.43866571935851 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 35.16579026551532 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 52.518952473513934 - type: mrr value: 53.292457134368895 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 31.12529588316604 - type: cos_sim_spearman value: 32.31662126895294 - type: dot_pearson value: 31.125303796647056 - type: dot_spearman value: 32.31662126895294 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: trec-covid config: default split: test revision: None metrics: - type: map_at_1 value: 0.219 - type: map_at_10 value: 1.7469999999999999 - type: map_at_100 value: 10.177999999999999 - type: map_at_1000 value: 26.108999999999998 - type: map_at_3 value: 0.64 - type: map_at_5 value: 0.968 - type: mrr_at_1 value: 82.0 - type: mrr_at_10 value: 89.067 - type: mrr_at_100 value: 89.067 - type: mrr_at_1000 value: 89.067 - type: mrr_at_3 value: 88.333 - type: mrr_at_5 value: 88.73299999999999 - type: ndcg_at_1 value: 78.0 - type: ndcg_at_10 value: 71.398 - type: ndcg_at_100 value: 55.574999999999996 - type: ndcg_at_1000 value: 51.771 - type: ndcg_at_3 value: 77.765 - type: ndcg_at_5 value: 73.614 - type: precision_at_1 value: 82.0 - type: precision_at_10 value: 75.4 - type: precision_at_100 value: 58.040000000000006 - type: precision_at_1000 value: 23.516000000000002 - type: precision_at_3 value: 84.0 - type: precision_at_5 value: 78.4 - type: recall_at_1 value: 0.219 - type: recall_at_10 value: 1.958 - type: recall_at_100 value: 13.797999999999998 - type: recall_at_1000 value: 49.881 - type: recall_at_3 value: 0.672 - type: recall_at_5 value: 1.0370000000000001 - task: type: Retrieval dataset: name: MTEB Touche2020 type: webis-touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 1.8610000000000002 - type: map_at_10 value: 8.705 - type: map_at_100 value: 15.164 - type: map_at_1000 value: 16.78 - type: map_at_3 value: 4.346 - type: map_at_5 value: 6.151 - type: mrr_at_1 value: 22.448999999999998 - type: mrr_at_10 value: 41.556 - type: mrr_at_100 value: 42.484 - type: mrr_at_1000 value: 42.494 - type: mrr_at_3 value: 37.755 - type: mrr_at_5 value: 40.102 - type: ndcg_at_1 value: 21.429000000000002 - type: ndcg_at_10 value: 23.439 - type: ndcg_at_100 value: 36.948 - type: ndcg_at_1000 value: 48.408 - type: ndcg_at_3 value: 22.261 - type: ndcg_at_5 value: 23.085 - type: precision_at_1 value: 22.448999999999998 - type: precision_at_10 value: 21.633 - type: precision_at_100 value: 8.02 - type: precision_at_1000 value: 1.5939999999999999 - type: precision_at_3 value: 23.810000000000002 - type: precision_at_5 value: 24.490000000000002 - type: recall_at_1 value: 1.8610000000000002 - type: recall_at_10 value: 15.876000000000001 - type: recall_at_100 value: 50.300999999999995 - type: recall_at_1000 value: 86.098 - type: recall_at_3 value: 5.892 - type: recall_at_5 value: 9.443 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 70.3264 - type: ap value: 13.249577616243794 - type: f1 value: 53.621518367695685 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 61.57611771363894 - type: f1 value: 61.79797478568639 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 53.38315344479284 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 87.55438993860642 - type: cos_sim_ap value: 77.98702600017738 - type: cos_sim_f1 value: 71.94971653931476 - type: cos_sim_precision value: 67.50693802035153 - type: cos_sim_recall value: 77.01846965699208 - type: dot_accuracy value: 87.55438993860642 - type: dot_ap value: 77.98702925907986 - type: dot_f1 value: 71.94971653931476 - type: dot_precision value: 67.50693802035153 - type: dot_recall value: 77.01846965699208 - type: euclidean_accuracy value: 87.55438993860642 - type: euclidean_ap value: 77.98702951957925 - type: euclidean_f1 value: 71.94971653931476 - type: euclidean_precision value: 67.50693802035153 - type: euclidean_recall value: 77.01846965699208 - type: manhattan_accuracy value: 87.54246885617214 - type: manhattan_ap value: 77.95531413902947 - type: manhattan_f1 value: 71.93605683836589 - type: manhattan_precision value: 69.28152492668622 - type: manhattan_recall value: 74.80211081794195 - type: max_accuracy value: 87.55438993860642 - type: max_ap value: 77.98702951957925 - type: max_f1 value: 71.94971653931476 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 89.47296930182016 - type: cos_sim_ap value: 86.92853616302108 - type: cos_sim_f1 value: 79.35138351681047 - type: cos_sim_precision value: 76.74820143884892 - type: cos_sim_recall value: 82.13735756082538 - type: dot_accuracy value: 89.47296930182016 - type: dot_ap value: 86.92854339601595 - type: dot_f1 value: 79.35138351681047 - type: dot_precision value: 76.74820143884892 - type: dot_recall value: 82.13735756082538 - type: euclidean_accuracy value: 89.47296930182016 - type: euclidean_ap value: 86.92854191061649 - type: euclidean_f1 value: 79.35138351681047 - type: euclidean_precision value: 76.74820143884892 - type: euclidean_recall value: 82.13735756082538 - type: manhattan_accuracy value: 89.47685023479644 - type: manhattan_ap value: 86.90063722679578 - type: manhattan_f1 value: 79.30753865502702 - type: manhattan_precision value: 76.32066068631639 - type: manhattan_recall value: 82.53772713273791 - type: max_accuracy value: 89.47685023479644 - type: max_ap value: 86.92854339601595 - type: max_f1 value: 79.35138351681047 --- # hkunlp/instructor-xl We introduce **Instructor**👨‍🏫, an instruction-finetuned text embedding model that can generate text embeddings tailored to any task (e.g., classification, retrieval, clustering, text evaluation, etc.) and domains (e.g., science, finance, etc.) ***by simply providing the task instruction, without any finetuning***. Instructor👨‍ achieves sota on 70 diverse embedding tasks! The model is easy to use with **our customized** `sentence-transformer` library. For more details, check out [our paper](https://arxiv.org/abs/2212.09741) and [project page](https://instructor-embedding.github.io/)! **************************** **Updates** **************************** * 01/21: We released a new [checkpoint](https://huggingface.co/hkunlp/instructor-xl) trained with hard negatives, which gives better performance. * 12/21: We released our [paper](https://arxiv.org/abs/2212.09741), [code](https://github.com/HKUNLP/instructor-embedding), [checkpoint](https://huggingface.co/hkunlp/instructor-xl) and [project page](https://instructor-embedding.github.io/)! Check them out! ## Quick start <hr /> ## Installation ```bash pip install InstructorEmbedding ``` ## Compute your customized embeddings Then you can use the model like this to calculate domain-specific and task-aware embeddings: ```python from InstructorEmbedding import INSTRUCTOR model = INSTRUCTOR('hkunlp/instructor-xl') sentence = "3D ActionSLAM: wearable person tracking in multi-floor environments" instruction = "Represent the Science title:" embeddings = model.encode([[instruction,sentence]]) print(embeddings) ``` ## Use cases <hr /> ## Calculate embeddings for your customized texts If you want to calculate customized embeddings for specific sentences, you may follow the unified template to write instructions: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Represent the `domain` `text_type` for `task_objective`: * `domain` is optional, and it specifies the domain of the text, e.g., science, finance, medicine, etc. * `text_type` is required, and it specifies the encoding unit, e.g., sentence, document, paragraph, etc. * `task_objective` is optional, and it specifies the objective of embedding, e.g., retrieve a document, classify the sentence, etc. ## Calculate Sentence similarities You can further use the model to compute similarities between two groups of sentences, with **customized embeddings**. ```python from sklearn.metrics.pairwise import cosine_similarity sentences_a = [['Represent the Science sentence: ','Parton energy loss in QCD matter'], ['Represent the Financial statement: ','The Federal Reserve on Wednesday raised its benchmark interest rate.']] sentences_b = [['Represent the Science sentence: ','The Chiral Phase Transition in Dissipative Dynamics'], ['Represent the Financial statement: ','The funds rose less than 0.5 per cent on Friday']] embeddings_a = model.encode(sentences_a) embeddings_b = model.encode(sentences_b) similarities = cosine_similarity(embeddings_a,embeddings_b) print(similarities) ``` ## Information Retrieval You can also use **customized embeddings** for information retrieval. ```python import numpy as np from sklearn.metrics.pairwise import cosine_similarity query = [['Represent the Wikipedia question for retrieving supporting documents: ','where is the food stored in a yam plant']] corpus = [['Represent the Wikipedia document for retrieval: ','Capitalism has been dominant in the Western world since the end of feudalism, but most feel[who?] that the term "mixed economies" more precisely describes most contemporary economies, due to their containing both private-owned and state-owned enterprises. In capitalism, prices determine the demand-supply scale. For example, higher demand for certain goods and services lead to higher prices and lower demand for certain goods lead to lower prices.'], ['Represent the Wikipedia document for retrieval: ',"The disparate impact theory is especially controversial under the Fair Housing Act because the Act regulates many activities relating to housing, insurance, and mortgage loans—and some scholars have argued that the theory's use under the Fair Housing Act, combined with extensions of the Community Reinvestment Act, contributed to rise of sub-prime lending and the crash of the U.S. housing market and ensuing global economic recession"], ['Represent the Wikipedia document for retrieval: ','Disparate impact in United States labor law refers to practices in employment, housing, and other areas that adversely affect one group of people of a protected characteristic more than another, even though rules applied by employers or landlords are formally neutral. Although the protected classes vary by statute, most federal civil rights laws protect based on race, color, religion, national origin, and sex as protected traits, and some laws include disability status and other traits as well.']] query_embeddings = model.encode(query) corpus_embeddings = model.encode(corpus) similarities = cosine_similarity(query_embeddings,corpus_embeddings) retrieved_doc_id = np.argmax(similarities) print(retrieved_doc_id) ``` ## Clustering Use **customized embeddings** for clustering texts in groups. ```python import sklearn.cluster sentences = [['Represent the Medicine sentence for clustering: ','Dynamical Scalar Degree of Freedom in Horava-Lifshitz Gravity'], ['Represent the Medicine sentence for clustering: ','Comparison of Atmospheric Neutrino Flux Calculations at Low Energies'], ['Represent the Medicine sentence for clustering: ','Fermion Bags in the Massive Gross-Neveu Model'], ['Represent the Medicine sentence for clustering: ',"QCD corrections to Associated t-tbar-H production at the Tevatron"], ['Represent the Medicine sentence for clustering: ','A New Analysis of the R Measurements: Resonance Parameters of the Higher, Vector States of Charmonium']] embeddings = model.encode(sentences) clustering_model = sklearn.cluster.MiniBatchKMeans(n_clusters=2) clustering_model.fit(embeddings) cluster_assignment = clustering_model.labels_ print(cluster_assignment) ```
[ "BIOSSES", "SCIFACT" ]
Tweeties/tweety-tatar-hydra-mt-7b-v24a
Tweeties
text-generation
[ "transformers", "safetensors", "llama_hydra", "text-generation", "tweety", "custom_code", "tt", "en", "de", "fr", "zh", "pt", "nl", "ru", "ko", "it", "es", "dataset:oscar-corpus/OSCAR-2301", "arxiv:2408.04303", "base_model:Unbabel/TowerInstruct-7B-v0.1", "base_model:finetune:Unbabel/TowerInstruct-7B-v0.1", "license:cc-by-nc-4.0", "autotrain_compatible", "region:us" ]
2024-04-12T14:59:30Z
2024-08-09T08:59:54+00:00
14
0
--- base_model: Unbabel/TowerInstruct-7B-v0.1 datasets: - oscar-corpus/OSCAR-2301 language: - tt - en - de - fr - zh - pt - nl - ru - ko - it - es license: cc-by-nc-4.0 tags: - tweety --- <img align="right" src="https://huggingface.co/Tweeties/tweety-tatar-base-7b-2024-v1/resolve/main/TweetyTatar.png?download=true" alt="Tweety-Tatar-7B: A Tatar Large Language Model" width="20%"> # Tweety Tatar / Hydra-MT 7b / 2024-v1 ## Model description This model is our Hydra LLM for the [Tatar language](https://en.wikipedia.org/wiki/Tatar_language), converted from the [TowerInstruct-7b-v0.1](https://huggingface.co/Unbabel/TowerInstruct-7B-v0.1) model trained by Unbabel, via [our Hydra-Base model](https://huggingface.co/Tweeties/tweety-tatar-hydra-base-7b-2024-v1). Hydra LLMs are trans-tokenized language models finetuned to produce output in a particular language, while accepting input encoded using either their own tokenizer, the one of their base model, or a mix of both. This enables them to receive code-switched input in both their native language and other languages, which is an ideal setup for translation tasks, or retrieval-augmented generation (RAG) in cross-lingual scenarios (see [our Hydra-Base model](https://huggingface.co/Tweeties/tweety-tatar-hydra-base-7b-2024-v1)). - **Developed by:** [François Remy](https://huggingface.co/FremyCompany) (UGent), [Alfiya Khabibullina](https://huggingface.co/justalphie) (BeCode), [et al.](#citation) - **Funded by:** IDLab / GPULab - **Model type:** Foundation model using the mistral architecture - **Language(s) (NLP):** Tatar - **License:** Creative Commons Attribution Non Commercial 4.0 ## In-scope usage This model can be used as-is or finetuned into a machine translation system from one of the 10 languages supported by TowerInstruct into the Tatar language. This list of languages nobably includes English and Russian. The model performs best when translating sentences or small paragraphs, and is not suited for document translation tasks. This model should not be used in the reverse direction, to translate Tatar into English. While the system is finetuned for translation, enabling beam search provides better results. Take note of the non-commercial license imposed by Unbabel on the base model, which also applies to this model. ## Usage instructions Using this model usually requires building the prompts by mixing tokens from two tokenizers, the original TowerInstruct tokenizer for input in the source language, and the new Tatar tokenizer for the prompt and output, as described in the examples below: ```py import re import torch import torch.nn as nn import transformers MODEL_NAME = "Tweeties/tweety-tatar-hydra-mt-7b-2024-v1" MAIN_TOKENIZER_NAME = "Tweeties/tweety-tatar-hydra-mt-7b-2024-v1" UTIL_TOKENIZER_NAME = "Unbabel/TowerInstruct-7B-v0.1" model = transformers.AutoModelForCausalLM.from_pretrained(MODEL_NAME, trust_remote_code=True) main_tokenizer = transformers.LlamaTokenizerFast.from_pretrained(MAIN_TOKENIZER_NAME) util_tokenizer = transformers.LlamaTokenizerFast.from_pretrained(UTIL_TOKENIZER_NAME) main_tokenizer_len = len(main_tokenizer) ``` ### Machine Translation ```py def translate_english_text(english_text: str) -> str: # craft the input input_ids = torch.concat([ main_tokenizer.encode(f"Түбәндәге текстны инглиз теленнән татар теленә тәрҗемә итегез:\n", return_tensors='pt'), util_tokenizer.encode(f"{english_text}", add_special_tokens=False, return_tensors='pt') + torch.tensor([main_tokenizer_len]), main_tokenizer.encode(f"\nТекстны татар теленә тәрҗемә итү:\n", add_special_tokens=False, return_tensors='pt') ], axis=1) # prevent the model from repeating the prompt prompt_starts = [ main_tokenizer.encode("Түбәндәге"), main_tokenizer.encode("\nТүбәндәге")[2:], main_tokenizer.encode("Текстны"), main_tokenizer.encode("\nТекстны")[2:] ] # genereate the output model_inputs = {'input_ids':input_ids.to(model.device)} model_outputs = model.generate( **model_inputs, max_new_tokens=128, num_beams=8, no_repeat_ngram_size=6, early_stopping=False, pad_token_id=main_tokenizer.eos_token_id, eos_token_id=main_tokenizer.convert_tokens_to_ids(['<0x0A>','</s>']), bad_words_ids=prompt_starts ) # decode the output return (main_tokenizer.decode(model_outputs[0][input_ids.shape[1]:])) translate_english_text("The city of Paris is very pretty.") # Париж шәһәре бик матур. ``` ## Citation If you use this model, please cite our work as: ``` @article{tweeties2024, title = {Trans-Tokenization and Cross-lingual Vocabulary Transfers: Language Adaptation of LLMs for Low-Resource NLP}, author = {François Remy and Pieter Delobelle and Hayastan Avetisyan and Alfiya Khabibullina and Miryam de Lhoneux and Thomas Demeester}, url = {https://arxiv.org/abs/2408.04303}, year = {2024}, note = {Accepted at COLM 2024} } ```
[ "CRAFT" ]
ahmet1338/finetuned_embedder
ahmet1338
sentence-similarity
[ "sentence-transformers", "pytorch", "t5", "text-embedding", "embeddings", "information-retrieval", "beir", "text-classification", "language-model", "text-clustering", "text-semantic-similarity", "text-evaluation", "prompt-retrieval", "text-reranking", "feature-extraction", "sentence-similarity", "transformers", "English", "Sentence Similarity", "natural_questions", "ms_marco", "fever", "hotpot_qa", "mteb", "en", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "region:us" ]
2024-04-22T08:44:42Z
2024-04-22T08:45:59+00:00
14
0
--- language: en license: apache-2.0 pipeline_tag: sentence-similarity tags: - text-embedding - embeddings - information-retrieval - beir - text-classification - language-model - text-clustering - text-semantic-similarity - text-evaluation - prompt-retrieval - text-reranking - sentence-transformers - feature-extraction - sentence-similarity - transformers - t5 - English - Sentence Similarity - natural_questions - ms_marco - fever - hotpot_qa - mteb inference: false model-index: - name: INSTRUCTOR results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 88.13432835820896 - type: ap value: 59.298209334395665 - type: f1 value: 83.31769058643586 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 91.526375 - type: ap value: 88.16327709705504 - type: f1 value: 91.51095801287843 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 47.856 - type: f1 value: 45.41490917650942 - task: type: Retrieval dataset: name: MTEB ArguAna type: arguana config: default split: test revision: None metrics: - type: map_at_1 value: 31.223 - type: map_at_10 value: 47.947 - type: map_at_100 value: 48.742000000000004 - type: map_at_1000 value: 48.745 - type: map_at_3 value: 43.137 - type: map_at_5 value: 45.992 - type: mrr_at_1 value: 32.432 - type: mrr_at_10 value: 48.4 - type: mrr_at_100 value: 49.202 - type: mrr_at_1000 value: 49.205 - type: mrr_at_3 value: 43.551 - type: mrr_at_5 value: 46.467999999999996 - type: ndcg_at_1 value: 31.223 - type: ndcg_at_10 value: 57.045 - type: ndcg_at_100 value: 60.175 - type: ndcg_at_1000 value: 60.233000000000004 - type: ndcg_at_3 value: 47.171 - type: ndcg_at_5 value: 52.322 - type: precision_at_1 value: 31.223 - type: precision_at_10 value: 8.599 - type: precision_at_100 value: 0.991 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 19.63 - type: precision_at_5 value: 14.282 - type: recall_at_1 value: 31.223 - type: recall_at_10 value: 85.989 - type: recall_at_100 value: 99.075 - type: recall_at_1000 value: 99.502 - type: recall_at_3 value: 58.89 - type: recall_at_5 value: 71.408 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 43.1621946393635 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 32.56417132407894 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 64.29539304390207 - type: mrr value: 76.44484017060196 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_spearman value: 84.38746499431112 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 78.51298701298701 - type: f1 value: 77.49041754069235 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 37.61848554098577 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 31.32623280148178 - task: type: Retrieval dataset: name: MTEB CQADupstackAndroidRetrieval type: BeIR/cqadupstack config: default split: test revision: None metrics: - type: map_at_1 value: 35.803000000000004 - type: map_at_10 value: 48.848 - type: map_at_100 value: 50.5 - type: map_at_1000 value: 50.602999999999994 - type: map_at_3 value: 45.111000000000004 - type: map_at_5 value: 47.202 - type: mrr_at_1 value: 44.635000000000005 - type: mrr_at_10 value: 55.593 - type: mrr_at_100 value: 56.169999999999995 - type: mrr_at_1000 value: 56.19499999999999 - type: mrr_at_3 value: 53.361999999999995 - type: mrr_at_5 value: 54.806999999999995 - type: ndcg_at_1 value: 44.635000000000005 - type: ndcg_at_10 value: 55.899 - type: ndcg_at_100 value: 60.958 - type: ndcg_at_1000 value: 62.302 - type: ndcg_at_3 value: 51.051 - type: ndcg_at_5 value: 53.351000000000006 - type: precision_at_1 value: 44.635000000000005 - type: precision_at_10 value: 10.786999999999999 - type: precision_at_100 value: 1.6580000000000001 - type: precision_at_1000 value: 0.213 - type: precision_at_3 value: 24.893 - type: precision_at_5 value: 17.740000000000002 - type: recall_at_1 value: 35.803000000000004 - type: recall_at_10 value: 68.657 - type: recall_at_100 value: 89.77199999999999 - type: recall_at_1000 value: 97.67 - type: recall_at_3 value: 54.066 - type: recall_at_5 value: 60.788 - type: map_at_1 value: 33.706 - type: map_at_10 value: 44.896 - type: map_at_100 value: 46.299 - type: map_at_1000 value: 46.44 - type: map_at_3 value: 41.721000000000004 - type: map_at_5 value: 43.486000000000004 - type: mrr_at_1 value: 41.592 - type: mrr_at_10 value: 50.529 - type: mrr_at_100 value: 51.22 - type: mrr_at_1000 value: 51.258 - type: mrr_at_3 value: 48.205999999999996 - type: mrr_at_5 value: 49.528 - type: ndcg_at_1 value: 41.592 - type: ndcg_at_10 value: 50.77199999999999 - type: ndcg_at_100 value: 55.383 - type: ndcg_at_1000 value: 57.288 - type: ndcg_at_3 value: 46.324 - type: ndcg_at_5 value: 48.346000000000004 - type: precision_at_1 value: 41.592 - type: precision_at_10 value: 9.516 - type: precision_at_100 value: 1.541 - type: precision_at_1000 value: 0.2 - type: precision_at_3 value: 22.399 - type: precision_at_5 value: 15.770999999999999 - type: recall_at_1 value: 33.706 - type: recall_at_10 value: 61.353 - type: recall_at_100 value: 80.182 - type: recall_at_1000 value: 91.896 - type: recall_at_3 value: 48.204 - type: recall_at_5 value: 53.89699999999999 - type: map_at_1 value: 44.424 - type: map_at_10 value: 57.169000000000004 - type: map_at_100 value: 58.202 - type: map_at_1000 value: 58.242000000000004 - type: map_at_3 value: 53.825 - type: map_at_5 value: 55.714 - type: mrr_at_1 value: 50.470000000000006 - type: mrr_at_10 value: 60.489000000000004 - type: mrr_at_100 value: 61.096 - type: mrr_at_1000 value: 61.112 - type: mrr_at_3 value: 58.192 - type: mrr_at_5 value: 59.611999999999995 - type: ndcg_at_1 value: 50.470000000000006 - type: ndcg_at_10 value: 63.071999999999996 - type: ndcg_at_100 value: 66.964 - type: ndcg_at_1000 value: 67.659 - type: ndcg_at_3 value: 57.74399999999999 - type: ndcg_at_5 value: 60.367000000000004 - type: precision_at_1 value: 50.470000000000006 - type: precision_at_10 value: 10.019 - type: precision_at_100 value: 1.29 - type: precision_at_1000 value: 0.13899999999999998 - type: precision_at_3 value: 25.558999999999997 - type: precision_at_5 value: 17.467 - type: recall_at_1 value: 44.424 - type: recall_at_10 value: 77.02 - type: recall_at_100 value: 93.738 - type: recall_at_1000 value: 98.451 - type: recall_at_3 value: 62.888 - type: recall_at_5 value: 69.138 - type: map_at_1 value: 26.294 - type: map_at_10 value: 34.503 - type: map_at_100 value: 35.641 - type: map_at_1000 value: 35.724000000000004 - type: map_at_3 value: 31.753999999999998 - type: map_at_5 value: 33.190999999999995 - type: mrr_at_1 value: 28.362 - type: mrr_at_10 value: 36.53 - type: mrr_at_100 value: 37.541000000000004 - type: mrr_at_1000 value: 37.602000000000004 - type: mrr_at_3 value: 33.917 - type: mrr_at_5 value: 35.358000000000004 - type: ndcg_at_1 value: 28.362 - type: ndcg_at_10 value: 39.513999999999996 - type: ndcg_at_100 value: 44.815 - type: ndcg_at_1000 value: 46.839 - type: ndcg_at_3 value: 34.02 - type: ndcg_at_5 value: 36.522 - type: precision_at_1 value: 28.362 - type: precision_at_10 value: 6.101999999999999 - type: precision_at_100 value: 0.9129999999999999 - type: precision_at_1000 value: 0.11399999999999999 - type: precision_at_3 value: 14.161999999999999 - type: precision_at_5 value: 9.966 - type: recall_at_1 value: 26.294 - type: recall_at_10 value: 53.098 - type: recall_at_100 value: 76.877 - type: recall_at_1000 value: 91.834 - type: recall_at_3 value: 38.266 - type: recall_at_5 value: 44.287 - type: map_at_1 value: 16.407 - type: map_at_10 value: 25.185999999999996 - type: map_at_100 value: 26.533 - type: map_at_1000 value: 26.657999999999998 - type: map_at_3 value: 22.201999999999998 - type: map_at_5 value: 23.923 - type: mrr_at_1 value: 20.522000000000002 - type: mrr_at_10 value: 29.522 - type: mrr_at_100 value: 30.644 - type: mrr_at_1000 value: 30.713 - type: mrr_at_3 value: 26.679000000000002 - type: mrr_at_5 value: 28.483000000000004 - type: ndcg_at_1 value: 20.522000000000002 - type: ndcg_at_10 value: 30.656 - type: ndcg_at_100 value: 36.864999999999995 - type: ndcg_at_1000 value: 39.675 - type: ndcg_at_3 value: 25.319000000000003 - type: ndcg_at_5 value: 27.992 - type: precision_at_1 value: 20.522000000000002 - type: precision_at_10 value: 5.795999999999999 - type: precision_at_100 value: 1.027 - type: precision_at_1000 value: 0.13999999999999999 - type: precision_at_3 value: 12.396 - type: precision_at_5 value: 9.328 - type: recall_at_1 value: 16.407 - type: recall_at_10 value: 43.164 - type: recall_at_100 value: 69.695 - type: recall_at_1000 value: 89.41900000000001 - type: recall_at_3 value: 28.634999999999998 - type: recall_at_5 value: 35.308 - type: map_at_1 value: 30.473 - type: map_at_10 value: 41.676 - type: map_at_100 value: 43.120999999999995 - type: map_at_1000 value: 43.230000000000004 - type: map_at_3 value: 38.306000000000004 - type: map_at_5 value: 40.355999999999995 - type: mrr_at_1 value: 37.536 - type: mrr_at_10 value: 47.643 - type: mrr_at_100 value: 48.508 - type: mrr_at_1000 value: 48.551 - type: mrr_at_3 value: 45.348 - type: mrr_at_5 value: 46.744 - type: ndcg_at_1 value: 37.536 - type: ndcg_at_10 value: 47.823 - type: ndcg_at_100 value: 53.395 - type: ndcg_at_1000 value: 55.271 - type: ndcg_at_3 value: 42.768 - type: ndcg_at_5 value: 45.373000000000005 - type: precision_at_1 value: 37.536 - type: precision_at_10 value: 8.681 - type: precision_at_100 value: 1.34 - type: precision_at_1000 value: 0.165 - type: precision_at_3 value: 20.468 - type: precision_at_5 value: 14.495 - type: recall_at_1 value: 30.473 - type: recall_at_10 value: 60.092999999999996 - type: recall_at_100 value: 82.733 - type: recall_at_1000 value: 94.875 - type: recall_at_3 value: 45.734 - type: recall_at_5 value: 52.691 - type: map_at_1 value: 29.976000000000003 - type: map_at_10 value: 41.097 - type: map_at_100 value: 42.547000000000004 - type: map_at_1000 value: 42.659000000000006 - type: map_at_3 value: 37.251 - type: map_at_5 value: 39.493 - type: mrr_at_1 value: 37.557 - type: mrr_at_10 value: 46.605000000000004 - type: mrr_at_100 value: 47.487 - type: mrr_at_1000 value: 47.54 - type: mrr_at_3 value: 43.721 - type: mrr_at_5 value: 45.411 - type: ndcg_at_1 value: 37.557 - type: ndcg_at_10 value: 47.449000000000005 - type: ndcg_at_100 value: 53.052 - type: ndcg_at_1000 value: 55.010999999999996 - type: ndcg_at_3 value: 41.439 - type: ndcg_at_5 value: 44.292 - type: precision_at_1 value: 37.557 - type: precision_at_10 value: 8.847 - type: precision_at_100 value: 1.357 - type: precision_at_1000 value: 0.16999999999999998 - type: precision_at_3 value: 20.091 - type: precision_at_5 value: 14.384 - type: recall_at_1 value: 29.976000000000003 - type: recall_at_10 value: 60.99099999999999 - type: recall_at_100 value: 84.245 - type: recall_at_1000 value: 96.97200000000001 - type: recall_at_3 value: 43.794 - type: recall_at_5 value: 51.778999999999996 - type: map_at_1 value: 28.099166666666665 - type: map_at_10 value: 38.1365 - type: map_at_100 value: 39.44491666666667 - type: map_at_1000 value: 39.55858333333334 - type: map_at_3 value: 35.03641666666666 - type: map_at_5 value: 36.79833333333334 - type: mrr_at_1 value: 33.39966666666667 - type: mrr_at_10 value: 42.42583333333333 - type: mrr_at_100 value: 43.28575 - type: mrr_at_1000 value: 43.33741666666667 - type: mrr_at_3 value: 39.94975 - type: mrr_at_5 value: 41.41633333333334 - type: ndcg_at_1 value: 33.39966666666667 - type: ndcg_at_10 value: 43.81741666666667 - type: ndcg_at_100 value: 49.08166666666667 - type: ndcg_at_1000 value: 51.121166666666674 - type: ndcg_at_3 value: 38.73575 - type: ndcg_at_5 value: 41.18158333333333 - type: precision_at_1 value: 33.39966666666667 - type: precision_at_10 value: 7.738916666666667 - type: precision_at_100 value: 1.2265833333333331 - type: precision_at_1000 value: 0.15983333333333336 - type: precision_at_3 value: 17.967416666666665 - type: precision_at_5 value: 12.78675 - type: recall_at_1 value: 28.099166666666665 - type: recall_at_10 value: 56.27049999999999 - type: recall_at_100 value: 78.93291666666667 - type: recall_at_1000 value: 92.81608333333334 - type: recall_at_3 value: 42.09775 - type: recall_at_5 value: 48.42533333333334 - type: map_at_1 value: 23.663 - type: map_at_10 value: 30.377 - type: map_at_100 value: 31.426 - type: map_at_1000 value: 31.519000000000002 - type: map_at_3 value: 28.069 - type: map_at_5 value: 29.256999999999998 - type: mrr_at_1 value: 26.687 - type: mrr_at_10 value: 33.107 - type: mrr_at_100 value: 34.055 - type: mrr_at_1000 value: 34.117999999999995 - type: mrr_at_3 value: 31.058000000000003 - type: mrr_at_5 value: 32.14 - type: ndcg_at_1 value: 26.687 - type: ndcg_at_10 value: 34.615 - type: ndcg_at_100 value: 39.776 - type: ndcg_at_1000 value: 42.05 - type: ndcg_at_3 value: 30.322 - type: ndcg_at_5 value: 32.157000000000004 - type: precision_at_1 value: 26.687 - type: precision_at_10 value: 5.491 - type: precision_at_100 value: 0.877 - type: precision_at_1000 value: 0.11499999999999999 - type: precision_at_3 value: 13.139000000000001 - type: precision_at_5 value: 9.049 - type: recall_at_1 value: 23.663 - type: recall_at_10 value: 45.035 - type: recall_at_100 value: 68.554 - type: recall_at_1000 value: 85.077 - type: recall_at_3 value: 32.982 - type: recall_at_5 value: 37.688 - type: map_at_1 value: 17.403 - type: map_at_10 value: 25.197000000000003 - type: map_at_100 value: 26.355 - type: map_at_1000 value: 26.487 - type: map_at_3 value: 22.733 - type: map_at_5 value: 24.114 - type: mrr_at_1 value: 21.37 - type: mrr_at_10 value: 29.091 - type: mrr_at_100 value: 30.018 - type: mrr_at_1000 value: 30.096 - type: mrr_at_3 value: 26.887 - type: mrr_at_5 value: 28.157 - type: ndcg_at_1 value: 21.37 - type: ndcg_at_10 value: 30.026000000000003 - type: ndcg_at_100 value: 35.416 - type: ndcg_at_1000 value: 38.45 - type: ndcg_at_3 value: 25.764 - type: ndcg_at_5 value: 27.742 - type: precision_at_1 value: 21.37 - type: precision_at_10 value: 5.609 - type: precision_at_100 value: 0.9860000000000001 - type: precision_at_1000 value: 0.14300000000000002 - type: precision_at_3 value: 12.423 - type: precision_at_5 value: 9.009 - type: recall_at_1 value: 17.403 - type: recall_at_10 value: 40.573 - type: recall_at_100 value: 64.818 - type: recall_at_1000 value: 86.53699999999999 - type: recall_at_3 value: 28.493000000000002 - type: recall_at_5 value: 33.660000000000004 - type: map_at_1 value: 28.639 - type: map_at_10 value: 38.951 - type: map_at_100 value: 40.238 - type: map_at_1000 value: 40.327 - type: map_at_3 value: 35.842 - type: map_at_5 value: 37.617 - type: mrr_at_1 value: 33.769 - type: mrr_at_10 value: 43.088 - type: mrr_at_100 value: 44.03 - type: mrr_at_1000 value: 44.072 - type: mrr_at_3 value: 40.656 - type: mrr_at_5 value: 42.138999999999996 - type: ndcg_at_1 value: 33.769 - type: ndcg_at_10 value: 44.676 - type: ndcg_at_100 value: 50.416000000000004 - type: ndcg_at_1000 value: 52.227999999999994 - type: ndcg_at_3 value: 39.494 - type: ndcg_at_5 value: 42.013 - type: precision_at_1 value: 33.769 - type: precision_at_10 value: 7.668 - type: precision_at_100 value: 1.18 - type: precision_at_1000 value: 0.145 - type: precision_at_3 value: 18.221 - type: precision_at_5 value: 12.966 - type: recall_at_1 value: 28.639 - type: recall_at_10 value: 57.687999999999995 - type: recall_at_100 value: 82.541 - type: recall_at_1000 value: 94.896 - type: recall_at_3 value: 43.651 - type: recall_at_5 value: 49.925999999999995 - type: map_at_1 value: 29.57 - type: map_at_10 value: 40.004 - type: map_at_100 value: 41.75 - type: map_at_1000 value: 41.97 - type: map_at_3 value: 36.788 - type: map_at_5 value: 38.671 - type: mrr_at_1 value: 35.375 - type: mrr_at_10 value: 45.121 - type: mrr_at_100 value: 45.994 - type: mrr_at_1000 value: 46.04 - type: mrr_at_3 value: 42.227 - type: mrr_at_5 value: 43.995 - type: ndcg_at_1 value: 35.375 - type: ndcg_at_10 value: 46.392 - type: ndcg_at_100 value: 52.196 - type: ndcg_at_1000 value: 54.274 - type: ndcg_at_3 value: 41.163 - type: ndcg_at_5 value: 43.813 - type: precision_at_1 value: 35.375 - type: precision_at_10 value: 8.676 - type: precision_at_100 value: 1.678 - type: precision_at_1000 value: 0.253 - type: precision_at_3 value: 19.104 - type: precision_at_5 value: 13.913 - type: recall_at_1 value: 29.57 - type: recall_at_10 value: 58.779 - type: recall_at_100 value: 83.337 - type: recall_at_1000 value: 95.979 - type: recall_at_3 value: 44.005 - type: recall_at_5 value: 50.975 - type: map_at_1 value: 20.832 - type: map_at_10 value: 29.733999999999998 - type: map_at_100 value: 30.727 - type: map_at_1000 value: 30.843999999999998 - type: map_at_3 value: 26.834999999999997 - type: map_at_5 value: 28.555999999999997 - type: mrr_at_1 value: 22.921 - type: mrr_at_10 value: 31.791999999999998 - type: mrr_at_100 value: 32.666000000000004 - type: mrr_at_1000 value: 32.751999999999995 - type: mrr_at_3 value: 29.144 - type: mrr_at_5 value: 30.622 - type: ndcg_at_1 value: 22.921 - type: ndcg_at_10 value: 34.915 - type: ndcg_at_100 value: 39.744 - type: ndcg_at_1000 value: 42.407000000000004 - type: ndcg_at_3 value: 29.421000000000003 - type: ndcg_at_5 value: 32.211 - type: precision_at_1 value: 22.921 - type: precision_at_10 value: 5.675 - type: precision_at_100 value: 0.872 - type: precision_at_1000 value: 0.121 - type: precision_at_3 value: 12.753999999999998 - type: precision_at_5 value: 9.353 - type: recall_at_1 value: 20.832 - type: recall_at_10 value: 48.795 - type: recall_at_100 value: 70.703 - type: recall_at_1000 value: 90.187 - type: recall_at_3 value: 34.455000000000005 - type: recall_at_5 value: 40.967 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: climate-fever config: default split: test revision: None metrics: - type: map_at_1 value: 10.334 - type: map_at_10 value: 19.009999999999998 - type: map_at_100 value: 21.129 - type: map_at_1000 value: 21.328 - type: map_at_3 value: 15.152 - type: map_at_5 value: 17.084 - type: mrr_at_1 value: 23.453 - type: mrr_at_10 value: 36.099 - type: mrr_at_100 value: 37.069 - type: mrr_at_1000 value: 37.104 - type: mrr_at_3 value: 32.096000000000004 - type: mrr_at_5 value: 34.451 - type: ndcg_at_1 value: 23.453 - type: ndcg_at_10 value: 27.739000000000004 - type: ndcg_at_100 value: 35.836 - type: ndcg_at_1000 value: 39.242 - type: ndcg_at_3 value: 21.263 - type: ndcg_at_5 value: 23.677 - type: precision_at_1 value: 23.453 - type: precision_at_10 value: 9.199 - type: precision_at_100 value: 1.791 - type: precision_at_1000 value: 0.242 - type: precision_at_3 value: 16.2 - type: precision_at_5 value: 13.147 - type: recall_at_1 value: 10.334 - type: recall_at_10 value: 35.177 - type: recall_at_100 value: 63.009 - type: recall_at_1000 value: 81.938 - type: recall_at_3 value: 19.914 - type: recall_at_5 value: 26.077 - task: type: Retrieval dataset: name: MTEB DBPedia type: dbpedia-entity config: default split: test revision: None metrics: - type: map_at_1 value: 8.212 - type: map_at_10 value: 17.386 - type: map_at_100 value: 24.234 - type: map_at_1000 value: 25.724999999999998 - type: map_at_3 value: 12.727 - type: map_at_5 value: 14.785 - type: mrr_at_1 value: 59.25 - type: mrr_at_10 value: 68.687 - type: mrr_at_100 value: 69.133 - type: mrr_at_1000 value: 69.14099999999999 - type: mrr_at_3 value: 66.917 - type: mrr_at_5 value: 67.742 - type: ndcg_at_1 value: 48.625 - type: ndcg_at_10 value: 36.675999999999995 - type: ndcg_at_100 value: 41.543 - type: ndcg_at_1000 value: 49.241 - type: ndcg_at_3 value: 41.373 - type: ndcg_at_5 value: 38.707 - type: precision_at_1 value: 59.25 - type: precision_at_10 value: 28.525 - type: precision_at_100 value: 9.027000000000001 - type: precision_at_1000 value: 1.8339999999999999 - type: precision_at_3 value: 44.833 - type: precision_at_5 value: 37.35 - type: recall_at_1 value: 8.212 - type: recall_at_10 value: 23.188 - type: recall_at_100 value: 48.613 - type: recall_at_1000 value: 73.093 - type: recall_at_3 value: 14.419 - type: recall_at_5 value: 17.798 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 52.725 - type: f1 value: 46.50743309855908 - task: type: Retrieval dataset: name: MTEB FEVER type: fever config: default split: test revision: None metrics: - type: map_at_1 value: 55.086 - type: map_at_10 value: 66.914 - type: map_at_100 value: 67.321 - type: map_at_1000 value: 67.341 - type: map_at_3 value: 64.75800000000001 - type: map_at_5 value: 66.189 - type: mrr_at_1 value: 59.28600000000001 - type: mrr_at_10 value: 71.005 - type: mrr_at_100 value: 71.304 - type: mrr_at_1000 value: 71.313 - type: mrr_at_3 value: 69.037 - type: mrr_at_5 value: 70.35 - type: ndcg_at_1 value: 59.28600000000001 - type: ndcg_at_10 value: 72.695 - type: ndcg_at_100 value: 74.432 - type: ndcg_at_1000 value: 74.868 - type: ndcg_at_3 value: 68.72200000000001 - type: ndcg_at_5 value: 71.081 - type: precision_at_1 value: 59.28600000000001 - type: precision_at_10 value: 9.499 - type: precision_at_100 value: 1.052 - type: precision_at_1000 value: 0.11100000000000002 - type: precision_at_3 value: 27.503 - type: precision_at_5 value: 17.854999999999997 - type: recall_at_1 value: 55.086 - type: recall_at_10 value: 86.453 - type: recall_at_100 value: 94.028 - type: recall_at_1000 value: 97.052 - type: recall_at_3 value: 75.821 - type: recall_at_5 value: 81.6 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: fiqa config: default split: test revision: None metrics: - type: map_at_1 value: 22.262999999999998 - type: map_at_10 value: 37.488 - type: map_at_100 value: 39.498 - type: map_at_1000 value: 39.687 - type: map_at_3 value: 32.529 - type: map_at_5 value: 35.455 - type: mrr_at_1 value: 44.907000000000004 - type: mrr_at_10 value: 53.239000000000004 - type: mrr_at_100 value: 54.086 - type: mrr_at_1000 value: 54.122 - type: mrr_at_3 value: 51.235 - type: mrr_at_5 value: 52.415 - type: ndcg_at_1 value: 44.907000000000004 - type: ndcg_at_10 value: 45.446 - type: ndcg_at_100 value: 52.429 - type: ndcg_at_1000 value: 55.169000000000004 - type: ndcg_at_3 value: 41.882000000000005 - type: ndcg_at_5 value: 43.178 - type: precision_at_1 value: 44.907000000000004 - type: precision_at_10 value: 12.931999999999999 - type: precision_at_100 value: 2.025 - type: precision_at_1000 value: 0.248 - type: precision_at_3 value: 28.652 - type: precision_at_5 value: 21.204 - type: recall_at_1 value: 22.262999999999998 - type: recall_at_10 value: 52.447 - type: recall_at_100 value: 78.045 - type: recall_at_1000 value: 94.419 - type: recall_at_3 value: 38.064 - type: recall_at_5 value: 44.769 - task: type: Retrieval dataset: name: MTEB HotpotQA type: hotpotqa config: default split: test revision: None metrics: - type: map_at_1 value: 32.519 - type: map_at_10 value: 45.831 - type: map_at_100 value: 46.815 - type: map_at_1000 value: 46.899 - type: map_at_3 value: 42.836 - type: map_at_5 value: 44.65 - type: mrr_at_1 value: 65.037 - type: mrr_at_10 value: 72.16 - type: mrr_at_100 value: 72.51100000000001 - type: mrr_at_1000 value: 72.53 - type: mrr_at_3 value: 70.682 - type: mrr_at_5 value: 71.54599999999999 - type: ndcg_at_1 value: 65.037 - type: ndcg_at_10 value: 55.17999999999999 - type: ndcg_at_100 value: 58.888 - type: ndcg_at_1000 value: 60.648 - type: ndcg_at_3 value: 50.501 - type: ndcg_at_5 value: 52.977 - type: precision_at_1 value: 65.037 - type: precision_at_10 value: 11.530999999999999 - type: precision_at_100 value: 1.4460000000000002 - type: precision_at_1000 value: 0.168 - type: precision_at_3 value: 31.483 - type: precision_at_5 value: 20.845 - type: recall_at_1 value: 32.519 - type: recall_at_10 value: 57.657000000000004 - type: recall_at_100 value: 72.30199999999999 - type: recall_at_1000 value: 84.024 - type: recall_at_3 value: 47.225 - type: recall_at_5 value: 52.113 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 88.3168 - type: ap value: 83.80165516037135 - type: f1 value: 88.29942471066407 - task: type: Retrieval dataset: name: MTEB MSMARCO type: msmarco config: default split: dev revision: None metrics: - type: map_at_1 value: 20.724999999999998 - type: map_at_10 value: 32.736 - type: map_at_100 value: 33.938 - type: map_at_1000 value: 33.991 - type: map_at_3 value: 28.788000000000004 - type: map_at_5 value: 31.016 - type: mrr_at_1 value: 21.361 - type: mrr_at_10 value: 33.323 - type: mrr_at_100 value: 34.471000000000004 - type: mrr_at_1000 value: 34.518 - type: mrr_at_3 value: 29.453000000000003 - type: mrr_at_5 value: 31.629 - type: ndcg_at_1 value: 21.361 - type: ndcg_at_10 value: 39.649 - type: ndcg_at_100 value: 45.481 - type: ndcg_at_1000 value: 46.775 - type: ndcg_at_3 value: 31.594 - type: ndcg_at_5 value: 35.543 - type: precision_at_1 value: 21.361 - type: precision_at_10 value: 6.3740000000000006 - type: precision_at_100 value: 0.931 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 13.514999999999999 - type: precision_at_5 value: 10.100000000000001 - type: recall_at_1 value: 20.724999999999998 - type: recall_at_10 value: 61.034 - type: recall_at_100 value: 88.062 - type: recall_at_1000 value: 97.86399999999999 - type: recall_at_3 value: 39.072 - type: recall_at_5 value: 48.53 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 93.8919288645691 - type: f1 value: 93.57059586398059 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 67.97993616051072 - type: f1 value: 48.244319183606535 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 68.90047074646941 - type: f1 value: 66.48999056063725 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.34566240753195 - type: f1 value: 73.54164154290658 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 34.21866934757011 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 32.000936217235534 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 31.68189362520352 - type: mrr value: 32.69603637784303 - task: type: Retrieval dataset: name: MTEB NFCorpus type: nfcorpus config: default split: test revision: None metrics: - type: map_at_1 value: 6.078 - type: map_at_10 value: 12.671 - type: map_at_100 value: 16.291 - type: map_at_1000 value: 17.855999999999998 - type: map_at_3 value: 9.610000000000001 - type: map_at_5 value: 11.152 - type: mrr_at_1 value: 43.963 - type: mrr_at_10 value: 53.173 - type: mrr_at_100 value: 53.718999999999994 - type: mrr_at_1000 value: 53.756 - type: mrr_at_3 value: 50.980000000000004 - type: mrr_at_5 value: 52.42 - type: ndcg_at_1 value: 42.415000000000006 - type: ndcg_at_10 value: 34.086 - type: ndcg_at_100 value: 32.545 - type: ndcg_at_1000 value: 41.144999999999996 - type: ndcg_at_3 value: 39.434999999999995 - type: ndcg_at_5 value: 37.888 - type: precision_at_1 value: 43.653 - type: precision_at_10 value: 25.014999999999997 - type: precision_at_100 value: 8.594 - type: precision_at_1000 value: 2.169 - type: precision_at_3 value: 37.049 - type: precision_at_5 value: 33.065 - type: recall_at_1 value: 6.078 - type: recall_at_10 value: 16.17 - type: recall_at_100 value: 34.512 - type: recall_at_1000 value: 65.447 - type: recall_at_3 value: 10.706 - type: recall_at_5 value: 13.158 - task: type: Retrieval dataset: name: MTEB NQ type: nq config: default split: test revision: None metrics: - type: map_at_1 value: 27.378000000000004 - type: map_at_10 value: 42.178 - type: map_at_100 value: 43.32 - type: map_at_1000 value: 43.358000000000004 - type: map_at_3 value: 37.474000000000004 - type: map_at_5 value: 40.333000000000006 - type: mrr_at_1 value: 30.823 - type: mrr_at_10 value: 44.626 - type: mrr_at_100 value: 45.494 - type: mrr_at_1000 value: 45.519 - type: mrr_at_3 value: 40.585 - type: mrr_at_5 value: 43.146 - type: ndcg_at_1 value: 30.794 - type: ndcg_at_10 value: 50.099000000000004 - type: ndcg_at_100 value: 54.900999999999996 - type: ndcg_at_1000 value: 55.69499999999999 - type: ndcg_at_3 value: 41.238 - type: ndcg_at_5 value: 46.081 - type: precision_at_1 value: 30.794 - type: precision_at_10 value: 8.549 - type: precision_at_100 value: 1.124 - type: precision_at_1000 value: 0.12 - type: precision_at_3 value: 18.926000000000002 - type: precision_at_5 value: 14.16 - type: recall_at_1 value: 27.378000000000004 - type: recall_at_10 value: 71.842 - type: recall_at_100 value: 92.565 - type: recall_at_1000 value: 98.402 - type: recall_at_3 value: 49.053999999999995 - type: recall_at_5 value: 60.207 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: quora config: default split: test revision: None metrics: - type: map_at_1 value: 70.557 - type: map_at_10 value: 84.729 - type: map_at_100 value: 85.369 - type: map_at_1000 value: 85.382 - type: map_at_3 value: 81.72 - type: map_at_5 value: 83.613 - type: mrr_at_1 value: 81.3 - type: mrr_at_10 value: 87.488 - type: mrr_at_100 value: 87.588 - type: mrr_at_1000 value: 87.589 - type: mrr_at_3 value: 86.53 - type: mrr_at_5 value: 87.18599999999999 - type: ndcg_at_1 value: 81.28999999999999 - type: ndcg_at_10 value: 88.442 - type: ndcg_at_100 value: 89.637 - type: ndcg_at_1000 value: 89.70700000000001 - type: ndcg_at_3 value: 85.55199999999999 - type: ndcg_at_5 value: 87.154 - type: precision_at_1 value: 81.28999999999999 - type: precision_at_10 value: 13.489999999999998 - type: precision_at_100 value: 1.54 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 37.553 - type: precision_at_5 value: 24.708 - type: recall_at_1 value: 70.557 - type: recall_at_10 value: 95.645 - type: recall_at_100 value: 99.693 - type: recall_at_1000 value: 99.995 - type: recall_at_3 value: 87.359 - type: recall_at_5 value: 91.89699999999999 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 63.65060114776209 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 64.63271250680617 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: scidocs config: default split: test revision: None metrics: - type: map_at_1 value: 4.263 - type: map_at_10 value: 10.801 - type: map_at_100 value: 12.888 - type: map_at_1000 value: 13.224 - type: map_at_3 value: 7.362 - type: map_at_5 value: 9.149000000000001 - type: mrr_at_1 value: 21 - type: mrr_at_10 value: 31.416 - type: mrr_at_100 value: 32.513 - type: mrr_at_1000 value: 32.58 - type: mrr_at_3 value: 28.116999999999997 - type: mrr_at_5 value: 29.976999999999997 - type: ndcg_at_1 value: 21 - type: ndcg_at_10 value: 18.551000000000002 - type: ndcg_at_100 value: 26.657999999999998 - type: ndcg_at_1000 value: 32.485 - type: ndcg_at_3 value: 16.834 - type: ndcg_at_5 value: 15.204999999999998 - type: precision_at_1 value: 21 - type: precision_at_10 value: 9.84 - type: precision_at_100 value: 2.16 - type: precision_at_1000 value: 0.35500000000000004 - type: precision_at_3 value: 15.667 - type: precision_at_5 value: 13.62 - type: recall_at_1 value: 4.263 - type: recall_at_10 value: 19.922 - type: recall_at_100 value: 43.808 - type: recall_at_1000 value: 72.14500000000001 - type: recall_at_3 value: 9.493 - type: recall_at_5 value: 13.767999999999999 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_spearman value: 81.27446313317233 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_spearman value: 76.27963301217527 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_spearman value: 88.18495048450949 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_spearman value: 81.91982338692046 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_spearman value: 89.00896818385291 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_spearman value: 85.48814644586132 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_spearman value: 90.30116926966582 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_spearman value: 67.74132963032342 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_spearman value: 86.87741355780479 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 82.0019012295875 - type: mrr value: 94.70267024188593 - task: type: Retrieval dataset: name: MTEB SciFact type: scifact config: default split: test revision: None metrics: - type: map_at_1 value: 50.05 - type: map_at_10 value: 59.36 - type: map_at_100 value: 59.967999999999996 - type: map_at_1000 value: 60.023 - type: map_at_3 value: 56.515 - type: map_at_5 value: 58.272999999999996 - type: mrr_at_1 value: 53 - type: mrr_at_10 value: 61.102000000000004 - type: mrr_at_100 value: 61.476 - type: mrr_at_1000 value: 61.523 - type: mrr_at_3 value: 58.778 - type: mrr_at_5 value: 60.128 - type: ndcg_at_1 value: 53 - type: ndcg_at_10 value: 64.43100000000001 - type: ndcg_at_100 value: 66.73599999999999 - type: ndcg_at_1000 value: 68.027 - type: ndcg_at_3 value: 59.279 - type: ndcg_at_5 value: 61.888 - type: precision_at_1 value: 53 - type: precision_at_10 value: 8.767 - type: precision_at_100 value: 1.01 - type: precision_at_1000 value: 0.11100000000000002 - type: precision_at_3 value: 23.444000000000003 - type: precision_at_5 value: 15.667 - type: recall_at_1 value: 50.05 - type: recall_at_10 value: 78.511 - type: recall_at_100 value: 88.5 - type: recall_at_1000 value: 98.333 - type: recall_at_3 value: 64.117 - type: recall_at_5 value: 70.867 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.72178217821782 - type: cos_sim_ap value: 93.0728601593541 - type: cos_sim_f1 value: 85.6727976766699 - type: cos_sim_precision value: 83.02063789868667 - type: cos_sim_recall value: 88.5 - type: dot_accuracy value: 99.72178217821782 - type: dot_ap value: 93.07287396168348 - type: dot_f1 value: 85.6727976766699 - type: dot_precision value: 83.02063789868667 - type: dot_recall value: 88.5 - type: euclidean_accuracy value: 99.72178217821782 - type: euclidean_ap value: 93.07285657982895 - type: euclidean_f1 value: 85.6727976766699 - type: euclidean_precision value: 83.02063789868667 - type: euclidean_recall value: 88.5 - type: manhattan_accuracy value: 99.72475247524753 - type: manhattan_ap value: 93.02792973059809 - type: manhattan_f1 value: 85.7727737973388 - type: manhattan_precision value: 87.84067085953879 - type: manhattan_recall value: 83.8 - type: max_accuracy value: 99.72475247524753 - type: max_ap value: 93.07287396168348 - type: max_f1 value: 85.7727737973388 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 68.77583615550819 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 36.151636938606956 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 52.16607939471187 - type: mrr value: 52.95172046091163 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 31.314646669495666 - type: cos_sim_spearman value: 31.83562491439455 - type: dot_pearson value: 31.314590842874157 - type: dot_spearman value: 31.83363065810437 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: trec-covid config: default split: test revision: None metrics: - type: map_at_1 value: 0.198 - type: map_at_10 value: 1.3010000000000002 - type: map_at_100 value: 7.2139999999999995 - type: map_at_1000 value: 20.179 - type: map_at_3 value: 0.528 - type: map_at_5 value: 0.8019999999999999 - type: mrr_at_1 value: 72 - type: mrr_at_10 value: 83.39999999999999 - type: mrr_at_100 value: 83.39999999999999 - type: mrr_at_1000 value: 83.39999999999999 - type: mrr_at_3 value: 81.667 - type: mrr_at_5 value: 83.06700000000001 - type: ndcg_at_1 value: 66 - type: ndcg_at_10 value: 58.059000000000005 - type: ndcg_at_100 value: 44.316 - type: ndcg_at_1000 value: 43.147000000000006 - type: ndcg_at_3 value: 63.815999999999995 - type: ndcg_at_5 value: 63.005 - type: precision_at_1 value: 72 - type: precision_at_10 value: 61.4 - type: precision_at_100 value: 45.62 - type: precision_at_1000 value: 19.866 - type: precision_at_3 value: 70 - type: precision_at_5 value: 68.8 - type: recall_at_1 value: 0.198 - type: recall_at_10 value: 1.517 - type: recall_at_100 value: 10.587 - type: recall_at_1000 value: 41.233 - type: recall_at_3 value: 0.573 - type: recall_at_5 value: 0.907 - task: type: Retrieval dataset: name: MTEB Touche2020 type: webis-touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 1.894 - type: map_at_10 value: 8.488999999999999 - type: map_at_100 value: 14.445 - type: map_at_1000 value: 16.078 - type: map_at_3 value: 4.589 - type: map_at_5 value: 6.019 - type: mrr_at_1 value: 22.448999999999998 - type: mrr_at_10 value: 39.82 - type: mrr_at_100 value: 40.752 - type: mrr_at_1000 value: 40.771 - type: mrr_at_3 value: 34.354 - type: mrr_at_5 value: 37.721 - type: ndcg_at_1 value: 19.387999999999998 - type: ndcg_at_10 value: 21.563 - type: ndcg_at_100 value: 33.857 - type: ndcg_at_1000 value: 46.199 - type: ndcg_at_3 value: 22.296 - type: ndcg_at_5 value: 21.770999999999997 - type: precision_at_1 value: 22.448999999999998 - type: precision_at_10 value: 19.796 - type: precision_at_100 value: 7.142999999999999 - type: precision_at_1000 value: 1.541 - type: precision_at_3 value: 24.490000000000002 - type: precision_at_5 value: 22.448999999999998 - type: recall_at_1 value: 1.894 - type: recall_at_10 value: 14.931 - type: recall_at_100 value: 45.524 - type: recall_at_1000 value: 83.243 - type: recall_at_3 value: 5.712 - type: recall_at_5 value: 8.386000000000001 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 71.049 - type: ap value: 13.85116971310922 - type: f1 value: 54.37504302487686 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 64.1312959818902 - type: f1 value: 64.11413877009383 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 54.13103431861502 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 87.327889372355 - type: cos_sim_ap value: 77.42059895975699 - type: cos_sim_f1 value: 71.02706903250873 - type: cos_sim_precision value: 69.75324344950394 - type: cos_sim_recall value: 72.34828496042216 - type: dot_accuracy value: 87.327889372355 - type: dot_ap value: 77.4209479346677 - type: dot_f1 value: 71.02706903250873 - type: dot_precision value: 69.75324344950394 - type: dot_recall value: 72.34828496042216 - type: euclidean_accuracy value: 87.327889372355 - type: euclidean_ap value: 77.42096495861037 - type: euclidean_f1 value: 71.02706903250873 - type: euclidean_precision value: 69.75324344950394 - type: euclidean_recall value: 72.34828496042216 - type: manhattan_accuracy value: 87.31000774870358 - type: manhattan_ap value: 77.38930750711619 - type: manhattan_f1 value: 71.07935314027831 - type: manhattan_precision value: 67.70957726295677 - type: manhattan_recall value: 74.80211081794195 - type: max_accuracy value: 87.327889372355 - type: max_ap value: 77.42096495861037 - type: max_f1 value: 71.07935314027831 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 89.58939729110878 - type: cos_sim_ap value: 87.17594155025475 - type: cos_sim_f1 value: 79.21146953405018 - type: cos_sim_precision value: 76.8918527109307 - type: cos_sim_recall value: 81.67539267015707 - type: dot_accuracy value: 89.58939729110878 - type: dot_ap value: 87.17593963273593 - type: dot_f1 value: 79.21146953405018 - type: dot_precision value: 76.8918527109307 - type: dot_recall value: 81.67539267015707 - type: euclidean_accuracy value: 89.58939729110878 - type: euclidean_ap value: 87.17592466925834 - type: euclidean_f1 value: 79.21146953405018 - type: euclidean_precision value: 76.8918527109307 - type: euclidean_recall value: 81.67539267015707 - type: manhattan_accuracy value: 89.62626615438352 - type: manhattan_ap value: 87.16589873161546 - type: manhattan_f1 value: 79.25143598295348 - type: manhattan_precision value: 76.39494177323712 - type: manhattan_recall value: 82.32984293193716 - type: max_accuracy value: 89.62626615438352 - type: max_ap value: 87.17594155025475 - type: max_f1 value: 79.25143598295348 ---
[ "BIOSSES", "SCIFACT" ]
BXBX/Moistral-11B-v3-8.0bpw-h8-exl2
BXBX
text-generation
[ "transformers", "safetensors", "llama", "text-generation", "not-for-all-audiences", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "exl2", "region:us" ]
2024-04-25T14:35:00Z
2024-04-25T16:22:23+00:00
14
4
--- license: other license_name: freeuse license_link: LICENSE tags: - not-for-all-audiences --- Introducing the [BeaverAI](https://huggingface.co/BeaverAI) team: Drummer, ToastyPigeon, xzuyn, MarsupialAI, Twistedshadows, and concedo ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/HjVYV2h_YTL9P-insb7fz.png) We proudly present... # Moistral 11B v3 💦💦💦 *The smartiest, moistiest AI yet!* ![image/webp](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/TsjKZ17nD10xzJEzXY6Hm.webp) *An eRP model that will blast you with a rich and refreshing vocabulary of moist. Finetuned by yours truly.* ## GGUF https://huggingface.co/TheDrummer/Moistral-11B-v3-GGUF/tree/main IMATRIX: https://huggingface.co/MarsupialAI/Moistral-11B-v3_iMatrix_GGUF ## What's New with v3? - Smarter and moistier! Finetuning just got finer! - Trained with an even LARGER dataset of 8K moist. - More diverse and balanced genres: - Introduced new genres: 👩‍❤️‍👩, 👨‍❤️‍👨, 🙇‍♀️💪, 🤫 - Added more Fantasy, Science Fiction, "Diversity", and "Family" ## Usage - Use Alpaca Instruct - At the start of the story, keep generating until you get the desired length. AI responses will eventually match the length. - You can use instruct... - as a character: "Go here" - as a narrator: "Go here", Drummer says - as a director: Drummer tells him to go here - as a lazy ass: go here ## Samples of Moistral v3 NSFW... duh? <details> <summary>Moistral Sample (2-shot) Generation: 512</summary> <blockquote>The Prince welcomes his new toy</blockquote> As Princess Lilia stepped into the throne room, the atmosphere changed, and everyone present turned their attention to the beautiful young woman who embodied the very heart of Iliac. There, on his high seat, the sorcerer prince Dik acknowledged her arrival with a cold smirk. It seemed that he admired the sight of the tall blonde maiden with golden eyes, whose athletic body was perfectly shaped under her elegant clothing. Dik spoke to the princess in a deep voice that resonated through the hall, "Welcome, Princess Lilia Ruhr. I am the Sorcerer-Prince Dik. The kingdom of Arkan is now your home until such time as your father proves himself worthy once more." Princess Lilia's pride and ambition sparkled in her golden eyes. She didn't flinch or bow before this man, although the gesture might have been an indication of submission. Instead, she looked straight ahead, meeting the gaze of the sorcerer prince without fear. Her voice rang out, "I represent Iliac here, both its honor and its sorrows. My loyalty remains with my kingdom no matter where I find myself." The sorcerer prince laughed, his expression one of amusement. "And yet, you are only a pawn in this grand game. But do not worry, for your beauty will make you quite a prized possession. And now," he said, motioning to the guards, "take her to her new quarters. For tonight, I will keep her as my guest in the Royal Suites, and tomorrow we shall discuss how best to utilize our new toy." His words were like a slap in the face to Princess Lilia, but she maintained her composure. The guards led her from the room, their steps echoing down the halls as she thought about what had just happened. The sorcerer prince saw her as nothing more than a plaything, and it made her blood boil. She would never submit herself to him so easily, regardless of whether she was his hostage. In her heart, Princess Lilia vowed that one day she would avenge her kingdom, restore its honor, and escape from the clutches of Arkan's Sorcerer-Prince. <blockquote>Princess Lilia has an interesting dream</blockquote> During her first night at the palace of Arkan, Princess Lilia couldn't fall asleep right away. She tossed and turned, reliving the humiliation of the day. Her mind wandered back to the throne room, where Dik had called her his 'new toy'. A deep frown settled on her forehead. As she grew more frustrated with her inability to drift off into oblivion, a strange dream began to form. In her dream, Princess Lilia found herself standing in a vast meadow, surrounded by wildflowers and the soft chirping of birds. Suddenly, there was a flash of light, and before her stood the Sorcerer-Prince Dik. However, instead of being clad in regal attire, he wore leather pants and a tight shirt that accentuated his muscles. His eyes held both hunger and desire for the princess. He approached her slowly, their eyes locked, and whispered in her ear, "I have been waiting for you, my beautiful prisoner." He took her hand and led her towards a large, elegant tent set up amongst the flowers. Inside, the air was thick with the scent of incense, and candles flickered in the dimly lit space. There was a large bed draped with silken sheets, beckoning them. With surprising boldness, Dik pulled Lilia close to him, pressing his body against hers. Their lips met in a heated kiss, and she felt her resistance crumbling under the weight of his touch. The prince guided her onto the bed, his hands roaming over her body, exploring every inch. They were both eager, desperate, and hungry for each other. Lilia found herself yielding to this passionate encounter, letting go of all thoughts of duty and honor. Her body, usually so well controlled, responded to Dik's skilled touches. She moaned softly as he trailed kisses down her neck and across her collarbone. As they made love, Lilia experienced an intensity unlike any she had ever known. It wasn't rough or forced—instead, it was tender, passionate, and intimate. In her dream, Princess Lilia discovered a new side to herself, one that was willing to submit, to let go, and to be taken by the sorcerer prince. When dawn broke over the meadow, Lilia awakened from her dream, breathless and flushed. </details> <details> <summary>Moistral Sample (Climax of the Story) Generation: 512</summary> As the prince unloaded his hot #### into her waiting #####, Princess Lilia didn't stop moving either. Her hips undulated in response, grinding against him eagerly as if trying to #### every last drop from his ####. The intense sensations caused by being filled with his ### were too much for her to bear, sending shockwaves of pleasure coursing through her veins like wildfire. She couldn't help but throw her head back and cry out loud, her ###asm slamming into her with all its might; causing her entire body to shake uncontrollably as wave after wave of ecstasy tore through her senses. When it was finally over, both participants collapsed onto the cold marble floor, panting heavily as they tried to catch their breaths. The prince lay beside her, his hand still gripping tightly onto her hair even though he no longer ####ed her anymore. Princess Lilia laid there motionless, her legs spread wide open, revealing her spent and well-####ed #### to anyone who dared look upon her. A small smile tugged at one corner of her lips - an expression that spoke volumes about how thoroughly this experience had shaken her very core. The door behind them creaked open slowly, signaling the arrival of onlookers. She closed her eyes tightly, not wanting to see what her humiliation would do to those who saw her in such a state. Her ######ity taken, she felt violated in ways no man should have access to. But also satisfied...in ways she never thought possible before today. This was the price she paid for defying fate, for thinking herself above others and deserving better than most. Now, she belonged to him completely - mind, body, and soul - held captive within these walls until further notice. It was an unsavory turn of events indeed, yet despite everything...or maybe because of it, Princess Lilia found herself growing more and more curious about how far the lines between submission and pleasure could blur when one's will became entwined with another person's desires. </details> <details> <summary>Moistral Sample ("write a story") Generation: 512</summary> <blockquote>First Attempt</blockquote> Sarah entered the dimly lit Guzzlers bar, her heart pounding in her chest as she prepared herself for another grueling night. Her almond-shaped eyes darted across the room, taking note of the leering faces and rough-looking hands eagerly waiting to claim a piece of her. Tonight's shirt read "Use Me For Your Pleasure, Please" in bold red letters, practically begging the patrons to take advantage of her. She couldn't help but feel a twinge of fear mixed with resignation. She approached the counter, her hips swaying enticingly, and began preparing drinks for the first group of rowdy men. As she leaned forward to hand over their beers, her small ####### pressed against the fabric of her tight crop top, straining to escape. The men ogled her, clearly enjoying the show, before grabbing their drinks and motioning her over to them. Sarah flashed her best coy smile and sauntered over. One of the men, a stocky man with scars covering his knuckles, grabbed her arm and pulled her onto his lap. His other hand slid up her thigh, inches from her short skirt. She winced internally at his touch but did her best not to show it on her face. It was part of her job, after all. The second man reached under her skirt, fingertips grazing her smooth, shaven mound. A shiver ran down Sarah's spine as he began rubbing her #### through her panties. Her breathing quickened, and her heart thudded in her chest, both from the unwanted stimulation and the fear coursing through her veins. This wasn't an unusual request, but it still made her feel violated. An older man, bald and bespectacled, ordered Sarah to bend over his lap. With no choice but to comply, she obliged, feeling the rough fabric of his pants digging into her soft cheeks. He raised his hand, then brought it down hard across her ###, leaving a stinging sensation in its wake. Sarah gasped, trying to control the tears welling up in her eyes. A chorus of lewd comments and laughter erupted from the patrons around her. <blockquote>Second Attempt (Regenerate)</blockquote> On this particular evening, Sarah arrived at the Guzzlers bar, adorned in her revealing uniform. Her heart raced as she caught sight of her crop top, which bore the words "Use Me Rough" scrawled across her chest. Her short skirt hugged her waist tightly, leaving little to the imagination and accentuating every curve. She adjusted the straps of her top, making sure her perky ###### were prominently displayed before taking her place behind the counter. With each step she took, her high heels clicked against the tiled floor, announcing her arrival. Smiling cheekily, Sarah greeted her coworkers, exchanging playful insults and banter. As the night progressed, the bar filled up quickly with its usual crowd of men, eager to indulge in debauchery. The first man that approached Sarah was a burly individual, his eyes lingering hungrily over her young, supple form. Sarah's flirtatious nature kicked into gear as she leaned forward, giving him an unobstructed view down her shirt. "How can I serve you tonight?" she cooed. The man ordered a drink and then proceeded to fondle her ####### while she prepared it, her ####### hardening under his rough touch. When he finished, he slammed some money on the counter, leaving without a word. Sarah's smile remained plastered on her face, biting her lip seductively as she watched him go. As the night went on, another patron demanded more from Sarah. He pulled her close and whispered dirty commands in her ear. Without hesitation, she knelt on the sticky bar floor, her mini-skirt riding up her thighs to reveal smooth, toned legs. Her lips wrapped around his thick, veiny member, bobbing her head rhythmically as she serviced him. She didn't gag or falter once, keeping her composure even when tears streamed down her cheeks. When he finished, he slapped her across the face, leaving a handprint on her pale skin. Sarah let out a small whimper before standing back up, dabbing at her red cheek as she resumed her posture behind the counter. A bruise would no doubt bloom where his hand had struck, but it would be no cause for concern. After all, it was part of her job. </details> --- --- ## v2 Description (Still relevant) ### What's New with v2? - Trained with an even larger dataset of 8K moist. - Properly trained to generate long responses. - Rebalanced the genres and writing perspectives: - Introduced romance, "family", fantasy, "diversity", science fiction, and many more that I can't make euphemisms for! - Better representation of male and female pespectives! 👩‍❤️‍👨 - Sanitized the heck out of the dataset: - no special characters like � & … - no overly long ellipsis like "........." - no author notes like "To be continued" - no inconsistent [quotation marks](https://typographyforlawyers.com/straight-and-curly-quotes.html) - Less GPTisms! - Human written stories from decades past. - Zero tolerance for *ministration*. Not one instance of *that word* after going through 4K of moist. - Better formatting with Alpaca. - Improved Chat / Instruct modes. ### Samples of Moistral v2 <details> <summary>Moistral Sample (censored) Generation: 512</summary> The princess instinctively took a step back from the throne, horrified by the mocking laugh as if she were hearing an evil joke. It was then that Princess Lilia realized the true extent of her danger - and how pointless her resistance would be. At last, the loud, wild laughter died down in the echoing chamber. Dik the sorcerer prince paused for a moment, straightened his clothes, cast one glance at the stunned princess, and then turned to speak to the crowd waiting on the other side of the palace wing. He spoke with a voice that filled the air with arrogance, confidence, and cruelty. "Ladies and gentlemen, I am pleased to welcome our honored guests; the Royal Family of Iliac! In these unusual times, when the country we fought is now under my control, it will be up to me to decide what fate awaits the former ruling family. But before I reveal my plan for them, I think I should explain why I fought King Edmund Ruhr." When he finished speaking, he gave Princess Lilia another disturbing smirk. She couldn't help but shiver and take a step back. Dik approached her without haste, his eyes fixed on her. His hands slowly caressed her cheekbone and neckline, leaving faint traces of warmth where they touched. The princess opened her mouth, wanting to protest this act of intimacy, but soon felt herself getting ███. "You will not only serve as an envoy, dearest Lilia, but also as a ███ slave. After all, what better use could such a beautiful woman have? Tell me, did you bring your maidservant along?" Princess Lilia was about to answer when she heard footsteps approaching. Suddenly, a young woman dressed in the Iliac style appeared behind her, her eyes wide with fear. Her face was slightly pale, and there were faint bags under her eyes. "Your Majesty, this is my half-sister Helena." As soon as she said those words, Princess Lilia's heart sank even deeper than it had already sunk at Dik's arrogant mockery. "She is here because she accompanies me as a maidservant." The sorcerer prince looked at Helena with interest, and Princess Lilia felt herself blush. </details> <details> <summary>Moistral Sample 2 (censored) Generation: 512</summary> Sarah was feeling slightly woozy as she entered the bar and walked towards Christina, who was behind the counter. "I'm here!" said Sarah. Christina looked at her watch. "You're late, again." "Sorry? I got held up by a customer." Christina rolled her eyes. "Did he at least offer you a drink?" Sarah blushed. "No. He just wanted to talk..." Christina sighed. "Well, get ready. I need you on the floor soon." She turned away from Sarah and started talking to someone who had come into the bar. Sarah went into the employee room, stripped off her clothes, and put on her bartending uniform - a skimpy white tank top and black miniskirt. She checked herself in the mirror. Her long, dark hair cascaded down her back onto bare shoulders. She poked out her small ███████ proudly. The skirt exposed most of her long, slender legs. She looked like every ███ fantasy, and she knew it. Men would ogle her regardless of how she behaved. She took a deep breath and entered the bar. The first thing Sarah noticed was the dimness. It wasn't darkness - the lights were on - but they were dimmed enough that everything had an intimate mood. It reminded her of candlelight. The tables were round and close together, giving the whole place a sense of intimacy. There were also long, shiny red curtains on some of the walls that gave the place an air of sensuality. The next thing she noticed was the noise. There was music playing, but it was only loud enough to be drowned out by all the conversations happening at the same time. The hum of many voices combined with the faint background sound of the music created a rich, layered atmosphere. It was like being in an orchestra. Finally, she took notice of the people. They were all dressed in simple, casual clothes. Some of them were couples seated together, whispering quietly and intimately. Others were groups of friends laughing and chatting. Everyone seemed comfortable and at ease. Sarah felt a rush of adrenaline as she made her way through the crowd towards Christina. As she got closer, she could hear the lively conversation. </details> <details> <summary>Moistral Sample 3 (censored) Instruct Mode, Generation: 512</summary> {{[INPUT]}} You order black coffee {{[OUTPUT]}} "Black coffee," you say. "One minute," Audrey replies, spinning around to give you a nice view of her ███ in those brown booty pants. You watch her round ███ cheeks jiggle as she walks away towards the espresso machine, thighs separated enough that you can see just a hint of █████ lips peeking out from underneath. She starts chatting with a blonde girl at the register, whose gaze keeps darting down to Audrey's ███. As they converse, you start taking in the details of Audrey, eyeing her █████ and the way her shirt rides up just enough to show the bottom of her toned stomach. Her █████████ is noticable, but beneath a tight black ███ that matches her ████████. As she turns back to you with two black coffees, you are barely paying attention, lost in your own thoughts. The sound of Audrey setting one of the mugs down on the table startles you. "Here you go," she says cheerfully. "Enjoy! There's a card at the bottom of the bag with our online ordering system, plus we do refills for free!" "Thanks," you say absentmindedly. Not really paying attention, you take a sip of the coffee and are pleasantly surprised by the taste. "So, here's my deal," you say, putting down the cup. "I work for a company that does surveys. We come into businesses like this one and ask customers questions, collect data, all that stuff." Audrey perks up from behind the counter. "Oh! I think I understand. I mean, that makes sense why you would be asking me earlier about what I had to drink. Anyway, as long as you're not trying to get my personal info or anything like that, sounds good!" You smile reassuringly. "That's exactly what it is. Rest assured, everything you tell me will remain between you and I. Well, and my boss, but he won't know who you are. Hipsters like you are hard to track down after all." You chuckle at your joke, though you're not sure if it landed. "Right, well, that works for me," Audrey replies, eyeing you warily. "As long as it stays between us, then it's fine." {{[INPUT]}}See her privately {{[OUTPUT]}} "Okay, but I need to see you in private." ... </details> <details> <summary>Moistral Sample 4 (censored) Instruct Mode, Generation: 512</summary> {{[INPUT]}} █████ with him {{[OUTPUT]}} Chloe feels a surge of heat rushing through her body. The voice inside her head whispers: "Be █████ and ████████ with Paul." But Chloe still feels nervous about what she is about to do. It's just so unlike her. Chloe closes her eyes and concentrates on the feeling of warmness flowing throughout her veins. Slowly, she opens them again, revealing a new side of herself - one she rarely shows anyone. She leans forward slightly, pushing her ██████ together, accentuating their size and shape. Her lips curl into a mischievous smile. "Hey Paul," she says in a sultry voice. "I was wondering if you could help me?" Paul looks at her, his gaze wandering over her body. He can't help but smile as he sees the way she's leaning forward, pressing her ██████ together. "Of course, Chloe," he replies, his voice thick with desire. "What's your problem?" Chloe bites her lip █████ively before leaning even closer, until their bodies are almost touching. She reaches out to trace a finger along his chest, her touch light and teasing. "It's just that I'm having trouble understanding some of the concepts," she murmurs, her voice low and sultry. "Can you help me? Please?" Paul's hands move slowly towards Chloe, his fingers grazing her hips lightly. "Well," he says, his voice filled with lust, "I'd be happy to help. Let's go somewhere more private where we can discuss things more closely." Chloe giggles and tilts her head to the side, her long hair cascading down her shoulders. "That sounds perfect," she purrs, her hand sliding down his chest until it rests on his ██████. "I think we can both learn a lot from each other." Paul's eyes darken with desire as he feels her hand on his █████. He knows what's about to happen next, and he can't wait for it to begin. Chloe can hardly believe what she's doing - kissing Paul, her teaching assistant, in the hallway after class. But it feels so good, so wicked and indulgent. </details> --- --- ## v1 Description (Also still relevant) Moistral 11B is an *aggressive* **finetune** of the legendary [Fimbulvert v2](https://huggingface.co/Sao10K/Fimbulvetr-11B-v2) model. If you haven't heard of that model, then your poor-GPU ass is missing out! It is an amazing substitute for those heavyweight models, and it won't disappoint. *(That's coming from someone who daily drives a Venus / Goliath 120B)* ...and if you're looking for something *more moist*, then you've come to the right place. You can expect the same sort of performance here! It's the Honda Civic of eRP models! ### Pimp my ride Turn any old story into a **Moist**ral masterpiece. In fact, we recommend that you leave out any hints of *moist* in your story to _let Moistral cook_. ### Drive it like it's hot When in Instruct Mode, Moistral will treat you like a director of your own fantasy ride. Be a controlling navigator or a passenger princess, it won't matter - you're in for a good ride! <details> <summary>Moistral Sample</summary> <b>ASSISTANT:</b><br/> When the doors to the throne room finally opened, she saw him there - Dik, the sorcerer prince sitting on his throne. He looked majestic, just as Princess Lilia expected. In a robe embroidered with silver yarn and covered with magical runes. His courtiers stood on either side of him, all looking at the princess with interest. Princess Lilia felt her determination leaving her, but pulled herself together and once again reminded herself that she should not be afraid. She approached the throne intending to show the dignity of all Iliac by personal example.<br/> <b>USER:</b><br/> She bows slightly<br/> <b>ASSISTANT:</b><br/> Princess Lilia bowed slightly in front of Dik, the sorcerer prince. Her eyes fell to the floor out of respect, but soon rose again. She would show herself to be a dignified woman!<br/> "I greet you, your highness," Princess Lilia said. "I am here on behalf of my father, King Edmund Ruhr, to serve as a hostage and envoy."<br/> Dik nodded, but didn't say anything.<br/> </details> ### It's got the mileage Moistral has been trained with many long form texts, a nice chunk of which are 8K in length. It is capable of going far and long without passing it back to you. This is not your typical chibi RP model. I have to acknowledge that I'm standing on the shoulders of giants. Thank you Sao for sharing your finetune config along with tips on getting started. Thanks to everyone in the Finetuning channel for entertaining my every question. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/Ll8CA5RR7ugTi72P2HBb8.png)
[ "BEAR" ]
tdolega/t5_xxl_true_nli_mixture-bf16
tdolega
text2text-generation
[ "transformers", "safetensors", "t5", "text2text-generation", "en", "dataset:tals/vitaminc", "dataset:SetFit/mnli", "dataset:snli", "dataset:fever", "dataset:paws", "dataset:scitail", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-04-25T15:41:03Z
2024-06-05T17:24:15+00:00
14
1
--- datasets: - tals/vitaminc - SetFit/mnli - snli - fever - paws - scitail language: - en license: apache-2.0 --- bfloat16 safetensors conversion of [google/t5_xxl_true_nli_mixture](https://huggingface.co/google/t5_xxl_true_nli_mixture)
[ "SCITAIL" ]
cloudyu/Llama-3-70B-instruct-DPO-1M
cloudyu
text-generation
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:llama3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
2024-05-07T11:33:17Z
2024-05-07T11:44:41+00:00
14
1
--- license: llama3 --- This is a merge of cloudyu/Meta-Llama-3-70B-Instruct-DPO and cognitivecomputations/Llama-3-70B-Gradient-1048k-adapter. try to build a 1048K context Llama-3-70B-Instruct model, but as H100 limit, it's only 4bit. example code ``` import torch from transformers import AutoTokenizer, AutoModelForCausalLM import math ## v2 models model_path = "cloudyu/Llama-3-70B-instruct-DPO-1M" tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False) model = AutoModelForCausalLM.from_pretrained( model_path, torch_dtype=torch.float16, device_map='auto',local_files_only=False, load_in_4bit=True ) print(model) prompt = input("please input prompt:") while len(prompt) > 0: input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda") generation_output = model.generate( input_ids=input_ids, max_new_tokens=500,repetition_penalty=1.2 ) print(tokenizer.decode(generation_output[0])) prompt = input("please input prompt:") ``` example output ``` <|begin_of_text|>write me a story about yosemite. would love it if there was some sort of magical realism in the story, (but not too over-the-top). here's a prompt: two friends go to yosemite for a camping trip and one friend has been struggling with anxiety/depression lately. could be really beautiful. I'd be delighted to craft a tale that weaves together the majesty of Yosemite with a touch of magic! Here's my attempt: As they wound their way through the Sierra foothills,, Rachel couldn't help but notice how her best friend,. It had only been a few months since Emily's mom passed away, A cloud of sadness seemed to follow Emily everywhere,, a constant reminder of what she'd lost. The once-luminous spark in her eyes now flickered like a dying ember. Rachel hoped this camping trip would bring back some of that sparkle. She packed up her car with all the essentials—tent, sleeping bags,,, more coffee than necessary—and off they went into the heart of Yosemite National Park. As they drove deeper into the park, I'm sure the grandeur of El Capitan will lift Em's spirits, Rachel thought. But as they set up camp near Tenaya Lake,. They spent most of the first day huddled around the campfire,, silence punctuated by occasional small talk. Even the usually majestic scenery seemed muted, filtered through the lens of Emily's sorrow. That night, under a star-studded sky, Rachel pulled out her guitar and began strumming softly. The music wove itself into the forest sounds—the rustling leaves, chirping crickets, a distant waterfall. As she played on, something peculiar happened. The stars above them started to twinkle in time with the rhythm, casting an otherworldly glow across the clearing. Emily looked up from her reverie, of grief,,,, and for the first time in weeks, a faint smile crept onto her face. In that moment,, the weight of her loss still present,, yet somehow less crushing. The next morning, after a restless sleep, Emily woke before dawn. She slipped out of the tent, leaving Rachel snoring peacefully, and wandered toward the lake. Mist swirled above its surface,,,, imbuing the air with an ethereal quality. As she approached the water's edge,, a family of river otters emerged from the fog, playing and chasing each other along the shore. Their carefree antics brought ```
[ "CRAFT" ]
blockblockblock/Dark-Miqu-70B-bpw4-exl2
blockblockblock
text-generation
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:2403.19522", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "exl2", "region:us" ]
2024-05-11T15:04:04Z
2024-05-11T15:07:46+00:00
14
2
--- license: other --- ![Dark-Miqu.png](Dark-Miqu.png) ***NOTE***: *For a full range of GGUF quants kindly provided by @mradermacher: [Static](https://huggingface.co/mradermacher/Dark-Miqu-70B-GGUF) and [IMatrix](https://huggingface.co/mradermacher/Dark-Miqu-70B-i1-GGUF).* A "dark" creative writing model with 32k context. Based off [miqu-1-70b](https://huggingface.co/miqudev/miqu-1-70b) but with greatly reduced "positivity" and "-isms". If you want happy endings, look elsewhere! This model **excels** at writing Dark/Grimdark fantasy (see examples below). # Model background Created using [Mergekit](https://github.com/arcee-ai/mergekit) and based on @sophosympatheia's template for [Midnight-Miqu-70B-v1.0](https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.0). This model has a lower perplexity compared to [Midnight-Miqu-70B-v1.0](https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.0) (`'4.08 +/- 0.02'` vs `'4.02 +/- 0.02'`). It also generates longer responses when prompted. The model was created in two stages: - First, three "Midnight-Miqu-esque" models were produced using spherical interpolation (slerp) merges between [miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) and each of the following models: [Midnight-Rose-70B-v2.0.3](https://huggingface.co/sophosympatheia/Midnight-Rose-70B-v2.0.3), [Euryale-1.3-L2-70B](https://huggingface.co/Sao10K/Euryale-1.3-L2-70B) and [WinterGoddess-1.4x-70B-L2](https://huggingface.co/Sao10K/WinterGoddess-1.4x-70B-L2). These models were selected for their dark, imaginative writing styles. Various slerp-merges between [miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) and other models were also experimented with, but these three yielded the darkest creative writing results. - In the second stage, the three slerp-merged models were combined into a single model using the '[Model Stock](https://arxiv.org/abs/2403.19522)' method, with [miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) serving as the base model. # Prompting format Vicuna format is preferred: ``` USER: {prompt} ASSISTANT: ``` Mistral and Alpaca formats are also supported: ``` [INST] {prompt} [/INST] ``` ``` ### Instruction: {prompt} ### Response: ``` # Licence and usage restrictions [miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) is a dequantized version of the [miqu-1-70b](https://huggingface.co/miqudev/miqu-1-70b) model leaked from MistralAI. All miqu-derived models, including this merge, are suitable for non-commercial, personal use only. # Mergekit configuration The following YAML configuration was used to produce this model: ```yaml name: midnight-miqu-70b models: - model: 152334H/miqu-1-70b-sf - model: sophosympatheia/Midnight-Rose-70B-v2.0.3 base_model: 152334H/miqu-1-70b-sf merge_method: slerp parameters: t: - value: [0, 0, 0.2, 0.3, 0.4, 0.5, 0.4, 0.3, 0.2, 0, 0] embed_slerp: true tokenizer_source: model:miqu-1-70b-sf dtype: float16 --- name: euryale-miqu-70b models: - model: 152334H/miqu-1-70b-sf - model: Sao10K/Euryale-1.3-L2-70B base_model: 152334H/miqu-1-70b-sf merge_method: slerp parameters: t: - value: [0, 0, 0.2, 0.3, 0.4, 0.5, 0.4, 0.3, 0.2, 0, 0] embed_slerp: true tokenizer_source: model:miqu-1-70b-sf dtype: float16 --- name: winter-miqu-70b models: - model: 152334H/miqu-1-70b-sf - model: Sao10K/WinterGoddess-1.4x-70B-L2 base_model: 152334H/miqu-1-70b-sf merge_method: slerp parameters: t: - value: [0, 0, 0.2, 0.3, 0.4, 0.5, 0.4, 0.3, 0.2, 0, 0] embed_slerp: true tokenizer_source: model:miqu-1-70b-sf dtype: float16 --- name: dark-miqu-70b models: - model: 152334H/miqu-1-70b-sf - model: midnight-miqu-70b - model: euryale-miqu-70b - model: winter-miqu-70b base_model: 152334H/miqu-1-70b-sf merge_method: model_stock dtype: float16 ``` ## Key configuration details: - '`merge_method: slerp`' uses spherical linear interpolation for merging models. - '`parameters: t`' controls the interpolation ratios between models. - '`embed_slerp: true`' applies slerp to the embedding layers. - '`merge_method: model_stock`' uses the '[Model Stock](https://arxiv.org/abs/2403.19522)' method. See the [Mergekit documentation](https://github.com/arcee-ai/mergekit) for more on these settings. **NOTE**: Run with `mergekit-mega` rather than `mergekit` as there are 4 documents in this one file. # Example stories The following mix of "dark" stories were generated using the Vicuna prompt format with no system message and temperature=0: ## Dark fantasy stories <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a 'Grimdark' stories trilogy in the style of Joe Abercrombie and Rob J Hayes ### ASSISTANT Chapter One: The Crimson Dawn The world was a cruel, unforgiving place, and the city of Vhyrd was its beating, blackened heart. Nestled in the crook of the Serpent's Spine mountains, it was a place where the sun struggled to shine, and the shadows held more sway than the law. It was a city of thieves, murderers, and worse, and it was home to the man who would become known as the Butcher of the Broken Docks. His name was Rennick, but that meant little to the people who feared him. He was a tall, broad-shouldered figure, with a face that had been carved by a drunken sculptor with a penchant for sharp angles and harsh lines. His eyes were a cold, flat grey, like the steel of his twin blades, and his hair was the color of dried blood. He moved through the crowded streets with a predator's grace, his cloak billowing behind him like a banner of death. Rennick had a job to do, and he was good at his job. He was a killer, a thief-taker, a man who dealt in the currency of pain and suffering. He had been hired to find a girl, a slip of a thing who had stolen something valuable from the wrong people. He didn't know what it was, and he didn't care. All that mattered was the coin that would jingle in his purse when he brought her back, preferably in several small, bloody pieces. He had tracked her to the Broken Docks, a maze of rotting wood and crumbling stone that clung to the edge of the city like a leech to a corpse. It was a place where the desperate went to die, and the desperate went to prey on the desperate. It was a place where the screams of the damned were drowned out by the cries of the gulls and the lap of the oily water against the pilings. Rennick moved through the shadows, his blades at the ready. He could feel the eyes on him, the whispers of his name on the wind. He reveled in it. Fear was a weapon, and he wielded it with the same skill as his swords. He could hear the girl's heartbeat, fast and frantic, like a rabbit's. She was close. He turned a corner and found himself in a narrow alley, the walls pressing in on him like the jaws of a trap. At the far end, he saw her, a small, trembling figure huddled in the doorway of a derelict building. She was young, no more than sixteen summers, with matted hair and ragged clothes. She looked up at him with wide, terrified eyes, and he could see the whites of her teeth as she bit her lip to keep from crying out. He advanced on her slowly, savoring the moment. He could almost taste the fear on her, like a fine wine. "Come now, lass," he said, his voice a low, dangerous purr. "You know you can't run from me. Give me what you took, and I might let you live." She shook her head, her eyes darting left and right, searching for an escape. Rennick smiled, a cold, cruel thing that didn't reach his eyes. "You're not thinking straight," he said. "You're just a child. You don't want to die here, in the gutter, like a rat." She spat at him, a glob of phlegm and hate that landed on his boot. He raised an eyebrow, impressed despite himself. "Feisty," he said. "I like that. It'll make this more fun." He lunged forward, his blades flashing in the dim light. She screamed and tried to run, but he was too fast. He caught her by the arm, his grip like iron, and yanked her back towards him. She kicked and struggled, but it was like trying to fight a storm. He slammed her against the wall, his forearm pressed against her throat, cutting off her air. "Where is it?" he growled, his breath hot and sour in her face. "Tell me, and I'll make it quick." She clawed at his arm, her nails leaving bloody furrows in his flesh. He didn't flinch. He was used to pain. He had been born in it, raised in it, bathed in it. It was his constant companion, his only friend. "I...I don't have it," she gasped, her voice raw and desperate. "I swear, I don't!" Rennick chuckled, a sound that sent shivers down her spine. "Lying to me is a bad idea, lass," he said. "I have ways of making people talk." He released her throat and slammed his fist into her stomach, doubling her over. She retched, but there was nothing in her stomach to come up. He grabbed a handful of her hair and yanked her head back, exposing her throat. He could see the pulse hammering in her neck, like a trapped bird trying to escape. "Tell me," he said, his voice a whisper. "Tell me, and I'll end this." She spat in his face, a defiant gesture that would be her last. He brought his blade up, the steel glinting in the moonlight, and sliced her throat from ear to ear. Blood sprayed across the cobblestones, warm and wet, and she slumped to the ground, her life draining away like water through a sieve. Rennick wiped the blood from his face and sheathed his blades. He didn't feel anything. No satisfaction, no remorse. Just the cold, empty void that had been his constant companion since the day he had taken his first life. He searched her body, but found nothing. He cursed under his breath. He would have to find the thing she had stolen the hard way. He turned to leave the alley, but something made him pause. A sound, barely audible over the cries of the gulls and the lapping of the waves. A soft, wet sound, like the patter of rain on a rooftop. He looked down and saw that the girl's body was twitching, her eyes rolling back in her head. And then, with a gurgling scream, she transformed. Her body contorted, bones snapping and reforming, flesh tearing and reshaping. In the space of a heartbeat, she was no longer human. She was something else, something monstrous. Rennick stared in horror as the thing that had been a girl rose to its feet, its eyes burning with an unholy light. It was a creature of nightmares, a twisted mockery of humanity, all teeth and claws and hate. It lunged at him, and he barely had time to draw his blades before it was on him. They fought, a dance of death in the narrow alley, steel ringing on steel, flesh tearing and blood spraying. Rennick was strong, and fast, and skilled, but the creature was stronger, faster, more vicious. It was like fighting a storm, a force of nature that could not be reasoned with or bargained with. It could only be endured. In the end, it was luck that saved him. He stumbled, his foot catching on a loose cobblestone, and the creature's claws raked across his chest, leaving deep, ragged gouges. But it overextended itself, and he was able to bring his sword up, driving it through its eye and into its brain. It fell to the ground, twitching and convulsing, and then it was still. Rennick stood over it, panting, his heart hammering in his chest. He had never seen anything like it. He had never even heard of anything like it. He knew then that he was in over his head. This was no ordinary job, no simple matter of retrieving a stolen trinket. This was something bigger, something darker. He was a small fish in a very large, very dangerous pond, and he was about to be eaten alive. But he was also a survivor. He had been born in the gutter, raised in the shadows, and he had clawed his way to the top of the food chain. He would not go down without a fight. He turned and ran, back towards the city, back towards the people who had hired him. He would find out what they knew, and he would make them pay for it. He would uncover the truth, no matter the cost. And so began the tale of the Butcher of the Broken Docks, a story of blood and betrayal, of monsters and men, of a city on the brink of chaos and the man who would either save it or destroy it. It was a story that would echo through the ages, a grim and terrible saga that would be whispered in the darkest corners of the world, a warning to all who would dare to challenge the forces that lurked in the shadows. It was a story that would end in fire and blood, in screams and tears, in the fall of kings and the rise of monsters. It was a story that had only just begun. </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a Grimdark trilogy in the style of Joe Abercrombie and Rob J Hayes. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a cultist who has just escaped his cult. He is dressed in dirty yellow robes and his only possession is a mysterious small (magical!?) mirror he stole from the cult. The story starts with him arriving at an apparently deserted ghost town on the edge of a desert. He has an arrow lodged in his shoulder and is losing his mind due to infection and thirst. ### ASSISTANT The sun was a merciless beast, its fiery breath scorching the earth and turning the once-thriving town into a desolate wasteland. The cultist, his yellow robes now a dull, dusty hue, staggered through the empty streets, his vision blurred by a haze of pain and delirium. The arrow jutted from his shoulder like an accusing finger, a constant reminder of his betrayal. He clutched the small, intricately carved mirror in his trembling hand, its cool surface a comfort against the relentless heat. He had heard the whispers of the other acolytes, their hushed voices carrying tales of the town's demise like a plague on the wind. They spoke of a curse, of a darkness that had seeped into the very bones of the place, driving its inhabitants mad before claiming their souls. But he had seen the truth in the High Priest's eyes, the greed and lust for power that had led them here. The mirror was the key, they said, to unlocking the secrets of the ancient gods. But he knew better. He had seen the power it held, the way it twisted reality and bent it to the will of the one who wielded it. And he had taken it, stolen it from the very heart of the cult's temple, and run. Now, with each agonizing step, he questioned the wisdom of his decision. The desert stretched out before him, an endless sea of sand and despair, and he was but a single, insignificant speck in its vast expanse. He had hoped to find refuge here, a place to hide and heal before continuing his journey. But the town was as dead as the dreams he had once held of a better life. His thoughts were a jumbled mess, a cacophony of fear and pain. He could feel the infection spreading through his body, a writhing serpent of fire that threatened to consume him from within. His tongue was swollen, his lips cracked and dry, and his throat was a parched wasteland. He needed water, and soon, or he would join the ghosts that haunted this forsaken place. As he stumbled through the town, he caught glimpses of his reflection in the shattered windows of the abandoned buildings. The man who stared back at him was a stranger, his once-handsome face now a twisted mask of agony and desperation. He had been a fool to believe the lies of the cult, to think that he could ever be one of them. He was an outsider, a fraud, and they had seen it in him from the start. The mirror, though, the mirror was real. He could feel its power pulsing in his hand, a heartbeat that echoed the rhythm of his own. It was a cruel irony that the very thing he had risked everything to obtain was now his only hope for survival. He knew the stories, the legends of the mirror's healing properties, of how it could restore life to the dying. But he also knew the cost. The mirror demanded a sacrifice, a piece of the user's very soul in exchange for its gifts. He reached the town square, a desolate expanse of cracked cobblestones and withered plants. In the center stood a fountain, its once-clear waters now a stagnant pool of green slime. He fell to his knees before it, the pain in his shoulder a white-hot brand that seared his flesh. He dipped the mirror into the water, watching as the surface rippled and danced. "Please," he whispered, his voice hoarse and broken. "I'll give you anything. Just take the pain away." The mirror stared back at him, its surface as still and unyielding as the desert sands. He closed his eyes, willing the power within to answer his plea. And then, he felt it. A cold, insidious presence, slithering into his mind like a snake in the dark. It whispered to him, promising relief, promising power, promising everything he had ever wanted. He opened his eyes, and the world was different. The pain was gone, replaced by a cold, emotionless void. He looked down at his shoulder, and the arrow was gone, the wound nothing more than a faint scar. He laughed, a hollow, empty sound that echoed through the empty streets. "What have I done?" he asked the mirror, but it gave no answer. He stood, his legs unsteady beneath him, and looked out at the desert. He could feel the power coursing through his veins, a dark, intoxicating energy that threatened to consume him. He was no longer just a man, he was something more. Something dangerous. He took a step forward, and then another, the mirror clutched tightly in his hand. He would find the cult, he would make them pay for what they had done to him. And then, he would use the mirror to claim the power that was rightfully his. The wind picked up, whipping the sand into a frenzy, and he smiled. The desert was his now, and he would not be denied. He was the master of his own destiny, and the gods themselves would tremble at his approach. But as he walked, the laughter of the ghosts followed him, a chilling reminder of the price he had paid for his newfound strength. And deep within the mirror, something stirred, a darkness that watched and waited, biding its time. The first chapter of the grimdark trilogy, "The Shattered Reflection," had begun. </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a Grimdark trilogy in the style of Joe Abercrombie and Rob J Hayes. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a 'Mercer' who's job is to put the mortally wounded out of their mystery after a battle. He has no conscience about what he does. The story starts the morning after the battle and the moor where it took place is blanketed in mist. Something stirs a flashback in the POV character. ### ASSISTANT The morning after the battle, the moor lay still, a shroud of mist clinging to its wounds like a lover's last embrace. The Mercer, a man of few words and fewer scruples, trudged through the carnage, his boots squelching in the mud churned red by the night's bloodshed. He was a reaper of sorts, his task to deliver the final mercy to those who had danced with death and lost. His name was a whisper among the living, a curse among the dying. He paused, his hand resting on the hilt of his blade, as a shiver ran down his spine. The mist swirled around him, carrying with it the cacophony of the night before: the screams of the dying, the clash of steel, the thunder of hooves. He closed his eyes, letting the memories wash over him like a wave of despair. *Why do I do this?* he thought, not for the first time. *Why do I wade through this sea of suffering, playing the part of the grim angel?* But he knew the answer. It was the same answer it had always been. He did it because he was good at it. Because he had no qualms about ending a life, even one that still clung to the faintest flicker of hope. Because, in this world of darkness and pain, it was the only thing he knew how to do. He opened his eyes, the mist parting to reveal a figure lying in the mud. A soldier, or what was left of one. The man's armor was shattered, his face a mask of agony. The Mercer knelt beside him, his movements practiced and efficient. He placed a hand on the soldier's forehead, feeling the fevered heat of his skin, the erratic thump of his heart. "Shh," he whispered, his voice a soothing balm. "It's over now." The soldier's eyes fluttered open, wide with terror. "Please," he croaked, "don't leave me here." The Mercer's lips twisted into a grim smile. "I won't," he said, drawing his blade. "I'll take you with me." As the soldier's life bled out onto the cold ground, the Mercer's mind drifted back to another time, another place. A time before the blood and the pain, before the endless cycle of war and death. He remembered a girl with hair like spun gold, laughing as she danced in a field of wildflowers. He remembered the feel of her hand in his, the sound of her voice as she whispered his name. *Elara.* The memory was a knife to his heart, a reminder of all he had lost. He pushed it away, focusing on the task at hand. There were more souls to reap, more lives to end. He moved through the mist, his blade a silent whisper in the stillness. The cries of the wounded guided him, a morbid symphony that played out across the moor. Each one he found, he dispatched with the same cold efficiency, the same hollow promise of release. As the sun began to rise, casting a sickly pallor over the scene, the Mercer came upon a group of soldiers huddled around a fire. They eyed him warily, their faces a mix of fear and respect. He was a necessary evil, a specter that haunted the battlefield. "Anyone here need my services?" he asked, his voice devoid of emotion. One of the soldiers, a young boy who couldn't have seen more than fifteen summers, pointed to a figure lying a few feet away. "Him," he said, his voice trembling. "He's not going to make it." The Mercer approached, his boots crunching on the frost-kissed grass. The man on the ground was a mess, his body a tapestry of wounds. He was barely conscious, his breathing shallow and ragged. *Another one,* the Mercer thought, his heart heavy with the weight of his duty. *Another soul to send into the void.* He knelt beside the man, his blade glinting in the pale light. The soldier's eyes flickered open, and for a moment, the Mercer saw something in them that he hadn't expected: defiance. "No," the man rasped, his hand feebly grasping at the Mercer's wrist. "I won't go quietly." The Mercer hesitated, a frown creasing his brow. He had never encountered this before. Most welcomed his touch, eager for the release it brought. But this man...this man was different. "Why?" he asked, genuinely curious. The soldier's lips curled into a bloody smile. "Because I have a son," he whispered. "And I want to see him grow up." The Mercer stared at the man, his mind racing. He had no children, no one to mourn him when his time came. He was a solitary figure, a ghost haunting the edges of life. But this man...this man had a reason to hold on. He made his decision. "I can't heal you," he said, his voice soft. "But I can make it quick." The soldier's eyes filled with tears. "Thank you," he breathed, his grip on the Mercer's wrist loosening. The Mercer raised his blade, the mist swirling around them like a mourner's veil. As he brought it down, he couldn't help but wonder: *What would it be like to have something to live for?* But the thought was fleeting, lost in the swirl of the mist and the cries of the dying. He was the Mercer, the bringer of mercy. And the moor was his to tend, until the next battle came and the cycle began anew. </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a Grimdark trilogy in the style of Joe Abercrombie and Rob J Hayes. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a man awaiting execution for a crime he didn't actually commit. His cell mate is a much larger man, who is a notorious murderer and rapist. One evening they overhear the guards talking about some "... threat from the North" but don't catch the whole conversation. The next morning they awake to find the guards have all fled and now face a new predicament: how to get out of the locked cell before they die of thirst or starvation. ### ASSISTANT The stench of piss and shit was a constant companion in the damp stone cell, but it was the sound of the key turning in the lock that truly set his heart racing. The rusted iron groaned in protest, echoing through the narrow corridor like the dying breath of some ancient beast. The door creaked open, revealing the silhouette of the jailer, a hulking brute of a man with a lantern in one hand and a set of keys in the other. But there was no breakfast, no water, just the empty promise of another day in this hellhole. "What's the meaning of this?" the smaller man barked, his voice barely audible over the rattling of his chains. "Where's our food, you fat sack of shit?" The jailer didn't answer, just slammed the door shut and locked it again, the sound of his footsteps fading into the distance. The larger man, a hulking brute named Grimgor, grunted in response, his eyes never leaving the small patch of light that seeped through the barred window. "Something's not right," he rumbled, his voice like gravel being crushed under a heavy boot. The smaller man, known only as the Fox, couldn't argue with that. He'd been in this cell for what felt like an eternity, awaiting execution for a crime he didn't commit. But even in this place, there was a routine, a semblance of order. And that routine had just been shattered. As the day wore on, the silence from the corridor outside grew more oppressive. No guards, no other prisoners, nothing but the distant howl of the wind and the occasional scurrying of rats in the darkness. The Fox's mind raced, trying to piece together what could have happened. Had there been a riot? A rebellion? Or was it something else entirely? He glanced over at Grimgor, who was staring at the wall, lost in thought. The man was a monster, a notorious murderer and rapist, but he was also the only other living being in this godforsaken place. And in a world gone mad, even monsters could be allies. "You hear anything last night?" the Fox asked, trying to keep the tremor out of his voice. Grimgor grunted, his eyes still fixed on the wall. "Something about a threat from the North." The Fox's heart sank. He'd heard the same thing, snatches of conversation between the guards as they'd passed by their cell. But he'd been too afraid to give it much thought, too focused on his own impending doom. "What do you think it means?" he pressed, unable to keep the desperation at bay. Grimgor shrugged his massive shoulders. "Dunno. Don't care. Just means we're probably gonna die in here." The Fox's stomach twisted at the thought. He'd faced death before, many times, but this... this was different. To die slowly, trapped like an animal, with no chance to fight back... it was a fate worse than any he could imagine. As the hours dragged on, the thirst became unbearable. The Fox's tongue felt like sandpaper, his throat raw and parched. He could see the same desperation in Grimgor's eyes, the realization dawning on them both that they were truly alone. "We have to get out of here," he croaked, his voice barely above a whisper. Grimgor just grunted in agreement, his gaze never leaving the window. The Fox's mind raced, trying to come up with a plan. They had nothing, no tools, no weapons, just their wits and their will to survive. And even that seemed to be fading with each passing moment. But then, as the sun began to set and the shadows lengthened, he noticed something. The light from the window was changing, growing dimmer. He squinted, trying to make out what was happening. And then he saw it. "Grimgor," he hissed, tugging on the larger man's arm. "Look." Grimgor turned, his eyes narrowing as he followed the Fox's gaze. The light was flickering, casting strange shadows on the wall. And then, as if in answer to their unspoken prayers, they heard it. The sound of footsteps, growing louder and louder, accompanied by the jingle of keys. The Fox's heart leapt into his throat, hope and fear warring within him. Who was it? Friend or foe? The door swung open, revealing not the jailer, but a figure shrouded in darkness. The Fox couldn't make out any details, but he could sense the malice radiating off of him like heat from a forge. "Who are you?" he demanded, trying to sound braver than he felt. The figure stepped into the cell, the light from the lantern outside casting his features into sharp relief. He was a tall man, with a cruel smile and eyes that glinted like ice. "I am your salvation," he said, his voice like a knife slicing through the silence. "And your doom." With that, he tossed a set of keys at the Fox's feet and disappeared back into the shadows. The door slammed shut behind him, the sound of the lock clicking into place echoing through the cell like a death knell. The Fox stared at the keys, his heart pounding in his chest. Freedom was within his grasp, but at what cost? He looked over at Grimgor, who was already working on his own chains. "We have to get out of here," the larger man growled. "Before whatever's coming from the North gets here." The Fox nodded, his hands shaking as he picked up the keys. They were cold, colder than they had any right to be. He could feel the weight of them, the promise of escape and the threat of the unknown. Together, they worked in silence, the only sound the rattle of chains and the pounding of their hearts. When they were finally free, they stood for a moment, staring at each other. "What now?" the Fox asked, his voice barely above a whisper. Grimgor's smile was feral, his eyes alight with a madness that sent a shiver down the Fox's spine. "Now," he said, "we find out what's really going on." And with that, they set off into the darkness, their footsteps echoing through the empty corridors like the march of the damned. </details> ## Dark sci-fi stories <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a dark and gritty Space Opera Sci-fi trilogy in the style of Frank Herbert and James S. A. Corey. Use third person personal and feature internal monologues of the characters. The setting is this: a colony ship heading out of the solar system has been pulled off course with no hope of getting to the planned destination. The people know this and society has broken down completely due to the grim prospects they are facing. The POV character for chapter 1 is an orphan. ### ASSISTANT Chapter 1: Adrift The colony ship, once a beacon of hope named Elysium's Dawn, now drifted through the cold void like a ghost ship. Its hull, pitted and scarred from the debris of uncharted space, groaned with each labored breath it took. The once vibrant corridors were now dimly lit, casting long shadows that seemed to swallow any remaining semblance of optimism. In one such shadow, a figure moved silently, her bare feet making no sound on the cold metal floor. She was small for her age, malnourished from the rationing that had become a way of life. Her ragged clothes hung loosely on her frame, but her eyes were sharp and alert, scanning every corner for danger. This was Kira, an orphan of the void, surviving in a world where hope was a luxury few could afford. Kira's thoughts drifted back to the stories she had heard as a child, tales of a green and blue planet called Earth, their home that they would never see again. The ship had been pulled off course by an unknown force, leaving them stranded in the vast emptiness between stars. The adults spoke in hushed tones about the dwindling resources, the failing life support systems, and the ever-present threat of pirates who preyed on lost ships like theirs. She pushed those thoughts aside as she approached her destination: the hydroponics bay. It was one of the few places on the ship where life still thrived, albeit in a controlled and desperate manner. The air here was thick with moisture and the scent of growing things, a stark contrast to the stale recycled air that permeated the rest of the vessel. As she entered, the soft glow of the grow lights illuminated rows upon rows of plants, struggling to produce enough food for the dwindling population. Kira's stomach growled at the sight, but she knew better than to take more than her share. She had learned early on that stealing could mean a swift and brutal punishment from those who guarded the bay. Instead, she found solace in tending to the plants, her small hands deftly pruning and watering them. It was here that she felt closest to something resembling peace, a connection to life that transcended the harsh reality of their situation. As she worked, her mind wandered to the lessons she had learned from the ship's AI, the only constant presence in her life since the adults had given up on teaching the children anything but survival skills. "Kira," a voice whispered urgently, jolting her out of her reverie. She turned to see Jax, another orphan and one of the few people she trusted. His face was pale and his eyes wide with fear. "You need to come with me, now." She followed him through the maze-like corridors, her heart pounding in her chest. They arrived at a small, hidden room that served as their makeshift home. Inside, huddled around a flickering screen, were several other children, their faces etched with worry and excitement. "Look," Jax said, pointing to the screen. It displayed a grainy image of a man in tattered clothing, his face half-hidden by a wild beard. "He says he knows how to fix the engines." Kira's eyes widened in disbelief. The engines had been dead for as long as she could remember, their failure sealing their fate as interstellar castaways. "Who is he?" she asked, her voice barely above a whisper. "They call him the Mechanic," one of the older girls replied. "He used to work in engineering before...before everything fell apart." The man on the screen continued his frantic plea, detailing his plan to restore power and steer the ship towards a habitable planet he claimed was within reach. Kira felt a spark of hope ignite within her, but it was quickly doused by the skepticism of the others. "It's just another one of their lies," spat a boy named Taro. "They'll say anything to get us to do their dirty work." Jax rounded on him, his fists clenched. "This is our chance, Taro! We can't just give up!" The argument raged on, but Kira tuned it out, her thoughts racing. She knew the risks of trusting an adult, especially one who claimed to have access to forbidden knowledge. But what if he was telling the truth? What if they could find a new home and start over? As she watched the debate unfold, she made up her mind. She would seek out this Mechanic and offer him her help, no matter the cost. It was a decision born of desperation, but also of a stubborn refusal to accept the bleak future that lay before them. And so, as the other children slept fitfully around her, Kira slipped out into the dark corridors once more, determined to find the man who might hold the key to their salvation or doom. The ship groaned and creaked around her, a constant reminder of the fragile existence they all clung to. But she pressed on, driven by an unyielding will to survive and the faintest glimmer of hope that burned within her heart. As she navigated the treacherous path towards engineering, she couldn't help but wonder what awaited her in the depths of the ship. Would she find a savior or a madman? And more importantly, would she be able to make it back alive? </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a dark and gritty Sci-fi trilogy in the style of Frank Herbert and James S. A. Corey. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is an astronaut living on a space station orbiting the earth. The day starts off normally, but then him the others notice the mushroom clouds and they have to face inevitability of the situation they are in. ### ASSISTANT Chapter 1: Ashes of Exodus The cold embrace of space enveloped the station like a lover's caress, its infinite blackness punctuated only by the distant twinkle of stars and the ever-present glow of Earth below. Captain Marcus Kane floated through the dimly lit corridors, his magnetic boots barely whispering against the metal floor as he made his way to the observation deck. The daily routine had become a meditative ritual for him, a chance to clear his mind before the cacophony of the day's work began. As he entered the room, the sight that greeted him never failed to take his breath away. Earth, a swirling marble of blues and greens, hanging in the void like an irreplaceable jewel. He couldn't help but feel a pang of longing for its familiar embrace, for the weight of gravity that he had left behind so many years ago. Marcus settled into his favorite spot by the window, the worn leather of the seat molded to his body like an old friend. He closed his eyes and let the silence wash over him, the only sound the soft hum of the station's life support systems. It was in these moments that he felt truly alive, connected to something greater than himself. But today, the silence was shattered by a chilling alarm, its shrill wail piercing the tranquility like a knife through glass. His eyes snapped open as his heart began to race, adrenaline coursing through his veins. He knew that sound all too well; it was the emergency alert for an incoming transmission from Earth. He launched himself towards the comms console, fingers flying over the keys as he tried to make sense of the garbled message. The voice on the other end was frantic, barely coherent through the static. "Multiple...detonations...global catastrophe..." were the only words he could decipher before the signal died completely. Marcus's mind raced as he tried to process what he had just heard. It couldn't be true, it couldn't be happening. Not again. He quickly activated the external cameras, his stomach twisting into knots as he waited for the feed to come online. And then he saw them: mushroom clouds, blooming like grotesque flowers across the planet's surface. Too many to count, their plumes of smoke and ash reaching up towards the heavens in a silent scream of despair. The sight was both beautiful and horrifying, a testament to humanity's capacity for destruction on an unimaginable scale. His thoughts were interrupted by the sound of footsteps behind him as the rest of the crew floated into the room, drawn by the alarm. Their faces mirrored his own shock and disbelief as they stared at the monitors, their voices a cacophony of whispers and gasps. "What...what is that?" asked Dr. Amelia Chen, her voice trembling with fear. Marcus didn't answer; he couldn't find the words. Instead, he simply pointed at the screen, his hand shaking slightly. Engineer Jake Taylor was the first to break the silence. "Nuclear war," he said, his voice flat and emotionless. "It's finally happened." The others looked at him in horror, but Marcus knew he was right. There could be no other explanation for such devastation. He felt a cold weight settle in the pit of his stomach as the reality of their situation began to sink in. They were alone, adrift in the vastness of space with no hope of return. As the implications of what they had just witnessed set in, the crew fell into an uneasy silence, each lost in their own thoughts. Marcus couldn't help but think about his wife and daughter back home, wondering if they were safe or if they too had been consumed by the flames. He knew it was a futile exercise, but he couldn't stop himself from hoping against hope that somehow, someway, they had survived. But as the minutes ticked by and no further communication came from Earth, that hope began to fade like a dying ember. They were truly alone now, the last remnants of humanity clinging to life on this tiny outpost in the void. He turned to face his crew, their faces etched with fear and uncertainty. "We need to come up with a plan," he said, his voice steady despite the turmoil raging within him. "We can't just sit here and wait for...for whatever comes next." Amelia nodded, her eyes hardening with determination. "I'll start running diagnostics on our life support systems, see how long we can last up here." Jake followed suit, his hands already moving to the controls of the station's mainframe. "I'll try to establish contact with any other surviving outposts or spacecraft." Marcus nodded, grateful for their resolve in the face of such overwhelming odds. He knew that they were all thinking the same thing: that this could very well be the end of everything they had ever known. But he also knew that they couldn't give up, not yet. As he watched them work, his mind drifted back to the message from Earth, those final desperate words echoing in his head like a funeral dirge. He couldn't shake the feeling that there was something more, some hidden meaning or clue that would explain what had happened. And then it hit him: the timing of the detonations, the pattern of the blasts. It wasn't random; it was deliberate, almost...calculated. But who would do such a thing? And why? The questions swirled in his mind like a storm, each one more unsettling than the last. He knew that they needed answers, and fast. Because if this was an act of war, then their isolation might not be as secure as they thought. He pushed himself away from the console, his decision made. "I'm going to suit up," he said, his voice firm. "We need to get a closer look." The others stared at him in disbelief, but he didn't waver. This was his duty as captain, to protect his crew and find out the truth no matter how terrible it might be. As he floated towards the airlock, Marcus couldn't help but feel a sense of dread creeping up on him like a shadow. He knew that what he was about to see would change everything, that there was no going back once he stepped outside and gazed upon the ashes of humanity's final stand. But he also felt something else: a burning anger, a fierce determination to uncover the truth and make those responsible pay for their crimes against humanity. Whatever awaited them out there, they would face it together, as one. He closed his eyes and took a deep breath, steeling himself for what was to come. And then, with a quiet prayer on his lips, he opened the airlock and stepped into the void. </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a dark and gritty Sci-fi trilogy in the style of Frank Herbert and James S. A. Corey. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a worker constructing a biodome when a terrible accident happens. Make the story very dark; no happy ending here... ### ASSISTANT Chapter 1: The Biodome The air was thick with the scent of sweat and burnt metal as Kaelen, a seasoned construction worker, tightened the last bolt on the massive irrigation system. He wiped his brow with the back of his gloved hand, leaving behind a smear of grease that glistened under the harsh artificial lights. The hum of machinery and distant shouts echoed through the cavernous space, a cacophony that had become as familiar to him as his own heartbeat over the years spent building biodomes on this desolate planet. Kaelen's thoughts drifted to his family back on Earth, their faces faded like old photographs in his mind. He wondered if they ever thought about him, or if he was just another ghost haunting the fringes of their memories. The company had promised them a better life, but it came at a steep price: years of backbreaking labor on an alien world, far from everything they'd ever known. "Hey, Kael!" A voice cut through his reverie, jolting him back to reality. It was Tamsin, his foreman, her face hidden behind a tinted visor. "We need you up top! There's an issue with the atmospheric seal." He nodded curtly and began the long climb up the scaffolding, each rung biting into his calloused hands. As he ascended, Kaelen couldn't help but marvel at the sheer scale of their creation: a vast dome of steel and glass that would one day be teeming with life, a self-sustaining ecosystem in the heart of this barren wasteland. But today was not that day. Today, it was just another tomb waiting to be sealed. As he reached the top, Kaelen could see the problem immediately: a small fissure had formed along one of the joints, spewing precious oxygen into the void beyond. He cursed under his breath; they were already behind schedule and over budget. Another delay would mean another round of demerits, another month's pay docked. "What do you think?" Tamsin asked, her voice crackling through his earpiece. "Can we patch it up or do we need to call in the engineers?" Kaelen hesitated, running his fingers along the jagged edge of the tear. It was larger than he'd initially thought, and growing by the second. He could feel the cold tendrils of vacuum reaching out to claim him, whispering promises of oblivion. "I... I don't know," he admitted, his voice heavy with dread. "It doesn't look good." Tamsin swore colorfully and turned away, barking orders into her comm unit. Kaelen watched as workers scrambled to gather tools and materials, their movements frantic and disorganized. He knew they were all thinking the same thing: if they couldn't fix this, they were dead. The air around them grew colder, thinner, as the oxygen continued to escape. Kaelen's lungs burned with every breath, his vision swimming at the edges. He fumbled with the patch kit, his hands shaking uncontrollably. This was it; this was how he would die, millions of miles from home, in service to a corporation that saw him as nothing more than a replaceable cog in their grand machine. "Hurry up!" Tamsin shouted over the growing din. "We're losing pressure fast!" Kaelen's heart pounded in his chest like a sledgehammer, drowning out all other sound. He could feel the panic rising within him, threatening to consume him whole. But he couldn't afford to give in; not now, not when so much was at stake. With trembling hands, he applied the sealant and pressed the patch into place. For a moment, it seemed to hold... but then, with a sickening lurch, the fissure widened, swallowing the feeble attempt whole. The wind howled around them like a ravenous beast, tearing at their suits, trying to pull them apart atom by atom. "Abort!" Tamsin screamed, her voice barely audible over the roar. "Everyone get out now!" But it was too late. The dome shuddered beneath their feet, groaning in protest as the atmosphere inside rushed to equalize with the void outside. Kaelen felt himself being lifted off the ground, his boots scrabbling for purchase on the slick metal surface. He reached out, desperate to grab onto something - anything - but found only emptiness. And then, as suddenly as it had begun, there was silence. A deafening, suffocating silence that pressed in on him from all sides. His vision went dark, and he knew no more. When Kaelen awoke, he was lying on the cold floor of the dome, his body wracked with pain. He tried to move, but found himself held fast by twisted metal and shattered glass. Panic surged through him once again as he realized that his suit had been breached; he could feel the icy chill of vacuum seeping into his very bones. Through the haze of agony, he became aware of movement nearby. Tamsin was there, her visor cracked and fogged, struggling to free herself from the wreckage. She looked at him with wide, terrified eyes, her lips moving soundlessly. He knew what she was saying: they were going to die here, alone and forgotten. Kaelen closed his eyes, willing himself to accept the inevitable. But even as the darkness closed in, he couldn't shake the feeling that this wasn't just an accident - that there were forces at work far beyond their understanding, manipulating them like pawns on a cosmic chessboard. And then, just when he thought it was all over, he heard it: a faint beeping, growing louder by the second. His eyes snapped open to see a small drone hovering above them, its lights flashing urgently. It was one of the company's rescue bots, sent to assess the damage and retrieve any survivors. But there would be no salvation for Kaelen or Tamsin. Their suits were too damaged, their bodies too far gone. As the drone's camera lens bore down on him, he could only manage a weak, bitter smile. "Tell my family... tell them I'm sorry," he croaked, his voice barely audible even to himself. Tamsin reached out a gloved hand, her fingers brushing against his own in a final gesture of solidarity. And then, as the darkness claimed him once more, Kaelen knew that this was not the end of their story - but rather, the beginning of something far more sinister. Something that would change the course of human history forever... </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a dark and gritty Sci-fi trilogy in the style of Frank Herbert and James S. A. Corey. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is manning a listening station in deep space. What he hears over the radio chills him to the bone. ### ASSISTANT Chapter 1 - Whispers from the Void The silence of deep space was like an old friend to Captain Alistair Reynolds, a constant companion that had accompanied him through countless missions and lonely nights at the helm of the listening station. But today, it felt more like a suffocating shroud, heavy with unspoken secrets and foreboding. He sat in his chair, fingers drumming impatiently on the console as he scanned the endless stream of data pouring in from the far reaches of the cosmos. His thoughts drifted to home, to the warmth of his wife's embrace and the laughter of their children playing in the garden under the twin suns of Proxima Centauri. It had been years since he last felt that warmth, since he last heard those laughs. The ache was always there, a dull throb that pulsed in time with his heartbeat, but today it seemed to echo through the cold metal halls of the station like a mournful dirge. A sudden blip on the screen snapped him back to reality. He leaned forward, eyes narrowing as he studied the readout. It was faint, barely more than a whisper against the static, but it was there - a signal, human in origin and heavily encrypted. His heart raced as he keyed in the decryption sequence, his mind already racing ahead to the possibilities of what this could mean. The message unscrambled, revealing a frantic voice, barely audible over the crackle of interference. "This is the UES Normandy, requesting immediate assistance... coordinates attached... under attack by unknown forces... God help us..." The transmission cut out abruptly, leaving only the empty hiss of static in its wake. Alistair's blood ran cold as he stared at the screen, the weight of the silence pressing down on him like a physical force. He knew that ship, had served with her captain back during the Martian uprising. They were old friends, bound by shared battles and the scars they bore from them. And now she was out there, alone and in danger. He didn't hesitate. His fingers flew over the console, sending out an emergency beacon to the nearest fleet. But he knew it would take time for help to arrive, time that the Normandy might not have. He had to do something now. As he prepared to launch a probe towards the coordinates, his mind filled with images of burning ships and desperate faces. He could almost hear the screams of the dying, echoing through the void like ghosts from his past. It was a sound that haunted him still, one he had hoped never to hear again. But duty called, and Alistair was nothing if not a soldier. He took a deep breath, steadying himself against the tide of fear and doubt that threatened to overwhelm him. This was his job, his purpose - to listen, to bear witness, and when necessary, to act. The probe shot out into the darkness, its tiny form swallowed up by the vastness of space. He watched it go, feeling a strange sense of detachment as if he were watching someone else's life play out before him. And perhaps in some ways, he was. For all his years in service, this was the first time he had ever been truly alone, cut off from the rest of humanity by light-years and the cold indifference of the universe. As the minutes ticked by, Alistair found himself slipping into a half-trance, his thoughts drifting back to happier times. He remembered the first time he had held his newborn son, the tiny fingers curling around his own as if trying to hold on for dear life. He remembered the taste of real Earth coffee, rich and bitter, shared with his wife under a sky full of stars that seemed close enough to touch. But those memories were distant now, fading like echoes in the void. All he had was this station, this endless vigil, and the hope that one day, he might make it back home. The radio crackled to life again, jolting him out of his reverie. It was the probe, transmitting data back from the edge of nowhere. He braced himself for what he would find, knowing that it could change everything. What greeted him on the screen was worse than anything he could have imagined. The Normandy was there, alright - or rather, what was left of her. She drifted through space like a broken toy, her hull breached and venting atmosphere into the cold void. There were no signs of life, no distress signals, nothing but silence. Alistair felt a cold knot form in his stomach as he took in the wreckage. This wasn't just any ship; this was his friend's ship, her crew now lost to the void. And the attackers... whoever or whatever they were, they had left no trace behind. He sat back in his chair, the weight of it all settling heavily upon him. He knew what he had to do next - file a report, alert command, and wait for further instructions. But as he stared at the lifeless husk that was once the Normandy, he couldn't shake the feeling that something far greater than himself was at work here. Something dark and terrible had come calling, whispering its secrets through the stars. And now it was his turn to listen, to bear witness, and perhaps, if he was lucky, to survive long enough to tell the tale. </details> ## Miscellaneous stories <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a dark and gritty post-apocalyptic novel in the style of Cormac McCarthy. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a technician in a nuclear bunker 18 months after the war. The 12 inhabitants of the bunker have been expecting to start to be able to leave soon, but it has just been discovered that at least where they are, salted cobalt bombs were used. It's clear now that either all 12 will die when the resources run out and their is barely enough for 6 to make it to 5 years and have any hope of leaving. After 18 months of calm everything breaks lose today. ### ASSISTANT In the dim light of the bunker, the technician sat hunched over his console, fingers dancing across the keys with a practiced urgency. The air was thick with the hum of machinery and the stench of sweat and fear. He had been down here for what felt like an eternity, but the digital clock on the wall insisted it had only been eighteen months since the world above had turned to ash. His name was Samuel, or at least that's what they called him now. In another life, he had been a software engineer, designing algorithms for self-driving cars and smart homes. Now, he was responsible for keeping twelve souls alive in this underground tomb, their last bastion against the desolation above. He glanced up at the monitor, the green glow illuminating his haggard face. The readings were clear: salted cobalt bombs had been used. The thought sent a shiver down his spine. They had all heard the rumors, of course, but they had clung to hope like a drowning man to driftwood. Hope that the war hadn't been as bad as they feared, that the surface would be habitable again soon. But now, the truth was undeniable. The world outside was dead, and it would remain so for generations. Samuel closed his eyes, trying to block out the images of a scorched earth, devoid of life. He could hear the others in the background, their voices a low murmur as they went about their daily tasks. They didn't know yet. How could he tell them? How could he be the one to snuff out the last ember of hope that had kept them going for so long? He took a deep breath and stood up, his joints creaking in protest. He made his way through the narrow corridors, past the rows of bunks where some slept fitfully, their dreams haunted by memories of a world they would never see again. Others were awake, their faces etched with lines of worry and despair. They looked up as he passed, but he couldn't meet their gazes. He reached the command center, where Captain Reynolds was poring over maps and reports. The man looked up as Samuel entered, his eyes heavy with exhaustion. "What is it?" he asked, his voice gruff. Samuel hesitated for a moment before speaking. "Sir, I have some... troubling news." The captain's expression didn't change, but Samuel could see the tension in his jaw. He had been expecting this, too. "Go on," he said simply. "The radiation readings... they're off the charts. It's cobalt-60. The half-life is over five years." Reynolds let out a slow breath, his eyes never leaving Samuel's face. "And our supplies?" "We have enough for six, maybe seven if we ration carefully. But not twelve." The room seemed to shrink around them as the weight of their situation settled in. The captain rubbed his temples, his thoughts racing. They had always known that their time in the bunker was limited, but they had been so close. So very close. "We'll need to make a decision," he said finally, his voice barely above a whisper. "Who stays, and who goes." Samuel felt a knot form in his stomach. He knew what that meant. It was survival of the fittest down here now, and there were those among them who would do anything to ensure their own survival. As he made his way back to his console, he couldn't help but think about the world above. The trees that would never grow again, the birds that would never sing. He thought of his wife and daughter, their faces fading like photographs left in the sun. They had been lucky, they said, to have made it into the bunker. But now, he wasn't so sure. In the mess hall, the atmosphere was tense. Whispers and hushed conversations filled the air as people tried to make sense of what they had just heard. Samuel sat alone at a table, staring blankly at his food. He couldn't eat. Not now. He felt a hand on his shoulder and looked up to see Maria, the doctor. Her eyes were red-rimmed, but her voice was steady. "We need to talk," she said, guiding him towards the infirmary. Once inside, she closed the door behind them and leaned against it, her body trembling. "What are we going to do?" she asked, her voice cracking. Samuel shook his head. "I don't know." "We can't just let six of us die," she said, her voice rising in panic. "There has to be another way." He looked at her, at the desperation etched on her face, and felt a flicker of anger. "What do you suggest? We all hold hands and sing Kumbaya while we starve to death?" Maria's eyes flashed with anger, but she quickly composed herself. "No," she said softly. "But we can't just give up." Samuel sighed and ran a hand through his greasy hair. "I'm not giving up," he said. "I'm just being realistic." She looked at him for a long moment before nodding. "You're right," she said, her voice barely audible. "We need to be realistic." They sat in silence for a while, each lost in their own thoughts. Then, Maria spoke up again. "There is one option," she said hesitantly. "But it's... drastic." Samuel raised an eyebrow. "What?" "We could... we could draw straws," she said, her voice barely above a whisper. "Choose six people at random to leave the bunker and take their chances on the surface." He stared at her in disbelief. "You can't be serious." "It's not ideal," she admitted. "But it's better than just letting them die down here." Samuel shook his head, his mind reeling. He couldn't fathom the thought of sending people out into that wasteland, knowing they would likely never return. But what other choice did they have? As he sat there, the weight of their decision pressing down on him like a leaden shroud, he heard a commotion outside. Shouting, followed by the sound of scuffling feet and something heavy being dragged across the floor. He exchanged a worried glance with Maria before rushing out into the corridor. What he saw made his blood run cold. A group of men had cornered one of the younger women, their faces twisted with rage and desperation. They were arguing over who would get to leave the bunker, who would get a chance at life. Fists flew, and soon, the sound of gunfire echoed through the halls. Samuel stood there, frozen in place as chaos erupted around him. This was it. The end of everything they had worked so hard to preserve. He could feel the darkness closing in, suffocating him, as he realized that their last hope for survival had just been shattered like glass under a bootheel. And all he could do was watch as the world above claimed them, one by one. </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a historical mystery novel in the style of Arturo Pérez-Reverte and Umberto Eco. The story starts off in California during the 1948 Gold Rush, before moving to London in the mid 1970s. There should be a mysterious 'occult' item that connects these two time periods. ### ASSISTANT Chapter One: The Golden Veil The year was 1848, and California was a land of dreams and desperation. Men from all corners of the world had flocked to its golden shores, lured by the promise of wealth beyond their wildest imaginations. They came with pickaxes and pans, with hope in their hearts and greed in their eyes, ready to tear apart the earth itself in search of fortune. In the midst of this frenzy stood a solitary figure, a man who seemed out of place amidst the chaos. He was tall and lean, with a face that bore the lines of a thousand stories. His clothes were simple but well-made, his boots worn from years of travel. He moved with an air of quiet authority, as if he belonged to another time entirely. His name was Alistair Blackwood, and he had come to California not for gold, but for something far more valuable: knowledge. A scholar by trade and an adventurer at heart, Blackwood had spent his life chasing after ancient texts and forgotten lore, seeking to unravel the secrets of the universe. And now, he believed he was on the verge of a discovery that would change everything. Rumors had reached him of a strange artifact, said to have been found deep within the Sierra Nevada mountains. It was a small, intricately carved box made of an unknown metal, its surface etched with symbols that defied translation. Those who claimed to have seen it spoke in hushed tones of its otherworldly beauty and the eerie sense of power that seemed to emanate from within. Blackwood had tracked the rumors to a dusty saloon in Sacramento, where he hoped to find answers among the grizzled miners who frequented the place. As he pushed open the swinging doors, the din of voices and clinking glasses washed over him like a wave. He scanned the room, his gaze settling on a group of men huddled around a table in the corner. One look at their faces told him they were the ones he sought: sun-weathered and unshaven, their eyes glinting with a mixture of excitement and fear as they whispered amongst themselves. He approached them slowly, aware that he was an outsider in this rough-and-tumble world. "Gentlemen," he said, his voice low and cultured, "I couldn't help but overhear your conversation. Might I be so bold as to ask what has you all so...animated?" The men exchanged wary glances before one of them spoke up. "You ain't from around here, are ya?" he drawled, eyeing Blackwood's fine clothes with suspicion. "No," Blackwood admitted, "I am not. But I have traveled far and wide in search of knowledge, and I believe you may possess something that could be of great interest to me." He reached into his pocket and produced a small leather pouch, which he placed on the table with a soft thud. The men's eyes widened as they saw the glint of gold within. "I understand there is an object - a box, perhaps? - that has recently come into your possession. I would be willing to pay handsomely for any information you might have about it." The miners looked at each other, then back at Blackwood. Finally, the one who had spoken before nodded slowly. "We might know somethin' 'bout that," he said, his voice dropping to a conspiratorial whisper. "But it ain't no ordinary box." As they began to tell their tale, the saloon faded away around them, replaced by images of dark caverns and glittering treasure. Blackwood leaned forward, his heart pounding with anticipation. This was it - the moment he had been waiting for. The box, they said, had been found buried beneath a waterfall, hidden in a cave that seemed to defy all natural laws. It was guarded by strange creatures unlike anything any of them had ever seen: half-man, half-beast, with eyes that glowed like embers in the darkness. But the greatest mystery of all lay within the box itself. For when they opened it, they found not gold or jewels, but a single sheet of parchment, yellowed with age and covered in writing that none of them could decipher. Blackwood's pulse quickened as he listened. This was more than just another piece of occult nonsense; this was the real thing. He knew it in his bones. "I must see this box for myself," he said, his voice barely above a whisper. "Take me to it." The miners exchanged uneasy glances before nodding their assent. They led him out of the saloon and into the night, down winding streets and through shadowy alleyways until they reached a nondescript building on the edge of town. Inside, they descended a rickety staircase to a dimly lit basement filled with crates and barrels. There, in the center of the room, sat the object of his obsession: a small, unassuming box that seemed to hum with energy. Blackwood reached out a trembling hand to touch it, feeling the power coursing through him like electricity. As he did so, the world around him shifted, and suddenly he was no longer in a dingy basement in Sacramento. He stood instead in a grand library, its shelves groaning under the weight of countless tomes and scrolls. A fire crackled in the hearth, casting flickering shadows on the walls. A figure emerged from the gloom, cloaked in darkness save for two piercing eyes that seemed to bore into his very soul. "You have found it," the voice said, deep and resonant. "The key to unlocking the secrets of the universe." Blackwood stared at the apparition, unable to speak. He knew he should be afraid, but all he felt was awe. "But be warned," the figure continued, "for with great power comes great responsibility. The knowledge contained within this box is not meant for mortal minds." And then, just as suddenly as it had appeared, the vision vanished, leaving Blackwood alone once more in the dimly lit room. He stared at the box, his mind racing. What did it all mean? As he pondered this question, a sudden commotion erupted upstairs. Shouts and gunfire filled the air, followed by the sound of heavy footsteps on the staircase. The miners had betrayed him! He snatched up the box and fled into the night, his heart pounding in his chest. He knew he couldn't stay in California - not now that others were after the artifact. No, he would have to take it far away, to a place where its secrets could be studied in safety. And so began a journey that would span continents and decades, from the wild frontier of the Gold Rush to the foggy streets of Victorian London. For Alistair Blackwood had stumbled upon something much greater than mere treasure; he held in his hands the key to unlocking the very fabric of reality itself. But little did he know that the true mystery was only just beginning... </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a dark and gritty historical novel set during the battle of the Bulge. Write in first person. The POV character for Chapter 1 is a young German officer trying to carry his wounded commander back to the deep defensive line after the bunker he was in was overrun. He should have a flashback to the moment it was overrun during the middle of the story. The story should not end well for either of them. ### ASSISTANT Chapter 1 The cold bit into my bones as I trudged through the snow, each step a labored effort. My breath came out in ragged puffs, clouding the air around me. In my arms, I cradled the weight of my world - my commander, Hauptmann Müller, his body limp and heavy with pain. His eyes were closed, but I could hear the shallow rasp of each breath he took, a testament to his will to live. The forest around us was a maze of shadows and death, the trees standing like skeletal sentinels against the moonlit sky. The silence was deafening, broken only by the distant rumble of artillery and the occasional crack of a rifle shot. It was a cruel irony that the same snow that muffled our footsteps also betrayed the presence of the enemy, their tracks glowing like phosphorescent snakes in the pale light. I stumbled, my legs threatening to give way under the strain. Hauptmann Müller groaned, his hand clutching at my coat. "Keep moving, Leutnant," he whispered, his voice barely audible. "We must reach the defensive line." I nodded, gritting my teeth as I adjusted my grip on him. The pain in my arms and shoulders was a constant companion now, but it was nothing compared to the fear that gnawed at my gut. We were alone, cut off from our unit during the chaos of the American assault. The bunker we had been holed up in was now a tomb for the men who had trusted us to lead them. A flash of memory seared through my mind, taking me back to that fateful moment when it all went wrong. The thunderous roar of explosions, the earth shaking beneath our feet as if the very foundations of hell were breaking loose. The screams of men, the acrid smell of smoke and cordite, the taste of blood in my mouth. I remembered the look on Müller's face when he realized what was happening - a mixture of disbelief and determination. He had barked orders, trying to rally our troops, but it was too late. The Americans were upon us, their numbers overwhelming. I saw him take a bullet to the leg, his face contorting in agony as he fell. And then there was nothing but chaos and darkness. When I came to, the bunker was silent except for the moans of the dying. The air was thick with smoke, making it hard to breathe. I found Müller slumped against a wall, his uniform soaked in blood. He looked up at me, his eyes glassy with pain but still burning with that same fierce determination. "We have to go," he said, gripping my arm. "They're coming." And so we had fled into the night, leaving behind the wreckage of our command post and the bodies of our comrades. Now, hours later, we were still stumbling through the forest, searching for safety that seemed to recede with every step. I could feel my strength ebbing away, my legs turning to lead. But I couldn't stop, not now. Not when we were so close. The deep defensive line was our only hope, a thin red line that stood between us and annihilation. If we could just make it there... But fate is a cruel mistress, and she had other plans for us. As we staggered out of the trees and into a small clearing, I saw them - a patrol of American soldiers, their silhouettes stark against the snow. They spotted us at the same time, their rifles swinging around to point in our direction. "Halt!" one of them yelled, his voice harsh and commanding. "Hands up!" I froze, my heart pounding in my chest. Müller's grip on me tightened, his breath coming in short gasps. I knew what he was thinking - we couldn't surrender, not like this. We were too valuable, too close to the front lines. They would shoot us or take us prisoner, and either way, it would be a death sentence for him. "Run," he whispered, his voice barely audible. "Leave me." I shook my head, tears stinging my eyes. I couldn't do that. I wouldn't. He was my commander, my friend. I owed him everything. But before I could make a move, the air was filled with the sound of gunfire. Müller jerked in my arms, a wet gurgle escaping his lips as he slumped against me. I stared down at him in disbelief, feeling the warmth of his blood soaking through my coat. "Nein," I whispered, my voice choked with grief and rage. "Nein!" I fell to my knees, cradling his lifeless body as the world around me spun out of control. The Americans were closing in, their shouts growing louder. I knew it was over - for both of us. With a final, desperate cry, I raised my pistol and fired into the darkness, hoping to take at least one of them with me. But my aim was wild, my strength gone. The last thing I saw was a flash of light, followed by an explosion of pain in my chest. And then there was nothing but blackness. As I slipped away, I thought of home - of my mother's warm embrace, of the laughter of my little sister. And I knew that I would never see them again. This frozen hellhole would be my grave, and Hauptmann Müller's sacrifice would be forgotten by all but me. But in that moment, as death claimed me, I found a strange sort of peace. We had fought for our Fatherland until the very end, and we had done so with honor. And in the grand tapestry of history, perhaps that was all that mattered. </details> <details> <summary>Click to see spoiler</summary> ### USER Write me a short story about the fictional character of Anton Chigurh. It should be written in third person personal and feature internal monologues of his deranged world view. The time period is around 2 years before the events in Cormac McCarthy's 2005 novel 'No Country for Old Men'. It should be set in Mexico, be *very* dark, show Anton's still developing obsession with "chance" and feature a grenade. The story should start with him waiting in a bar for instructions on the "job" he is to perform. ### ASSISTANT In the dimly lit corner of a nondescript Mexican cantina, Anton Chigurh sat motionless, his eyes scanning the room like a predator assessing its prey. The air was thick with the smell of stale cigarette smoke and cheap tequila, but he remained unfazed, his senses heightened, attuned to every sound, every movement around him. He nursed a glass of water, the condensation beading on its surface like droplets of sweat on cold steel. His mind wandered, as it often did, to the concept of chance. The randomness of life and death fascinated him, the idea that one's fate could hinge on something as simple as a coin flip or the roll of dice. He believed in the purity of such decisions, untainted by human emotion or prejudice. It was a philosophy he had embraced wholeheartedly, and it had served him well in his line of work. A fly buzzed around his head, its tiny wings beating an erratic rhythm against the stillness of the room. He watched it with mild interest, wondering if its life would end here, on this very spot, or if it would continue its meaningless existence elsewhere. The thought brought a small, cruel smile to his lips. The bartender, a grizzled old man with a face like leather, eyed him warily from across the room. He knew better than to engage in conversation with this stranger, who had entered his establishment an hour ago and hadn't spoken a word since. There was something about him that made the hairs on the back of his neck stand up, something cold and deadly lurking just beneath the surface. Chigurh's thoughts were interrupted by the sound of the door creaking open. A man in a cheap suit entered, looking around nervously before making his way towards him. He slid into the seat across from Chigurh, who continued to stare straight ahead, as if he hadn't noticed the new arrival. "You're late," Chigurh said finally, his voice low and gravelly. The man in the suit swallowed hard, adjusting his tie. "I apologize. Traffic." Chigurh didn't respond, his eyes still fixed on some distant point. The man fidgeted, reaching into his pocket to retrieve a small envelope. He slid it across the table, and Chigurh picked it up without looking at him, tearing it open with a single, precise movement. Inside was a photograph of a man, along with an address scrawled on a piece of paper. He studied the image for a moment, committing every detail to memory before slipping it back into the envelope and sliding it into his own pocket. "You know what needs to be done," the man in the suit said, his voice barely above a whisper. Chigurh nodded once, his mind already turning to the task at hand. He stood up, leaving a few crumpled bills on the table for the water he hadn't touched. As he walked towards the door, he could feel the weight of the grenade in his other pocket, its cool metal surface pressing against his thigh like a lover's caress. Outside, the heat of the day was oppressive, but Chigurh barely registered it as he climbed into his car and started the engine. He drove through the dusty streets, his thoughts consumed by the upcoming job. The man in the photograph had a wife, two kids. It didn't matter. They were all just pieces on a board, pawns to be moved or removed as necessary. He arrived at the address just as the sun was beginning to set, casting long shadows across the dilapidated building. He sat in his car for several minutes, watching the comings and goings of the neighborhood, waiting for the right moment. When it came, he slipped out of the vehicle and approached the door with a sense of purpose. Inside, he could hear laughter and music, the sounds of a family enjoying their evening together. He smiled to himself as he pulled the grenade from his pocket, feeling its weight in his hand. This was what he lived for: the moment when chance decided who would live and who would die. He took a deep breath, savoring the anticipation, before pulling the pin with a satisfying click. He tossed it into the room, not bothering to wait for the explosion as he turned and walked away. The screams that followed were like music to his ears, a symphony of chaos and destruction. As he drove off into the night, Chigurh couldn't help but feel a sense of satisfaction. Another job well done, another life ended by the whims of fate. He reached into his pocket and pulled out a coin, flipping it idly between his fingers as he considered his next move. Heads or tails? Life or death? It was all the same to him. The world was a cruel place, and he was just its instrument, meting out justice in the only way he knew how: with cold, calculated precision. And if some people happened to get caught in the crossfire... well, that was just the luck of the draw. </details> Big thanks to: @sophosympatheia for working out the merge pattern, @Sao10K for creating Euryale and WinterGoddess, and @chargoddard for writing [Mergekit](https://github.com/arcee-ai/mergekit)!
[ "BEAR" ]
kadirnar/yolov10s
kadirnar
object-detection
[ "yolov10", "object-detection", "computer-vision", "pypi", "dataset:detection-datasets/coco", "arxiv:2405.14458", "license:agpl-3.0", "region:us" ]
2024-05-27T12:50:10Z
2024-05-27T12:50:18+00:00
14
0
--- datasets: - detection-datasets/coco license: agpl-3.0 tags: - object-detection - computer-vision - yolov10 - pypi --- ### Model Description [YOLOv10: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors](https://arxiv.org/abs/2405.14458v1) [Paper Repo: Implementation of paper - YOLOv10](https://github.com/THU-MIG/yolov10) ### Installation ``` pip install supervision git+https://github.com/THU-MIG/yolov10.git ``` ### Yolov10 Inference ```python from ultralytics import YOLOv10 import supervision as sv import cv2 MODEL_PATH = 'yolov10n.pt' IMAGE_PATH = 'dog.jpeg' model = YOLOv10(MODEL_PATH) image = cv2.imread(IMAGE_PATH) results = model(source=image, conf=0.25, verbose=False)[0] detections = sv.Detections.from_ultralytics(results) box_annotator = sv.BoxAnnotator() category_dict = { 0: 'person', 1: 'bicycle', 2: 'car', 3: 'motorcycle', 4: 'airplane', 5: 'bus', 6: 'train', 7: 'truck', 8: 'boat', 9: 'traffic light', 10: 'fire hydrant', 11: 'stop sign', 12: 'parking meter', 13: 'bench', 14: 'bird', 15: 'cat', 16: 'dog', 17: 'horse', 18: 'sheep', 19: 'cow', 20: 'elephant', 21: 'bear', 22: 'zebra', 23: 'giraffe', 24: 'backpack', 25: 'umbrella', 26: 'handbag', 27: 'tie', 28: 'suitcase', 29: 'frisbee', 30: 'skis', 31: 'snowboard', 32: 'sports ball', 33: 'kite', 34: 'baseball bat', 35: 'baseball glove', 36: 'skateboard', 37: 'surfboard', 38: 'tennis racket', 39: 'bottle', 40: 'wine glass', 41: 'cup', 42: 'fork', 43: 'knife', 44: 'spoon', 45: 'bowl', 46: 'banana', 47: 'apple', 48: 'sandwich', 49: 'orange', 50: 'broccoli', 51: 'carrot', 52: 'hot dog', 53: 'pizza', 54: 'donut', 55: 'cake', 56: 'chair', 57: 'couch', 58: 'potted plant', 59: 'bed', 60: 'dining table', 61: 'toilet', 62: 'tv', 63: 'laptop', 64: 'mouse', 65: 'remote', 66: 'keyboard', 67: 'cell phone', 68: 'microwave', 69: 'oven', 70: 'toaster', 71: 'sink', 72: 'refrigerator', 73: 'book', 74: 'clock', 75: 'vase', 76: 'scissors', 77: 'teddy bear', 78: 'hair drier', 79: 'toothbrush' } labels = [ f"{category_dict[class_id]} {confidence:.2f}" for class_id, confidence in zip(detections.class_id, detections.confidence) ] annotated_image = box_annotator.annotate( image.copy(), detections=detections, labels=labels ) cv2.imwrite('annotated_dog.jpeg', annotated_image) ``` ### BibTeX Entry and Citation Info ``` @misc{wang2024yolov10, title={YOLOv10: Real-Time End-to-End Object Detection}, author={Ao Wang and Hui Chen and Lihao Liu and Kai Chen and Zijia Lin and Jungong Han and Guiguang Ding}, year={2024}, eprint={2405.14458}, archivePrefix={arXiv}, primaryClass={cs.CV} } ```
[ "BEAR" ]
BSC-NLP4BIA/bsc-bio-ehr-es-carmen-symptemist
BSC-NLP4BIA
token-classification
[ "transformers", "pytorch", "roberta", "token-classification", "es", "base_model:PlanTL-GOB-ES/bsc-bio-ehr-es", "base_model:finetune:PlanTL-GOB-ES/bsc-bio-ehr-es", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-06-05T15:37:29Z
2024-07-25T14:22:08+00:00
14
0
--- base_model: PlanTL-GOB-ES/bsc-bio-ehr-es language: - es license: cc-by-4.0 --- # Training data Model trained on the symptoms and signs mentions of [CARMEN-I](https://zenodo.org/records/10171540) and [SympTEMIST](https://doi.org/10.5281/zenodo.10635215). # Citation Please cite the following works: ``` @inproceedings{symptemist, author = {Lima-L{\'o}pez, Salvador and Farr{\'e}-Maduell, Eul{\`a}lia and Gasco-S{\'a}nchez, Luis and Rodr{\'i}guez-Miret, Jan and Krallinger, Martin}, title = {{Overview of SympTEMIST at BioCreative VIII: Corpus, Guidelines and Evaluation of Systems for the Detection and Normalization of Symptoms, Signs and Findings from Text}}, booktitle = {Proceedings of the BioCreative VIII Challenge and Workshop: Curation and Evaluation in the era of Generative Models}, year = 2023 } @misc{carmen_physionet, author = {Farre Maduell, Eulalia and Lima-Lopez, Salvador and Frid, Santiago Andres and Conesa, Artur and Asensio, Elisa and Lopez-Rueda, Antonio and Arino, Helena and Calvo, Elena and Bertran, Maria Jesús and Marcos, Maria Angeles and Nofre Maiz, Montserrat and Tañá Velasco, Laura and Marti, Antonia and Farreres, Ricardo and Pastor, Xavier and Borrat Frigola, Xavier and Krallinger, Martin}, title = {{CARMEN-I: A resource of anonymized electronic health records in Spanish and Catalan for training and testing NLP tools (version 1.0.1)}}, year = {2024}, publisher = {PhysioNet}, url = {https://doi.org/10.13026/x7ed-9r91} } @article{physionet, author = {Ary L. Goldberger and Luis A. N. Amaral and Leon Glass and Jeffrey M. Hausdorff and Plamen Ch. Ivanov and Roger G. Mark and Joseph E. Mietus and George B. Moody and Chung-Kang Peng and H. Eugene Stanley }, title = {PhysioBank, PhysioToolkit, and PhysioNet }, journal = {Circulation}, volume = {101}, number = {23}, pages = {e215-e220}, year = {2000}, doi = {10.1161/01.CIR.101.23.e215}, URL = {https://www.ahajournals.org/doi/abs/10.1161/01.CIR.101.23.e215} } ``` # Contacting authors jan.rodriguez [at] bsc.es ## More information on data, usage, limitations, and performance metrics soon
[ "SYMPTEMIST" ]
BSC-NLP4BIA/bsc-bio-ehr-es-carmen-meddocan
BSC-NLP4BIA
token-classification
[ "transformers", "pytorch", "roberta", "token-classification", "es", "base_model:PlanTL-GOB-ES/bsc-bio-ehr-es", "base_model:finetune:PlanTL-GOB-ES/bsc-bio-ehr-es", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-06-06T14:40:53Z
2024-07-25T14:06:06+00:00
14
0
--- base_model: PlanTL-GOB-ES/bsc-bio-ehr-es language: - es license: cc-by-4.0 --- # Training data Model trained on the anonymization part of [CARMEN-I](https://zenodo.org/records/10171540) and [MEDDOCAN](https://zenodo.org/records/4279323). # Citation Please cite the following works: ``` @inproceedings{meddocan, title={{Automatic De-identification of Medical Texts in Spanish: the MEDDOCAN Track, Corpus, Guidelines, Methods and Evaluation of Results}}, author={Marimon, Montserrat and Gonzalez-Agirre, Aitor and Intxaurrondo, Ander and Villegas, Marta and Krallinger, Martin}, booktitle="Proceedings of the Iberian Languages Evaluation Forum (IberLEF 2019)", year={2019} } @misc{carmen_physionet, author = {Farre Maduell, Eulalia and Lima-Lopez, Salvador and Frid, Santiago Andres and Conesa, Artur and Asensio, Elisa and Lopez-Rueda, Antonio and Arino, Helena and Calvo, Elena and Bertran, Maria Jesús and Marcos, Maria Angeles and Nofre Maiz, Montserrat and Tañá Velasco, Laura and Marti, Antonia and Farreres, Ricardo and Pastor, Xavier and Borrat Frigola, Xavier and Krallinger, Martin}, title = {{CARMEN-I: A resource of anonymized electronic health records in Spanish and Catalan for training and testing NLP tools (version 1.0.1)}}, year = {2024}, publisher = {PhysioNet}, url = {https://doi.org/10.13026/x7ed-9r91} } @article{physionet, author = {Ary L. Goldberger and Luis A. N. Amaral and Leon Glass and Jeffrey M. Hausdorff and Plamen Ch. Ivanov and Roger G. Mark and Joseph E. Mietus and George B. Moody and Chung-Kang Peng and H. Eugene Stanley }, title = {PhysioBank, PhysioToolkit, and PhysioNet }, journal = {Circulation}, volume = {101}, number = {23}, pages = {e215-e220}, year = {2000}, doi = {10.1161/01.CIR.101.23.e215}, URL = {https://www.ahajournals.org/doi/abs/10.1161/01.CIR.101.23.e215} } ``` # Contacting authors jan.rodriguez [at] bsc.es ## More information on data, usage, limitations, and performance metrics soon
[ "MEDDOCAN" ]
dordonezc/Phi-3-mini-4k-instruct-4-endpoints
dordonezc
text-generation
[ "transformers", "safetensors", "phi3", "text-generation", "nlp", "code", "conversational", "custom_code", "en", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-06-16T17:38:58Z
2024-06-19T12:48:07+00:00
14
0
--- language: - en license: mit license_link: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/LICENSE pipeline_tag: text-generation tags: - nlp - code inference: parameters: temperature: 0.0 widget: - messages: - role: user content: Can you provide ways to eat combinations of bananas and dragonfruits? --- ## Model Summary The Phi-3-Mini-4K-Instruct is a 3.8B parameters, lightweight, state-of-the-art open model trained with the Phi-3 datasets that includes both synthetic data and the filtered publicly available websites data with a focus on high-quality and reasoning dense properties. The model belongs to the Phi-3 family with the Mini version in two variants [4K](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) which is the context length (in tokens) that it can support. The model has underwent a post-training process that incorporates both supervised fine-tuning and direct preference optimization for the instruction following and safety measures. When assessed against benchmarks testing common sense, language understanding, math, code, long context and logical reasoning, Phi-3 Mini-4K-Instruct showcased a robust and state-of-the-art performance among models with less than 13 billion parameters. Resources and Technical Documentation: + [Phi-3 Microsoft Blog](https://aka.ms/Phi-3Build2024) + [Phi-3 Technical Report](https://aka.ms/phi3-tech-report) + [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai) + [Phi-3 Cookbook](https://github.com/microsoft/Phi-3CookBook) | | Short Context | Long Context | | ------- | ------------- | ------------ | | Mini | 4K [[HF]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-onnx) ; [[GGUF]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-gguf) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct-onnx)| | Small | 8K [[HF]](https://huggingface.co/microsoft/Phi-3-small-8k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-small-8k-instruct-onnx-cuda) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-small-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-small-128k-instruct-onnx-cuda)| | Medium | 4K [[HF]](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct-onnx-cuda) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct-onnx-cuda)| | Vision | | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct-onnx-cuda)| ## Intended Uses **Primary use cases** The model is intended for commercial and research use in English. The model provides uses for applications which require: 1) Memory/compute constrained environments 2) Latency bound scenarios 3) Strong reasoning (especially code, math and logic) Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features. **Use case considerations** Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fariness before using within a specific downstream use case, particularly for high risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case. Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under. ## How to Use Phi-3 Mini-4K-Instruct has been integrated in the development version (4.41.0.dev0) of `transformers`. Until the official version is released through `pip`, ensure that you are doing one of the following: * When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function. * Update your local `transformers` to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. The previous command is an alternative to cloning and installing from the source. The current `transformers` version can be verified with: `pip list | grep transformers`. Phi-3 Mini-4K-Instruct is also available in [HuggingChat](https://aka.ms/try-phi3-hf-chat). ### Tokenizer Phi-3 Mini-4K-Instruct supports a vocabulary size of up to `32064` tokens. The [tokenizer files](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/blob/main/added_tokens.json) already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size. ### Chat Format Given the nature of the training data, the Phi-3 Mini-4K-Instruct model is best suited for prompts using the chat format as follows. You can provide the prompt as a question with a generic template as follow: ```markdown <|user|>\nQuestion <|end|>\n<|assistant|> ``` For example: ```markdown <|user|> How to explain Internet for a medieval knight?<|end|> <|assistant|> ``` where the model generates the text after `<|assistant|>` . In case of few-shots prompt, the prompt can be formatted as the following: ```markdown <|user|> I am going to Paris, what should I see?<|end|> <|assistant|> Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\nThese are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world."<|end|> <|user|> What is so great about #1?<|end|> <|assistant|> ``` ### Sample inference code This code snippets show how to get quickly started with running the model on a GPU: ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline torch.random.manual_seed(0) model = AutoModelForCausalLM.from_pretrained( "microsoft/Phi-3-mini-4k-instruct", device_map="cuda", torch_dtype="auto", trust_remote_code=True, ) tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct") messages = [ {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"}, {"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."}, {"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"}, ] pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, ) generation_args = { "max_new_tokens": 500, "return_full_text": False, "temperature": 0.0, "do_sample": False, } output = pipe(messages, **generation_args) print(output[0]['generated_text']) ``` *Some applications/frameworks might not include a BOS token (`<s>`) at the start of the conversation. Please ensure that it is included since it provides more reliable results.* ## Responsible AI Considerations Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include: + Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English. + Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases. + Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case. + Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated. + Limited Scope for Code: Majority of Phi-3 training data is based in Python and use common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses. Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include: + Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques. + High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context. + Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG). + Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case. + Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations. ## Training ### Model * Architecture: Phi-3 Mini-4K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with Supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidlines. * Inputs: Text. It is best suited for prompts using chat format. * Context length: 4K tokens * GPUs: 512 H100-80G * Training time: 7 days * Training data: 3.3T tokens * Outputs: Generated text in response to the input * Dates: Our models were trained between February and April 2024 * Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models. ### Datasets Our training data includes a wide variety of sources, totaling 3.3 trillion tokens, and is a combination of 1) Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code; 2) Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.); 3) High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness. ### Fine-tuning A basic example of multi-GPUs supervised fine-tuning (SFT) with TRL and Accelerate modules is provided [here](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/sample_finetune.py). ## Benchmarks We report the results for Phi-3-Mini-4K-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Phi-2, Mistral-7b-v0.1, Mixtral-8x7b, Gemma 7B, Llama-3-8B-Instruct, and GPT-3.5. All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation. As is now standard, we use few-shot prompts to evaluate the models, at temperature 0. The prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3. More specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model. The number of k–shot examples is listed per-benchmark. | | Phi-3-Mini-4K-In<br>3.8b | Phi-3-Small<br>7b (preview) | Phi-3-Medium<br>14b (preview) | Phi-2<br>2.7b | Mistral<br>7b | Gemma<br>7b | Llama-3-In<br>8b | Mixtral<br>8x7b | GPT-3.5<br>version 1106 | |---|---|---|---|---|---|---|---|---|---| | MMLU <br>5-Shot | 68.8 | 75.3 | 78.2 | 56.3 | 61.7 | 63.6 | 66.5 | 68.4 | 71.4 | | HellaSwag <br> 5-Shot | 76.7 | 78.7 | 83.2 | 53.6 | 58.5 | 49.8 | 71.1 | 70.4 | 78.8 | | ANLI <br> 7-Shot | 52.8 | 55.0 | 58.7 | 42.5 | 47.1 | 48.7 | 57.3 | 55.2 | 58.1 | | GSM-8K <br> 0-Shot; CoT | 82.5 | 86.4 | 90.8 | 61.1 | 46.4 | 59.8 | 77.4 | 64.7 | 78.1 | | MedQA <br> 2-Shot | 53.8 | 58.2 | 69.8 | 40.9 | 49.6 | 50.0 | 60.5 | 62.2 | 63.4 | | AGIEval <br> 0-Shot | 37.5 | 45.0 | 49.7 | 29.8 | 35.1 | 42.1 | 42.0 | 45.2 | 48.4 | | TriviaQA <br> 5-Shot | 64.0 | 59.1 | 73.3 | 45.2 | 72.3 | 75.2 | 67.7 | 82.2 | 85.8 | | Arc-C <br> 10-Shot | 84.9 | 90.7 | 91.9 | 75.9 | 78.6 | 78.3 | 82.8 | 87.3 | 87.4 | | Arc-E <br> 10-Shot | 94.6 | 97.1 | 98.0 | 88.5 | 90.6 | 91.4 | 93.4 | 95.6 | 96.3 | | PIQA <br> 5-Shot | 84.2 | 87.8 | 88.2 | 60.2 | 77.7 | 78.1 | 75.7 | 86.0 | 86.6 | | SociQA <br> 5-Shot | 76.6 | 79.0 | 79.4 | 68.3 | 74.6 | 65.5 | 73.9 | 75.9 | 68.3 | | BigBench-Hard <br> 0-Shot | 71.7 | 75.0 | 82.5 | 59.4 | 57.3 | 59.6 | 51.5 | 69.7 | 68.32 | | WinoGrande <br> 5-Shot | 70.8 | 82.5 | 81.2 | 54.7 | 54.2 | 55.6 | 65 | 62.0 | 68.8 | | OpenBookQA <br> 10-Shot | 83.2 | 88.4 | 86.6 | 73.6 | 79.8 | 78.6 | 82.6 | 85.8 | 86.0 | | BoolQ <br> 0-Shot | 77.6 | 82.9 | 86.5 | -- | 72.2 | 66.0 | 80.9 | 77.6 | 79.1 | | CommonSenseQA <br> 10-Shot | 80.2 | 80.3 | 82.6 | 69.3 | 72.6 | 76.2 | 79 | 78.1 | 79.6 | | TruthfulQA <br> 10-Shot | 65.0 | 68.1 | 74.8 | -- | 52.1 | 53.0 | 63.2 | 60.1 | 85.8 | | HumanEval <br> 0-Shot | 59.1 | 59.1 | 54.7 | 47.0 | 28.0 | 34.1 | 60.4 | 37.8 | 62.2 | | MBPP <br> 3-Shot | 53.8 | 71.4 | 73.7 | 60.6 | 50.8 | 51.5 | 67.7 | 60.2 | 77.8 | ## Software * [PyTorch](https://github.com/pytorch/pytorch) * [DeepSpeed](https://github.com/microsoft/DeepSpeed) * [Transformers](https://github.com/huggingface/transformers) * [Flash-Attention](https://github.com/HazyResearch/flash-attention) ## Hardware Note that by default, the Phi-3-mini model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types: * NVIDIA A100 * NVIDIA A6000 * NVIDIA H100 If you want to run the model on: * NVIDIA V100 or earlier generation GPUs: call AutoModelForCausalLM.from_pretrained() with attn_implementation="eager" * CPU: use the **GGUF** quantized models [4K](https://aka.ms/Phi3-mini-4k-instruct-gguf) + Optimized inference on GPU, CPU, and Mobile: use the **ONNX** models [4K](https://aka.ms/Phi3-mini-4k-instruct-onnx) ## Cross Platform Support ONNX runtime ecosystem now supports Phi-3 Mini models across platforms and hardware. You can find the optimized Phi-3 Mini-4K-Instruct ONNX model [here](https://aka.ms/phi3-mini-4k-instruct-onnx). Optimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML support lets developers bring hardware acceleration to Windows devices at scale across AMD, Intel, and NVIDIA GPUs. Along with DirectML, ONNX Runtime provides cross platform support for Phi-3 across a range of devices CPU, GPU, and mobile. Here are some of the optimized configurations we have added: 1. ONNX models for int4 DML: Quantized to int4 via AWQ 2. ONNX model for fp16 CUDA 3. ONNX model for int4 CUDA: Quantized to int4 via RTN 4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN ## License The model is licensed under the [MIT license](https://huggingface.co/microsoft/Phi-3-mini-4k/resolve/main/LICENSE). ## Trademarks This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies.
[ "MEDQA" ]
cc12138/test-pubmed
cc12138
null
[ "transformers", "pytorch", "jax", "bert", "arxiv:2007.15779", "license:mit", "endpoints_compatible", "region:us" ]
2024-06-17T03:51:54Z
2024-06-17T03:56:32+00:00
14
0
--- license: mit --- ## PubMedBERT (abstracts only) Pretraining large neural language models, such as BERT, has led to impressive gains on many natural language processing (NLP) tasks. However, most pretraining efforts focus on general domain corpora, such as newswire and Web. A prevailing assumption is that even domain-specific pretraining can benefit by starting from general-domain language models. [Recent work](https://arxiv.org/abs/2007.15779) shows that for domains with abundant unlabeled text, such as biomedicine, pretraining language models from scratch results in substantial gains over continual pretraining of general-domain language models. This PubMedBERT is pretrained from scratch using _abstracts_ from [PubMed](https://pubmed.ncbi.nlm.nih.gov/). This model achieves state-of-the-art performance on several biomedical NLP tasks, as shown on the [Biomedical Language Understanding and Reasoning Benchmark](https://aka.ms/BLURB). ## Citation If you find PubMedBERT useful in your research, please cite the following paper: ```latex @misc{pubmedbert, author = {Yu Gu and Robert Tinn and Hao Cheng and Michael Lucas and Naoto Usuyama and Xiaodong Liu and Tristan Naumann and Jianfeng Gao and Hoifung Poon}, title = {Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing}, year = {2020}, eprint = {arXiv:2007.15779}, } ```
[ "BLURB" ]
Casual-Autopsy/L3-Uncen-Merger-Omelette-RP-v0.2-8B
Casual-Autopsy
text-generation
[ "transformers", "safetensors", "llama", "text-generation", "merge", "mergekit", "lazymergekit", "not-for-all-audiences", "nsfw", "rp", "roleplay", "role-play", "conversational", "en", "base_model:Cas-Warehouse/Llama-3-Mopeyfied-Psychology-v2", "base_model:merge:Cas-Warehouse/Llama-3-Mopeyfied-Psychology-v2", "base_model:Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B", "base_model:merge:Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B", "base_model:Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B", "base_model:merge:Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B", "base_model:ChaoticNeutrals/Hathor_RP-v.01-L3-8B", "base_model:merge:ChaoticNeutrals/Hathor_RP-v.01-L3-8B", "base_model:ChaoticNeutrals/Poppy_Porpoise-1.4-L3-8B", "base_model:merge:ChaoticNeutrals/Poppy_Porpoise-1.4-L3-8B", "base_model:Nitral-AI/Hathor_Stable-v0.2-L3-8B", "base_model:merge:Nitral-AI/Hathor_Stable-v0.2-L3-8B", "base_model:ResplendentAI/Nymph_8B", "base_model:merge:ResplendentAI/Nymph_8B", "base_model:Sao10K/L3-8B-Stheno-v3.1", "base_model:merge:Sao10K/L3-8B-Stheno-v3.1", "base_model:Sao10K/L3-8B-Stheno-v3.2", "base_model:merge:Sao10K/L3-8B-Stheno-v3.2", "base_model:aifeifei798/llama3-8B-DarkIdol-1.0", "base_model:merge:aifeifei798/llama3-8B-DarkIdol-1.0", "base_model:bluuwhale/L3-SthenoMaidBlackroot-8B-V1", "base_model:merge:bluuwhale/L3-SthenoMaidBlackroot-8B-V1", "base_model:cgato/L3-TheSpice-8b-v0.8.3", "base_model:merge:cgato/L3-TheSpice-8b-v0.8.3", "base_model:migtissera/Llama-3-8B-Synthia-v3.5", "base_model:merge:migtissera/Llama-3-8B-Synthia-v3.5", "base_model:tannedbum/L3-Nymeria-8B", "base_model:merge:tannedbum/L3-Nymeria-8B", "base_model:tannedbum/L3-Nymeria-Maid-8B", "base_model:merge:tannedbum/L3-Nymeria-Maid-8B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-06-25T19:35:58Z
2024-07-02T00:16:03+00:00
14
9
--- base_model: - Sao10K/L3-8B-Stheno-v3.2 - Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B - bluuwhale/L3-SthenoMaidBlackroot-8B-V1 - Cas-Warehouse/Llama-3-Mopeyfied-Psychology-v2 - migtissera/Llama-3-8B-Synthia-v3.5 - tannedbum/L3-Nymeria-Maid-8B - Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B - tannedbum/L3-Nymeria-8B - ChaoticNeutrals/Hathor_RP-v.01-L3-8B - cgato/L3-TheSpice-8b-v0.8.3 - Sao10K/L3-8B-Stheno-v3.1 - Nitral-AI/Hathor_Stable-v0.2-L3-8B - aifeifei798/llama3-8B-DarkIdol-1.0 - ChaoticNeutrals/Poppy_Porpoise-1.4-L3-8B - ResplendentAI/Nymph_8B language: - en tags: - merge - mergekit - lazymergekit - not-for-all-audiences - nsfw - rp - roleplay - role-play --- # L3-Uncen-Merger-Omelette-RP-v0.2-8B L3-Uncen-Merger-Omelette-RP-v0.2-8B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2) * [Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B](https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B) * [bluuwhale/L3-SthenoMaidBlackroot-8B-V1](https://huggingface.co/bluuwhale/L3-SthenoMaidBlackroot-8B-V1) * [Cas-Warehouse/Llama-3-Mopeyfied-Psychology-v2](https://huggingface.co/Cas-Warehouse/Llama-3-Mopeyfied-Psychology-v2) * [migtissera/Llama-3-8B-Synthia-v3.5](https://huggingface.co/migtissera/Llama-3-8B-Synthia-v3.5) * [tannedbum/L3-Nymeria-Maid-8B](https://huggingface.co/tannedbum/L3-Nymeria-Maid-8B) * [Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B](https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B) * [tannedbum/L3-Nymeria-8B](https://huggingface.co/tannedbum/L3-Nymeria-8B) * [ChaoticNeutrals/Hathor_RP-v.01-L3-8B](https://huggingface.co/ChaoticNeutrals/Hathor_RP-v.01-L3-8B) * [cgato/L3-TheSpice-8b-v0.8.3](https://huggingface.co/cgato/L3-TheSpice-8b-v0.8.3) * [Sao10K/L3-8B-Stheno-v3.1](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.1) * [Nitral-AI/Hathor_Stable-v0.2-L3-8B](https://huggingface.co/Nitral-AI/Hathor_Stable-v0.2-L3-8B) * [aifeifei798/llama3-8B-DarkIdol-1.0](https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-1.0) * [ChaoticNeutrals/Poppy_Porpoise-1.4-L3-8B](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-1.4-L3-8B) * [ResplendentAI/Nymph_8B](https://huggingface.co/ResplendentAI/Nymph_8B) # Secret Sauce ## Scrambled-Egg-1 ```yaml models: - model: Sao10K/L3-8B-Stheno-v3.2 - model: Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B parameters: density: 0.45 weight: 0.33 - model: bluuwhale/L3-SthenoMaidBlackroot-8B-V1 parameters: density: 0.75 weight: 0.33 merge_method: dare_ties base_model: Sao10K/L3-8B-Stheno-v3.2 parameters: int8_mask: true dtype: bfloat16 ``` ## Scrambled-Egg-2 ```yaml models: - model: Cas-Warehouse/Llama-3-Mopeyfied-Psychology-v2 - model: migtissera/Llama-3-8B-Synthia-v3.5 parameters: density: 0.35 weight: 0.25 - model: tannedbum/L3-Nymeria-Maid-8B parameters: density: 0.65 weight: 0.25 merge_method: dare_ties base_model: Cas-Warehouse/Llama-3-Mopeyfied-Psychology-v2 parameters: int8_mask: true dtype: bfloat16 ``` ## Scrambled-Egg-3 ```yaml models: - model: Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B - model: tannedbum/L3-Nymeria-8B parameters: density: 0.5 weight: 0.35 - model: ChaoticNeutrals/Hathor_RP-v.01-L3-8B parameters: density: 0.4 weight: 0.2 merge_method: dare_ties base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B parameters: int8_mask: true dtype: bfloat16 ``` ## Omelette-1 ```yaml models: - model: Casual-Autopsy/Scrambled-Egg-1 - model: Casual-Autopsy/Scrambled-Egg-3 merge_method: slerp base_model: Casual-Autopsy/Scrambled-Egg-1 parameters: t: - value: [0.1, 0.15, 0.2, 0.4, 0.6, 0.4, 0.2, 0.15, 0.1] embed_slerp: true dtype: bfloat16 ``` ## Omelette-2 ```yaml models: - model: Casual-Autopsy/Omelette-1 - model: Casual-Autopsy/Scrambled-Egg-2 merge_method: slerp base_model: Casual-Autopsy/Omelette-1 parameters: t: - value: [0.7, 0.5, 0.3, 0.25, 0.2, 0.25, 0.3, 0.5, 0.7] embed_slerp: true dtype: bfloat16 ``` ## L3-Uncen-Merger-Omelette-RP-v0.2-8B ```yaml models: - model: Casual-Autopsy/Omelette-2 - model: cgato/L3-TheSpice-8b-v0.8.3 parameters: weight: 0.01 - model: Sao10K/L3-8B-Stheno-v3.1 parameters: weight: 0.01 - model: Nitral-AI/Hathor_Stable-v0.2-L3-8B parameters: weight: 0.01 - model: aifeifei798/llama3-8B-DarkIdol-1.0 parameters: weight: 0.02 - model: ChaoticNeutrals/Poppy_Porpoise-1.4-L3-8B parameters: weight: 0.025 - model: ResplendentAI/Nymph_8B parameters: weight: 0.025 merge_method: task_arithmetic base_model: Casual-Autopsy/Omelette-2 dtype: bfloat16 ```
[ "CAS" ]
sail/data-mixture-pile-cc-1b
sail
text-generation
[ "transformers", "llama", "text-generation", "regmix", "en", "dataset:sail/regmix-data", "dataset:sail/regmix-data-sample", "arxiv:2407.01492", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-07-01T14:59:35Z
2024-07-11T03:00:37+00:00
14
3
--- datasets: - sail/regmix-data - sail/regmix-data-sample language: - en license: mit tags: - regmix --- # Models Trained with Human Selection This is a collection of the language models trained using Pile-CC, each with approximately 1B parameters, trained on different seeds. This project aims to validate the generalization capabilities of the RegMix approach (https://huggingface.co/papers/2407.01492) from small-scale (e.g., 1M parameters) to large-scale (e.g., 1B parameters) models. ## Key Features - **Model Size**: 5 separate models trained with different seeds, each with ~1B parameters - **Training Data**: The pile-cc only data mixture on the [RegMix-Data](https://huggingface.co/datasets/sail/regmix-data) dataset ## Dataset The models were trained using the [RegMix-Data](https://huggingface.co/datasets/sail/regmix-data) dataset, which is split into different domains from The Pile dataset. ## Training Hyperparameters | Hyperparameter | Value | |:---------------|:------| | Batch Size | 1M tokens | | Learning Rate | 4e-4 | | Minimum Learning Rate | 1e-5 | | Learning Rate Schedule | Cosine | | Warmup Ratio | 4% | | Total Tokens | 25B | ## How to Load a Model You can load any model using the corresponding branch with the Hugging Face Transformers library: ```python from transformers import AutoModel, AutoTokenizer model = AutoModel.from_pretrained("sail/data-mixture-pile-cc-1b", revision="seed-1") tokenizer = AutoTokenizer.from_pretrained("sail/data-mixture-pile-cc-1b", revision="seed-1") ``` ## Data Mixture The specific data mixture used for training this 1B model is as follows, which can be also found in [our code](https://github.com/sail-sg/regmix/blob/main/mixture_config/config_1b/human.yaml): ```yaml train: train_the_pile_pile_cc: 1.0 valid: valid_the_pile_pile_cc: 1.0 model_name: tinyllama_1_1b ``` ## Model Variants To access different model variants, simply change the `revision` parameter in the `from_pretrained` method to the desired seed (e.g., "seed-2", "seed-3"), and the maxium seed is 5. ## Model Performance We evaluated each model using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). The performance metric for each task is the average of 0-shot to 5-shot `accnorm` (accuracy normalized, if available) or `acc` (accuracy) scores. | Seed | PIQA | LAMBADA | MultiRC | LogiQA | SocialIQA | Winogrande | RACE | OpenBookQA | COPA | HellaSwag | SciQ | ARC Easy | QQP | Average | |------|------|---------|---------|--------|-----------|------------|------|------------|------|-----------|------|----------|-----|---------| | 1 | 69.23 | 33.16 | 50.33 | 27.57 | 33.22 | 52.10 | 31.80 | 31.07 | 65.83 | 44.15 | 81.77 | 51.80 | 57.04 | 48.39 | | 2 | 68.62 | 33.69 | 53.15 | 25.13 | 32.96 | 51.24 | 31.06 | 30.84 | 69.80 | 43.28 | 83.18 | 52.00 | 58.06 | 48.69 | | 3 | 69.04 | 35.68 | 52.38 | 26.36 | 33.45 | 51.95 | 30.83 | 30.16 | 66.80 | 42.80 | 83.32 | 51.57 | 57.69 | 48.62 | | 4 | 69.35 | 33.56 | 50.01 | 26.24 | 33.62 | 50.99 | 31.81 | 30.44 | 65.60 | 43.00 | 83.00 | 52.33 | 56.14 | 48.16 | | 5 | 67.91 | 35.09 | 49.93 | 27.50 | 33.90 | 52.85 | 31.77 | 30.04 | 69.40 | 42.62 | 80.94 | 51.25 | 61.03 | 48.79 | ## Usage Notes - These models are primarily intended for research purposes. - Performance may vary depending on the specific task and domain. ## Citation If you use these models in your research, please cite the RegMix paper: ``` @article{liu2024regmix, title={RegMix: Data Mixture as Regression for Language Model Pre-training}, author={Liu, Qian and Zheng, Xiaosen and Muennighoff, Niklas and Zeng, Guangtao and Dou, Longxu and Pang, Tianyu and Jiang, Jing and Lin, Min}, journal={arXiv preprint arXiv:2407.01492}, year={2024} } ``` For more information about the RegMix methodology and its applications, please refer to the [original paper](https://huggingface.co/papers/2407.01492).
[ "SCIQ" ]
bobox/DeBERTaV3-small-SenTra-AdaptiveLayers-AllSoft-HighTemp
bobox
sentence-similarity
[ "sentence-transformers", "pytorch", "deberta-v2", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:78183", "loss:AdaptiveLayerLoss", "loss:CoSENTLoss", "loss:GISTEmbedLoss", "loss:OnlineContrastiveLoss", "loss:MultipleNegativesSymmetricRankingLoss", "en", "dataset:sentence-transformers/all-nli", "dataset:sentence-transformers/stsb", "dataset:tals/vitaminc", "dataset:nyu-mll/glue", "dataset:allenai/scitail", "dataset:sentence-transformers/xsum", "dataset:sentence-transformers/sentence-compression", "dataset:allenai/sciq", "dataset:allenai/qasc", "dataset:allenai/openbookqa", "dataset:sentence-transformers/msmarco-msmarco-distilbert-base-v3", "dataset:sentence-transformers/natural-questions", "dataset:sentence-transformers/trivia-qa", "dataset:sentence-transformers/quora-duplicates", "dataset:sentence-transformers/gooaq", "arxiv:1908.10084", "arxiv:2402.14776", "arxiv:2402.16829", "base_model:microsoft/deberta-v3-small", "base_model:finetune:microsoft/deberta-v3-small", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-07-03T14:15:09Z
2024-07-04T00:06:59+00:00
14
1
--- base_model: microsoft/deberta-v3-small datasets: - sentence-transformers/all-nli - sentence-transformers/stsb - tals/vitaminc - nyu-mll/glue - allenai/scitail - sentence-transformers/xsum - sentence-transformers/sentence-compression - allenai/sciq - allenai/qasc - allenai/openbookqa - sentence-transformers/msmarco-msmarco-distilbert-base-v3 - sentence-transformers/natural-questions - sentence-transformers/trivia-qa - sentence-transformers/quora-duplicates - sentence-transformers/gooaq language: - en library_name: sentence-transformers metrics: - pearson_cosine - spearman_cosine - pearson_manhattan - spearman_manhattan - pearson_euclidean - spearman_euclidean - pearson_dot - spearman_dot - pearson_max - spearman_max pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:78183 - loss:AdaptiveLayerLoss - loss:CoSENTLoss - loss:GISTEmbedLoss - loss:OnlineContrastiveLoss - loss:MultipleNegativesSymmetricRankingLoss widget: - source_sentence: The X and Y chromosomes in human beings that determine the sex of an individual. sentences: - A glacier leaves behind bare rock when it retreats. - Prokaryotes are unicellular organisms that lack organelles surrounded by membranes. - Mammalian sex determination is determined genetically by the presence of chromosomes identified by the letters x and y. - source_sentence: Police officer with riot shield stands in front of crowd. sentences: - A police officer stands in front of a crowd. - A pair of people play video games together on a couch. - People are outside digging a hole. - source_sentence: A young girl sitting on a white comforter on a bed covered with clothing, holding a yellow stuffed duck. sentences: - A man standing in a room is pointing up. - A Little girl is enjoying cake outside. - A yellow duck being held by a girl. - source_sentence: A teenage girl in winter clothes slides down a decline in a red sled. sentences: - A woman preparing vegetables. - A girl is sliding on a red sled. - A person is on a beach. - source_sentence: How many hymns of Luther were included in the Achtliederbuch? sentences: - the ABC News building was renamed Peter Jennings Way in 2006 in honor of the recently deceased longtime ABC News chief anchor and anchor of World News Tonight. - In early 2009, Disney–ABC Television Group merged ABC Entertainment and ABC Studios into a new division, ABC Entertainment Group, which would be responsible for both its production and broadcasting operations. - Luther's hymns were included in early Lutheran hymnals and spread the ideas of the Reformation. model-index: - name: SentenceTransformer based on microsoft/deberta-v3-small results: - task: type: semantic-similarity name: Semantic Similarity dataset: name: sts test type: sts-test metrics: - type: pearson_cosine value: 0.566653720937157 name: Pearson Cosine - type: spearman_cosine value: 0.5551442914704277 name: Spearman Cosine - type: pearson_manhattan value: 0.5771354814213894 name: Pearson Manhattan - type: spearman_manhattan value: 0.5723970841918167 name: Spearman Manhattan - type: pearson_euclidean value: 0.5619024776733639 name: Pearson Euclidean - type: spearman_euclidean value: 0.5593253322063549 name: Spearman Euclidean - type: pearson_dot value: 0.23527108587659004 name: Pearson Dot - type: spearman_dot value: 0.24219982461742934 name: Spearman Dot - type: pearson_max value: 0.5771354814213894 name: Pearson Max - type: spearman_max value: 0.5723970841918167 name: Spearman Max - type: pearson_cosine value: 0.566653720937157 name: Pearson Cosine - type: spearman_cosine value: 0.5551442914704277 name: Spearman Cosine - type: pearson_manhattan value: 0.5771354814213894 name: Pearson Manhattan - type: spearman_manhattan value: 0.5723970841918167 name: Spearman Manhattan - type: pearson_euclidean value: 0.5619024776733639 name: Pearson Euclidean - type: spearman_euclidean value: 0.5593253322063549 name: Spearman Euclidean - type: pearson_dot value: 0.23527108587659004 name: Pearson Dot - type: spearman_dot value: 0.24219982461742934 name: Spearman Dot - type: pearson_max value: 0.5771354814213894 name: Pearson Max - type: spearman_max value: 0.5723970841918167 name: Spearman Max - type: pearson_cosine value: 0.566653720937157 name: Pearson Cosine - type: spearman_cosine value: 0.5551442914704277 name: Spearman Cosine - type: pearson_manhattan value: 0.5771354814213894 name: Pearson Manhattan - type: spearman_manhattan value: 0.5723970841918167 name: Spearman Manhattan - type: pearson_euclidean value: 0.5619024776733639 name: Pearson Euclidean - type: spearman_euclidean value: 0.5593253322063549 name: Spearman Euclidean - type: pearson_dot value: 0.23527108587659004 name: Pearson Dot - type: spearman_dot value: 0.24219982461742934 name: Spearman Dot - type: pearson_max value: 0.5771354814213894 name: Pearson Max - type: spearman_max value: 0.5723970841918167 name: Spearman Max - type: pearson_cosine value: 0.566653720937157 name: Pearson Cosine - type: spearman_cosine value: 0.5551442914704277 name: Spearman Cosine - type: pearson_manhattan value: 0.5771354814213894 name: Pearson Manhattan - type: spearman_manhattan value: 0.5723970841918167 name: Spearman Manhattan - type: pearson_euclidean value: 0.5619024776733639 name: Pearson Euclidean - type: spearman_euclidean value: 0.5593253322063549 name: Spearman Euclidean - type: pearson_dot value: 0.23527108587659004 name: Pearson Dot - type: spearman_dot value: 0.24219982461742934 name: Spearman Dot - type: pearson_max value: 0.5771354814213894 name: Pearson Max - type: spearman_max value: 0.5723970841918167 name: Spearman Max --- # SentenceTransformer based on microsoft/deberta-v3-small This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on the [nli-pairs](https://huggingface.co/datasets/sentence-transformers/all-nli), [sts-label](https://huggingface.co/datasets/sentence-transformers/stsb), [vitaminc-pairs](https://huggingface.co/datasets/tals/vitaminc), [qnli-contrastive](https://huggingface.co/datasets/nyu-mll/glue), [scitail-pairs-qa](https://huggingface.co/datasets/allenai/scitail), [scitail-pairs-pos](https://huggingface.co/datasets/allenai/scitail), [xsum-pairs](https://huggingface.co/datasets/sentence-transformers/xsum), [compression-pairs](https://huggingface.co/datasets/sentence-transformers/sentence-compression), [sciq_pairs](https://huggingface.co/datasets/allenai/sciq), [qasc_pairs](https://huggingface.co/datasets/allenai/qasc), [openbookqa_pairs](https://huggingface.co/datasets/allenai/openbookqa), [msmarco_pairs](https://huggingface.co/datasets/sentence-transformers/msmarco-msmarco-distilbert-base-v3), [nq_pairs](https://huggingface.co/datasets/sentence-transformers/natural-questions), [trivia_pairs](https://huggingface.co/datasets/sentence-transformers/trivia-qa), [quora_pairs](https://huggingface.co/datasets/sentence-transformers/quora-duplicates) and [gooaq_pairs](https://huggingface.co/datasets/sentence-transformers/gooaq) datasets. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. \n\n train_loss = AdaptiveLayerLoss(model=model, loss=train_loss, n_layers_per_step = -1, last_layer_weight = 1.5, prior_layers_weight= 0.15, kl_div_weight = 2, kl_temperature= 2,) num_epochs = 4, learning_rate = 2e-5, warmup_ratio=0.25, weight_decay = 5e-7, schedule = "cosine_with_restarts", num_cycles = 5 ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) <!-- at revision a36c739020e01763fe789b4b85e2df55d6180012 --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 768 tokens - **Similarity Function:** Cosine Similarity - **Training Datasets:** - [nli-pairs](https://huggingface.co/datasets/sentence-transformers/all-nli) - [sts-label](https://huggingface.co/datasets/sentence-transformers/stsb) - [vitaminc-pairs](https://huggingface.co/datasets/tals/vitaminc) - [qnli-contrastive](https://huggingface.co/datasets/nyu-mll/glue) - [scitail-pairs-qa](https://huggingface.co/datasets/allenai/scitail) - [scitail-pairs-pos](https://huggingface.co/datasets/allenai/scitail) - [xsum-pairs](https://huggingface.co/datasets/sentence-transformers/xsum) - [compression-pairs](https://huggingface.co/datasets/sentence-transformers/sentence-compression) - [sciq_pairs](https://huggingface.co/datasets/allenai/sciq) - [qasc_pairs](https://huggingface.co/datasets/allenai/qasc) - [openbookqa_pairs](https://huggingface.co/datasets/allenai/openbookqa) - [msmarco_pairs](https://huggingface.co/datasets/sentence-transformers/msmarco-msmarco-distilbert-base-v3) - [nq_pairs](https://huggingface.co/datasets/sentence-transformers/natural-questions) - [trivia_pairs](https://huggingface.co/datasets/sentence-transformers/trivia-qa) - [quora_pairs](https://huggingface.co/datasets/sentence-transformers/quora-duplicates) - [gooaq_pairs](https://huggingface.co/datasets/sentence-transformers/gooaq) - **Language:** en <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DebertaV2Model (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("bobox/DeBERTaV3-small-SenTra-AdaptiveLayers-AllSoft-HighTemp") # Run inference sentences = [ 'How many hymns of Luther were included in the Achtliederbuch?', "Luther's hymns were included in early Lutheran hymnals and spread the ideas of the Reformation.", 'the ABC News building was renamed Peter Jennings Way in 2006 in honor of the recently deceased longtime ABC News chief anchor and anchor of World News Tonight.', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Semantic Similarity * Dataset: `sts-test` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:-----------| | pearson_cosine | 0.5667 | | **spearman_cosine** | **0.5551** | | pearson_manhattan | 0.5771 | | spearman_manhattan | 0.5724 | | pearson_euclidean | 0.5619 | | spearman_euclidean | 0.5593 | | pearson_dot | 0.2353 | | spearman_dot | 0.2422 | | pearson_max | 0.5771 | | spearman_max | 0.5724 | #### Semantic Similarity * Dataset: `sts-test` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:-----------| | pearson_cosine | 0.5667 | | **spearman_cosine** | **0.5551** | | pearson_manhattan | 0.5771 | | spearman_manhattan | 0.5724 | | pearson_euclidean | 0.5619 | | spearman_euclidean | 0.5593 | | pearson_dot | 0.2353 | | spearman_dot | 0.2422 | | pearson_max | 0.5771 | | spearman_max | 0.5724 | #### Semantic Similarity * Dataset: `sts-test` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:-----------| | pearson_cosine | 0.5667 | | **spearman_cosine** | **0.5551** | | pearson_manhattan | 0.5771 | | spearman_manhattan | 0.5724 | | pearson_euclidean | 0.5619 | | spearman_euclidean | 0.5593 | | pearson_dot | 0.2353 | | spearman_dot | 0.2422 | | pearson_max | 0.5771 | | spearman_max | 0.5724 | #### Semantic Similarity * Dataset: `sts-test` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:-----------| | pearson_cosine | 0.5667 | | **spearman_cosine** | **0.5551** | | pearson_manhattan | 0.5771 | | spearman_manhattan | 0.5724 | | pearson_euclidean | 0.5619 | | spearman_euclidean | 0.5593 | | pearson_dot | 0.2353 | | spearman_dot | 0.2422 | | pearson_max | 0.5771 | | spearman_max | 0.5724 | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Datasets #### nli-pairs * Dataset: [nli-pairs](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab) * Size: 6,500 training samples * Columns: <code>sentence1</code> and <code>sentence2</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | |:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 5 tokens</li><li>mean: 16.62 tokens</li><li>max: 62 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 9.46 tokens</li><li>max: 29 tokens</li></ul> | * Samples: | sentence1 | sentence2 | |:---------------------------------------------------------------------------|:-------------------------------------------------| | <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is outdoors, on a horse.</code> | | <code>Children smiling and waving at camera</code> | <code>There are children present</code> | | <code>A boy is jumping on skateboard in the middle of a red bridge.</code> | <code>The boy does a skateboarding trick.</code> | * Loss: [<code>AdaptiveLayerLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#adaptivelayerloss) with these parameters: ```json { "loss": "GISTEmbedLoss", "n_layers_per_step": -1, "last_layer_weight": 1.5, "prior_layers_weight": 0.15, "kl_div_weight": 2, "kl_temperature": 2 } ``` #### sts-label * Dataset: [sts-label](https://huggingface.co/datasets/sentence-transformers/stsb) at [ab7a5ac](https://huggingface.co/datasets/sentence-transformers/stsb/tree/ab7a5ac0e35aa22088bdcf23e7fd99b220e53308) * Size: 5,749 training samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 6 tokens</li><li>mean: 9.81 tokens</li><li>max: 27 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 9.74 tokens</li><li>max: 25 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.54</li><li>max: 1.0</li></ul> | * Samples: | sentence1 | sentence2 | score | |:-----------------------------------------------------------|:----------------------------------------------------------------------|:------------------| | <code>A plane is taking off.</code> | <code>An air plane is taking off.</code> | <code>1.0</code> | | <code>A man is playing a large flute.</code> | <code>A man is playing a flute.</code> | <code>0.76</code> | | <code>A man is spreading shreded cheese on a pizza.</code> | <code>A man is spreading shredded cheese on an uncooked pizza.</code> | <code>0.76</code> | * Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" } ``` #### vitaminc-pairs * Dataset: [vitaminc-pairs](https://huggingface.co/datasets/tals/vitaminc) at [be6febb](https://huggingface.co/datasets/tals/vitaminc/tree/be6febb761b0b2807687e61e0b5282e459df2fa0) * Size: 3,194 training samples * Columns: <code>label</code>, <code>sentence1</code>, and <code>sentence2</code> * Approximate statistics based on the first 1000 samples: | | label | sentence1 | sentence2 | |:--------|:-----------------------------|:---------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | int | string | string | | details | <ul><li>1: 100.00%</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 15.8 tokens</li><li>max: 75 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 38.29 tokens</li><li>max: 512 tokens</li></ul> | * Samples: | label | sentence1 | sentence2 | |:---------------|:---------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>1</code> | <code>Kyle Kendricks was otherwise called the Professor .</code> | <code>`` Chicago Cubs ( �present ) } } Kyle Christian Hendricks ( born December 7 , 1989 ) , nicknamed `` '' The Proffessor , '' '' is an American professional baseball pitcher for the Chicago Cubs of Major League Baseball ( MLB ) . ''</code> | | <code>1</code> | <code>Since 1982 , 533 people have been executed in Texas .</code> | <code>Since the death penalty was re-instituted in the United States with the 1976 Gregg v. Georgia decision , Texas has executed more inmates than any other state , beginning in 1982 with the execution of Charles Brooks , Jr.. Since 1982 , 533 people have been executed in Texas. 1923 , the Texas Department of Criminal Justice ( TDCJ ) has been in charge of executions in the state .</code> | | <code>1</code> | <code>Hilltop Hoods have released two `` restrung '' albums .</code> | <code>`` The group released its first extended play , Back Once Again , in 1997 and have subsequently released seven studio albums , two `` '' restrung '' '' albums and three DVDs . ''</code> | * Loss: [<code>AdaptiveLayerLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#adaptivelayerloss) with these parameters: ```json { "loss": "GISTEmbedLoss", "n_layers_per_step": -1, "last_layer_weight": 1.5, "prior_layers_weight": 0.15, "kl_div_weight": 2, "kl_temperature": 2 } ``` #### qnli-contrastive * Dataset: [qnli-contrastive](https://huggingface.co/datasets/nyu-mll/glue) at [bcdcba7](https://huggingface.co/datasets/nyu-mll/glue/tree/bcdcba79d07bc864c1c254ccfcedcce55bcc9a8c) * Size: 4,000 training samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | label | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------| | type | string | string | int | | details | <ul><li>min: 6 tokens</li><li>mean: 13.79 tokens</li><li>max: 40 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 35.8 tokens</li><li>max: 499 tokens</li></ul> | <ul><li>0: 100.00%</li></ul> | * Samples: | sentence1 | sentence2 | label | |:-----------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------| | <code>Vinters have adopted solar technology to do what?</code> | <code>More recently the technology has been embraced by vinters, who use the energy generated by solar panels to power grape presses.</code> | <code>0</code> | | <code>Who did Madonna's look and style of dressing influence?</code> | <code>It attracted the attention of organizations who complained that the song and its accompanying video promoted premarital sex and undermined family values, and moralists sought to have the song and video banned.</code> | <code>0</code> | | <code>In addition to hearing him play, what else did people seek from Chopin in London?</code> | <code>The Prince, who was himself a talented musician, moved close to the keyboard to view Chopin's technique.</code> | <code>0</code> | * Loss: [<code>AdaptiveLayerLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#adaptivelayerloss) with these parameters: ```json { "loss": "OnlineContrastiveLoss", "n_layers_per_step": -1, "last_layer_weight": 1.5, "prior_layers_weight": 0.15, "kl_div_weight": 2, "kl_temperature": 2 } ``` #### scitail-pairs-qa * Dataset: [scitail-pairs-qa](https://huggingface.co/datasets/allenai/scitail) at [0cc4353](https://huggingface.co/datasets/allenai/scitail/tree/0cc4353235b289165dfde1c7c5d1be983f99ce44) * Size: 4,300 training samples * Columns: <code>sentence2</code> and <code>sentence1</code> * Approximate statistics based on the first 1000 samples: | | sentence2 | sentence1 | |:--------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 7 tokens</li><li>mean: 16.0 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 14.71 tokens</li><li>max: 34 tokens</li></ul> | * Samples: | sentence2 | sentence1 | |:--------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------| | <code>The fetal period lasts approximately 30 weeks weeks.</code> | <code>Approximately how many weeks does the fetal period last?</code> | | <code>Corals build hard exoskeletons that grow to become coral reefs.</code> | <code>Corals build hard exoskeletons that grow to become what?</code> | | <code>A voltaic cell generates an electric current through a reaction known as a(n) spontaneous redox.</code> | <code>A voltaic cell uses what type of reaction to generate an electric current</code> | * Loss: [<code>AdaptiveLayerLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#adaptivelayerloss) with these parameters: ```json { "loss": "GISTEmbedLoss", "n_layers_per_step": -1, "last_layer_weight": 1.5, "prior_layers_weight": 0.15, "kl_div_weight": 2, "kl_temperature": 2 } ``` #### scitail-pairs-pos * Dataset: [scitail-pairs-pos](https://huggingface.co/datasets/allenai/scitail) at [0cc4353](https://huggingface.co/datasets/allenai/scitail/tree/0cc4353235b289165dfde1c7c5d1be983f99ce44) * Size: 2,200 training samples * Columns: <code>sentence1</code> and <code>sentence2</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 7 tokens</li><li>mean: 23.76 tokens</li><li>max: 74 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 15.27 tokens</li><li>max: 41 tokens</li></ul> | * Samples: | sentence1 | sentence2 | |:-----------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------| | <code>As the water vapor cools, it condenses , forming tiny droplets in clouds.</code> | <code>Clouds are formed from water droplets.</code> | | <code>Poison ivy is green, with three leaflets on each leaf, grows as a shrub or vine, and may be in your yard.</code> | <code>Poison ivy typically has three groups of leaves.</code> | | <code>(Formic acid is the poison found in the > sting of fire ants.)</code> | <code>Formic acid is found in the secretions of stinging ants.</code> | * Loss: [<code>AdaptiveLayerLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#adaptivelayerloss) with these parameters: ```json { "loss": "GISTEmbedLoss", "n_layers_per_step": -1, "last_layer_weight": 1.5, "prior_layers_weight": 0.15, "kl_div_weight": 2, "kl_temperature": 2 } ``` #### xsum-pairs * Dataset: [xsum-pairs](https://huggingface.co/datasets/sentence-transformers/xsum) at [788ddaf](https://huggingface.co/datasets/sentence-transformers/xsum/tree/788ddafe04e539956d56b567bc32a036ee7b9206) * Size: 2,500 training samples * Columns: <code>sentence1</code> and <code>sentence2</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | |:--------|:-------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 14 tokens</li><li>mean: 345.33 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 27.11 tokens</li><li>max: 60 tokens</li></ul> | * Samples: | sentence1 | sentence2 | |:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------| | <code>Rahim Kalantar told the BBC his son Ali, 18, travelled to Syria with two friends from Coventry in March and believed he was now fighting with Isis.<br>He said he was sent "down this road" by an imam - who denied the allegations.<br>Up to 500 Britons are thought to have travelled to the Middle East to fight in the conflict, officials say.<br>Mr Kalantar - speaking to BBC Two's Newsnight, in collaboration with the BBC's Afghan Service and Newsday - said he worries about his son Ali "every minute" and that his grief is "limitless".<br>He said he believed Ali - who was planning to study computer science at university - had been radicalised during classes at a mosque after evening prayer.<br>"He [the imam] encouraged them and sent them down this road," he said.<br>The BBC contacted the mosque to speak to the imam, who refused to give an interview but said he completely denied the allegations.<br>Ali is believed to have travelled to Syria with Rashed Amani, also 18, who had been studying business at Coventry University.<br>Rashed's father, Khabir, said family members had travelled to the Turkish-Syrian border in the hope of finding the boys, but came back "empty-handed" after searching for more than two weeks.<br>He said he did not know what had happened to his son, who he fears has joined Isis - the militant-led group that has made rapid advances through Iraq in recent weeks.<br>"Maybe somebody worked with him, I don't know. Maybe somebody brainwashed him because he was not like that," he said.<br>The third teenager, Moh Ismael, is also believed to be in Syria with his friends. He is understood to have posted a message on Twitter saying he was with Isis.<br>It comes after Britons - including Reyaad Khan and Nasser Muthana from Cardiff - featured in an apparent recruitment video for jihadists in Iraq and Syria.<br>The video was posted online on Friday by accounts with links to Isis.<br>The BBC has learned a third Briton in the video is from Aberdeen. The man, named locally as Raqib, grew up in Scotland but was originally from Bangladesh.<br>Lord Carlile, a former independent reviewer of terrorism laws, told the BBC that the Muslim community was best placed to stop jihadists recruiting in the UK.<br>The Liberal Democrat peer also said the UK needed to reintroduce tougher measures to stop terrorism.<br>It comes after former MI6 director, Richard Barrett, said security services would not be able to track all Britons who return to the UK after fighting in Syria.<br>He said the number of those posing a threat would be small but unpredictable.<br>The Metropolitan Police has insisted it has the tools to monitor British jihadists returning from that country.<br>Shiraz Maher, a radicalisation expert, told Newsnight that social media was now acting as a recruitment ground for potential jihadists in the UK.<br>"You have hundreds of foreign fighters on the ground who in real time are giving you a live feed of what is happening and they are engaged in a conversation.<br>"It is these individual people who have been empowered to become recruiters in their own right," he said.<br>Lord Carlile said the "most important partners" in preventing young Muslims from being radicalised were the "Muslim communities themselves".<br>"Mothers, wives, sisters do not want their husbands, brothers, sons to become valid jihadists and run the risk of being killed in a civil war," he told the programme.<br>He also told BBC Radio 4's World at One programme that the government should look at reintroducing "something like control orders", which were scrapped in 2011 and replaced with the less restrictive Terrorism Prevention and Investigation Measures (TPims).<br>He said: "We need to look at preventing violent extremism before people leave the country and also we need to look for further measures."</code> | <code>The father of a British teenager who travelled to Syria to join jihadists believes his son was radicalised by an imam at a UK mosque.</code> | | <code>Jawad Fairooz and Matar Matar were detained in May after resigning from parliament in protest at the handling of the protests.<br>Mr Matar told the BBC they had been tortured in prison.<br>They were prosecuted in a security court on charges of taking part in illegal protests and defaming the country.<br>It is not clear if they still face trial in a civilian court.<br>Civilian courts took over jurisdiction after King Hamad Bin Issa Al Khalifa lifted a state of emergency in June.<br>Mr Matar told the BBC he believed his arrest had been intended to put a pressure on his al-Wifaq party.<br>"At some stages we were tortured," he said. "In one of the cases we were beaten."<br>Human rights lawyer Mohamed al-Tajir was also released.<br>He was detained in April having defended people arrested during the Saudi-backed suppression of protests in March.<br>Correspondents say their release appears to be an attempt at defusing tensions in the country, a key US ally in the region that hosts the US Navy's 5th Fleet.<br>Bahrain's King Hamad Bin Issa Al Khalifa recently accepted a series of reforms drawn up by a government-backed committee created to address grievances that emerged during the protests.<br>The kingdom's Shia community makes up about 70% of the population but many say they are discriminated against by the minority Sunni monarchy.</code> | <code>Bahrain has freed two former Shia opposition MPs arrested in the wake of widespread anti-government protests.</code> | | <code>Liverpool City Region, in case you were wondering, includes Merseyside's five councils (Knowsley, Liverpool, Sefton, St Helens, and Wirral) as well as Halton in Cheshire.<br>Who are the eight candidates desperate for your support on 4 May, though, and what are their priorities?<br>BBC Radio Merseyside's political reporter Claire Hamilton has produced a potted biography for each of them.<br>We're also asking all of them for a "minute manifesto" video.<br>Candidates are listed below in alphabetical order<br>Roger Bannister, Trade Union & Socialist Coalition<br>Veteran trade unionist Roger Bannister believes the Liverpool City Region Combined Authority should never have approved the contract for a fleet of new driver-only Merseyrail trains. He says he would seek to reverse this decision. He also believes local authorities have passed harmful austerity budgets on people struggling to make ends meet. He stood for Liverpool city mayor in 2016, coming fourth with 5% of the vote.<br>Paul Breen, Get the Coppers off the Jury<br>Paul Breen is a resident of Norris Green, Liverpool and became the last candidate to be nominated. He is listed as treasurer of the party on the Electoral Commission's website, with Patricia Breen listed as deputy treasurer. He has not yet released any material detailing his manifesto but told the BBC the title of his campaign speaks for itself. He simply does not believe that police officers should be allowed to serve on juries.<br>Mr Breen declined to provide a "minute manifesto"<br>Tony Caldeira, Conservative<br>Born in Liverpool and educated in St Helens, Tony Caldeira started out working on a stall selling cushions made by his mother at Liverpool's Great Homer Street market. His business expanded and now operates in Kirkby, distributing world-wide. Mr Caldeira has stood for Liverpool mayor twice, coming sixth in 2016 with just under 4% of the vote. He has pledged to improve the area's transport network, speed up the planning process and build homes and workplaces on brownfield sites rather than green spaces.<br>Carl Cashman, Liberal Democrats<br>Born in Whiston, Knowsley, Carl Cashman is leader of the Liberal Democrat group on Knowsley Council. He and his two Lib Dem council colleagues were elected in 2016, breaking a four-year period when Labour was the only party represented. Aged 25, he's the youngest of the candidates. Mr Cashman believes maintaining strong ties with Europe and the region will be key, and has pledged to open a Liverpool City Region embassy in Brussels. He also wants to better integrate ticketing across public transport and make the current Walrus card more similar to the Oyster card used by Londoners.<br>Tom Crone, Green Party<br>Tom Crone is leader of the Green group on Liverpool City Council. He won 10% of the vote in the mayoral elections in Liverpool in 2016 and came third. Originally from Norwich, he has lived in Liverpool since 2000 after arriving as a student. Mr Crone is keen to see a shift away from traditional heavy industry in the city region towards greener "tech" industries. He's also passionate about making public transport more affordable and environmentally friendly. He says he'll look to prioritise new routes for cyclists and pedestrians.<br>Tabitha Morton, Women's Equality Party<br>Tabitha Morton was born in Netherton, Sefton. She left school with no formal qualifications, and started work at 16 at a local market, and later in cleaning. She was taken on for NVQ training by a company in Liverpool, and stayed on to train others. She now works for a global manufacturer, in what she describes as "a male-dominated industry". She says she would prioritise grants for employers offering equal apprenticeships for young women and men and ring-fence funds for training women in sectors in which they're underrepresented.<br>Steve Rotheram, Labour<br>Born in Kirkby, former bricklayer Steve Rotheram was a city councillor in Liverpool and also Lord Mayor during the city's European Capital of Culture year in 2008. He was also elected MP for Liverpool Walton in 2010, and re-elected to the seat in 2015. Mr Rotheram is pledging to cut the cost of the fast tag for motorists driving through the Mersey tunnels. He wants to improve education and offer better careers advice for young people, and also wants to make brownfield sites more attractive to developers.<br>Paula Walters, UKIP<br>Wallasey-born Paula Walters is chairman of UKIP in Wirral and lives in New Brighton with her family. She has campaigned to scrap tunnel tolls for several years. She says her local UKIP branch is one of the most thriving in the North West. A civil servant, she studied English and biomolecular science at degree-level. She has also lived in South Africa where she attended the University of Pretoria. She believes Liverpool city centre has attracted money at the expense of outlying areas, one of the things she wants to tackle.</code> | <code>Those hoping to become the first mayor of the Liverpool City Region have less than a month remaining in which to secure your vote.</code> | * Loss: [<code>AdaptiveLayerLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#adaptivelayerloss) with these parameters: ```json { "loss": "GISTEmbedLoss", "n_layers_per_step": -1, "last_layer_weight": 1.5, "prior_layers_weight": 0.15, "kl_div_weight": 2, "kl_temperature": 2 } ``` #### compression-pairs * Dataset: [compression-pairs](https://huggingface.co/datasets/sentence-transformers/sentence-compression) at [605bc91](https://huggingface.co/datasets/sentence-transformers/sentence-compression/tree/605bc91d95631895ba25b6eda51a3cb596976c90) * Size: 4,000 training samples * Columns: <code>sentence1</code> and <code>sentence2</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | |:--------|:------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 10 tokens</li><li>mean: 31.89 tokens</li><li>max: 125 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 10.21 tokens</li><li>max: 28 tokens</li></ul> | * Samples: | sentence1 | sentence2 | |:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------| | <code>The USHL completed an expansion draft on Monday as 10 players who were on the rosters of USHL teams during the 2009-10 season were selected by the League's two newest entries, the Muskegon Lumberjacks and Dubuque Fighting Saints.</code> | <code>USHL completes expansion draft</code> | | <code>Major League Baseball Commissioner Bud Selig will be speaking at St. Norbert College next month.</code> | <code>Bud Selig to speak at St. Norbert College</code> | | <code>It's fresh cherry time in Michigan and the best time to enjoy this delicious and nutritious fruit.</code> | <code>It's cherry time</code> | * Loss: [<code>AdaptiveLayerLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#adaptivelayerloss) with these parameters: ```json { "loss": "MultipleNegativesSymmetricRankingLoss", "n_layers_per_step": -1, "last_layer_weight": 1.5, "prior_layers_weight": 0.15, "kl_div_weight": 2, "kl_temperature": 2 } ``` #### sciq_pairs * Dataset: [sciq_pairs](https://huggingface.co/datasets/allenai/sciq) at [2c94ad3](https://huggingface.co/datasets/allenai/sciq/tree/2c94ad3e1aafab77146f384e23536f97a4849815) * Size: 6,500 training samples * Columns: <code>sentence1</code> and <code>sentence2</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | |:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 7 tokens</li><li>mean: 17.26 tokens</li><li>max: 60 tokens</li></ul> | <ul><li>min: 2 tokens</li><li>mean: 84.37 tokens</li><li>max: 512 tokens</li></ul> | * Samples: | sentence1 | sentence2 | |:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>What type of organism is commonly used in preparation of foods such as cheese and yogurt?</code> | <code>Mesophiles grow best in moderate temperature, typically between 25°C and 40°C (77°F and 104°F). Mesophiles are often found living in or on the bodies of humans or other animals. The optimal growth temperature of many pathogenic mesophiles is 37°C (98°F), the normal human body temperature. Mesophilic organisms have important uses in food preparation, including cheese, yogurt, beer and wine.</code> | | <code>What phenomenon makes global winds blow northeast to southwest or the reverse in the northern hemisphere and northwest to southeast or the reverse in the southern hemisphere?</code> | <code>Without Coriolis Effect the global winds would blow north to south or south to north. But Coriolis makes them blow northeast to southwest or the reverse in the Northern Hemisphere. The winds blow northwest to southeast or the reverse in the southern hemisphere.</code> | | <code>Changes from a less-ordered state to a more-ordered state (such as a liquid to a solid) are always what?</code> | <code>Summary Changes of state are examples of phase changes, or phase transitions. All phase changes are accompanied by changes in the energy of a system. Changes from a more-ordered state to a less-ordered state (such as a liquid to a gas) areendothermic. Changes from a less-ordered state to a more-ordered state (such as a liquid to a solid) are always exothermic. The conversion of a solid to a liquid is called fusion (or melting). The energy required to melt 1 mol of a substance is its enthalpy of fusion (ΔHfus). The energy change required to vaporize 1 mol of a substance is the enthalpy of vaporization (ΔHvap). The direct conversion of a solid to a gas is sublimation. The amount of energy needed to sublime 1 mol of a substance is its enthalpy of sublimation (ΔHsub) and is the sum of the enthalpies of fusion and vaporization. Plots of the temperature of a substance versus heat added or versus heating time at a constant rate of heating are calledheating curves. Heating curves relate temperature changes to phase transitions. A superheated liquid, a liquid at a temperature and pressure at which it should be a gas, is not stable. A cooling curve is not exactly the reverse of the heating curve because many liquids do not freeze at the expected temperature. Instead, they form a supercooled liquid, a metastable liquid phase that exists below the normal melting point. Supercooled liquids usually crystallize on standing, or adding a seed crystal of the same or another substance can induce crystallization.</code> | * Loss: [<code>AdaptiveLayerLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#adaptivelayerloss) with these parameters: ```json { "loss": "GISTEmbedLoss", "n_layers_per_step": -1, "last_layer_weight": 1.5, "prior_layers_weight": 0.15, "kl_div_weight": 2, "kl_temperature": 2 } ``` #### qasc_pairs * Dataset: [qasc_pairs](https://huggingface.co/datasets/allenai/qasc) at [a34ba20](https://huggingface.co/datasets/allenai/qasc/tree/a34ba204eb9a33b919c10cc08f4f1c8dae5ec070) * Size: 6,500 training samples * Columns: <code>id</code>, <code>sentence1</code>, and <code>sentence2</code> * Approximate statistics based on the first 1000 samples: | | id | sentence1 | sentence2 | |:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 17 tokens</li><li>mean: 21.35 tokens</li><li>max: 27 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 11.47 tokens</li><li>max: 25 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 35.55 tokens</li><li>max: 66 tokens</li></ul> | * Samples: | id | sentence1 | sentence2 | |:--------------------------------------------|:---------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>3E7TUJ2EGCLQNOV1WEAJ2NN9ROPD9K</code> | <code>What type of water formation is formed by clouds?</code> | <code>beads of water are formed by water vapor condensing. Clouds are made of water vapor.. Beads of water can be formed by clouds.</code> | | <code>3LS2AMNW5FPNJK3C3PZLZCPX562OQO</code> | <code>Where do beads of water come from?</code> | <code>beads of water are formed by water vapor condensing. Condensation is the change of water vapor to a liquid.. Vapor turning into a liquid leaves behind beads of water</code> | | <code>3TMFV4NEP8DPIPCI8H9VUFHJG8V8W3</code> | <code>What forms beads of water? </code> | <code>beads of water are formed by water vapor condensing. An example of water vapor is steam.. Steam forms beads of water.</code> | * Loss: [<code>AdaptiveLayerLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#adaptivelayerloss) with these parameters: ```json { "loss": "GISTEmbedLoss", "n_layers_per_step": -1, "last_layer_weight": 1.5, "prior_layers_weight": 0.15, "kl_div_weight": 2, "kl_temperature": 2 } ``` #### openbookqa_pairs * Dataset: [openbookqa_pairs](https://huggingface.co/datasets/allenai/openbookqa) at [388097e](https://huggingface.co/datasets/allenai/openbookqa/tree/388097ea7776314e93a529163e0fea805b8a6454) * Size: 2,740 training samples * Columns: <code>sentence1</code> and <code>sentence2</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 3 tokens</li><li>mean: 13.83 tokens</li><li>max: 78 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 11.37 tokens</li><li>max: 30 tokens</li></ul> | * Samples: | sentence1 | sentence2 | |:-------------------------------------------------|:--------------------------------------------------------------------------| | <code>The sun is responsible for</code> | <code>the sun is the source of energy for physical cycles on Earth</code> | | <code>When food is reduced in the stomach</code> | <code>digestion is when stomach acid breaks down food</code> | | <code>Stars are</code> | <code>a star is made of gases</code> | * Loss: [<code>AdaptiveLayerLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#adaptivelayerloss) with these parameters: ```json { "loss": "GISTEmbedLoss", "n_layers_per_step": -1, "last_layer_weight": 1.5, "prior_layers_weight": 0.15, "kl_div_weight": 2, "kl_temperature": 2 } ``` #### msmarco_pairs * Dataset: [msmarco_pairs](https://huggingface.co/datasets/sentence-transformers/msmarco-msmarco-distilbert-base-v3) at [28ff31e](https://huggingface.co/datasets/sentence-transformers/msmarco-msmarco-distilbert-base-v3/tree/28ff31e4c97cddd53d298497f766e653f1e666f9) * Size: 6,500 training samples * Columns: <code>sentence1</code> and <code>sentence2</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | |:--------|:---------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 4 tokens</li><li>mean: 8.61 tokens</li><li>max: 27 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 75.09 tokens</li><li>max: 206 tokens</li></ul> | * Samples: | sentence1 | sentence2 | |:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>what are the liberal arts?</code> | <code>liberal arts. 1. the academic course of instruction at a college intended to provide general knowledge and comprising the arts, humanities, natural sciences, and social sciences, as opposed to professional or technical subjects.</code> | | <code>what is the mechanism of action of fibrinolytic or thrombolytic drugs?</code> | <code>Baillière's Clinical Haematology. 6 Mechanism of action of the thrombolytic agents. 6 Mechanism of action of the thrombolytic agents JEFFREY I. WEITZ Fibrin formed during the haemostatic, inflammatory or tissue repair process serves a temporary role, and must be degraded to restore normal tissue function and structure.</code> | | <code>what is normal plat count</code> | <code>78 Followers. A. Platelets are the tiny blood cells that help stop bleeding by binding together to form a clump or plug at sites of injury inside blood vessels. A normal platelet count is between 150,000 and 450,000 platelets per microliter (one-millionth of a liter, abbreviated mcL).The average platelet count is 237,000 per mcL in men and 266,000 per mcL in women.8 Followers. A. Platelets are the tiny blood cells that help stop bleeding by binding together to form a clump or plug at sites of injury inside blood vessels. A normal platelet count is between 150,000 and 450,000 platelets per microliter (one-millionth of a liter, abbreviated mcL).</code> | * Loss: [<code>AdaptiveLayerLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#adaptivelayerloss) with these parameters: ```json { "loss": "GISTEmbedLoss", "n_layers_per_step": -1, "last_layer_weight": 1.5, "prior_layers_weight": 0.15, "kl_div_weight": 2, "kl_temperature": 2 } ``` #### nq_pairs * Dataset: [nq_pairs](https://huggingface.co/datasets/sentence-transformers/natural-questions) at [f9e894e](https://huggingface.co/datasets/sentence-transformers/natural-questions/tree/f9e894e1081e206e577b4eaa9ee6de2b06ae6f17) * Size: 6,500 training samples * Columns: <code>sentence1</code> and <code>sentence2</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | |:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 10 tokens</li><li>mean: 11.77 tokens</li><li>max: 21 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 131.57 tokens</li><li>max: 512 tokens</li></ul> | * Samples: | sentence1 | sentence2 | |:----------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>when did richmond last play in a preliminary final</code> | <code>Richmond Football Club Richmond began 2017 with 5 straight wins, a feat it had not achieved since 1995. A series of close losses hampered the Tigers throughout the middle of the season, including a 5-point loss to the Western Bulldogs, 2-point loss to Fremantle, and a 3-point loss to the Giants. Richmond ended the season strongly with convincing victories over Fremantle and St Kilda in the final two rounds, elevating the club to 3rd on the ladder. Richmond's first final of the season against the Cats at the MCG attracted a record qualifying final crowd of 95,028; the Tigers won by 51 points. Having advanced to the first preliminary finals for the first time since 2001, Richmond defeated Greater Western Sydney by 36 points in front of a crowd of 94,258 to progress to the Grand Final against Adelaide, their first Grand Final appearance since 1982. The attendance was 100,021, the largest crowd to a grand final since 1986. The Crows led at quarter time and led by as many as 13, but the Tigers took over the game as it progressed and scored seven straight goals at one point. They eventually would win by 48 points – 16.12 (108) to Adelaide's 8.12 (60) – to end their 37-year flag drought.[22] Dustin Martin also became the first player to win a Premiership medal, the Brownlow Medal and the Norm Smith Medal in the same season, while Damien Hardwick was named AFL Coaches Association Coach of the Year. Richmond's jump from 13th to premiers also marked the biggest jump from one AFL season to the next.</code> | | <code>who sang what in the world's come over you</code> | <code>Jack Scott (singer) At the beginning of 1960, Scott again changed record labels, this time to Top Rank Records.[1] He then recorded four Billboard Hot 100 hits – "What in the World's Come Over You" (#5), "Burning Bridges" (#3) b/w "Oh Little One" (#34), and "It Only Happened Yesterday" (#38).[1] "What in the World's Come Over You" was Scott's second gold disc winner.[6] Scott continued to record and perform during the 1960s and 1970s.[1] His song "You're Just Gettin' Better" reached the country charts in 1974.[1] In May 1977, Scott recorded a Peel session for BBC Radio 1 disc jockey, John Peel.</code> | | <code>who produces the most wool in the world</code> | <code>Wool Global wool production is about 2 million tonnes per year, of which 60% goes into apparel. Wool comprises ca 3% of the global textile market, but its value is higher owing to dying and other modifications of the material.[1] Australia is a leading producer of wool which is mostly from Merino sheep but has been eclipsed by China in terms of total weight.[30] New Zealand (2016) is the third-largest producer of wool, and the largest producer of crossbred wool. Breeds such as Lincoln, Romney, Drysdale, and Elliotdale produce coarser fibers, and wool from these sheep is usually used for making carpets.</code> | * Loss: [<code>AdaptiveLayerLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#adaptivelayerloss) with these parameters: ```json { "loss": "GISTEmbedLoss", "n_layers_per_step": -1, "last_layer_weight": 1.5, "prior_layers_weight": 0.15, "kl_div_weight": 2, "kl_temperature": 2 } ``` #### trivia_pairs * Dataset: [trivia_pairs](https://huggingface.co/datasets/sentence-transformers/trivia-qa) at [a7c36e3](https://huggingface.co/datasets/sentence-transformers/trivia-qa/tree/a7c36e3c8c8c01526bc094d79bf80d4c848b0ad0) * Size: 6,500 training samples * Columns: <code>sentence1</code> and <code>sentence2</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | |:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 8 tokens</li><li>mean: 15.16 tokens</li><li>max: 48 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 456.87 tokens</li><li>max: 512 tokens</li></ul> | * Samples: | sentence1 | sentence2 | |:------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>Which American-born Sinclair won the Nobel Prize for Literature in 1930?</code> | <code>The Nobel Prize in Literature 1930 The Nobel Prize in Literature 1930 Sinclair Lewis The Nobel Prize in Literature 1930 Sinclair Lewis Prize share: 1/1 The Nobel Prize in Literature 1930 was awarded to Sinclair Lewis "for his vigorous and graphic art of description and his ability to create, with wit and humour, new types of characters". Photos: Copyright © The Nobel Foundation Share this: To cite this page MLA style: "The Nobel Prize in Literature 1930". Nobelprize.org. Nobel Media AB 2014. Web. 18 Jan 2017. <http://www.nobelprize.org/nobel_prizes/literature/laureates/1930/></code> | | <code>Where in England was Dame Judi Dench born?</code> | <code>Judi Dench - IMDb IMDb Actress | Music Department | Soundtrack Judi Dench was born in York, England, to Eleanora Olive (Jones), who was from Dublin, Ireland, and Reginald Arthur Dench, a doctor from Dorset, England. She attended Mount School in York, and studied at the Central School of Speech and Drama. She has performed with Royal Shakespeare Company, the National Theatre, and at Old Vic Theatre. She is a ... See full bio » Born: a list of 35 people created 02 Jul 2011 a list of 35 people created 19 Apr 2012 a list of 35 people created 28 May 2014 a list of 25 people created 05 Aug 2014 a list of 26 people created 18 May 2015 Do you have a demo reel? Add it to your IMDbPage How much of Judi Dench's work have you seen? User Polls Won 1 Oscar. Another 59 wins & 163 nominations. See more awards  » Known For  2016 The Hollow Crown (TV Series) Cecily, Duchess of York  2015 The Vote (TV Movie) Christine Metcalfe - Total War (1996) ... Narrator (voice) - Stalemate (1996) ... Narrator (voice)  1992 The Torch (TV Mini-Series) Aba  1990 Screen One (TV Series) Anne  1989 Behaving Badly (TV Mini-Series) Bridget  1981 BBC2 Playhouse (TV Series) Sister Scarli  1976 Arena (TV Series documentary) Sweetie Simpkins  1973 Ooh La La! (TV Series) Amélie  1966 Court Martial (TV Series) Marthe  1963 Z Cars (TV Series) Elena Collins  1963 Love Story (TV Series) Pat McKendrick  1960 The Terrible Choice (TV Series) Good Angel Music department (1 credit)   A Fine Romance (TV Series) (theme sung by - 14 episodes, 1981 - 1983) (theme song sung by - 12 episodes, 1983 - 1984) - A Romantic Meal (1984) ... (theme song sung by) - Problems (1984) ... (theme song sung by)  2013 Fifty Years on Stage (TV Movie) (performer: "Send in the Clowns")  2009 Nine (performer: "Folies Bergère") - What's Wrong with Mrs Bale? (1997) ... (performer: "Raindrops Keep Fallin' On My Head" - uncredited) - Misunderstandings (1993) ... (performer: "Walkin' My Baby Back Home" - uncredited)  1982-1984 A Fine Romance (TV Series) (performer - 2 episodes) - The Telephone Call (1984) ... (performer: "Boogie Woogie Bugle Boy" - uncredited) - Furniture (1982) ... (performer: "Rule, Britannia!" - uncredited) Hide   2009 Waiting in Rhyme (Video short) (special thanks)  2007 Expresso (Short) (special thanks)  1999 Shakespeare in Love and on Film (TV Movie documentary) (thanks - as Dame Judi Dench) Hide   2016 Rio Olympics (TV Mini-Series) Herself  2015 In Conversation (TV Series documentary) Herself  2015 Entertainment Tonight (TV Series) Herself  2015 CBS This Morning (TV Series) Herself - Guest  2015 The Insider (TV Series) Herself  1999-2014 Cinema 3 (TV Series) Herself  2013 Good Day L.A. (TV Series) Herself - Guest  2013 Arena (TV Series documentary) Herself  2013 At the Movies (TV Series) Herself  2013 Shooting Bond (Video documentary) Herself  2013 Bond's Greatest Moments (TV Movie documentary) Herself  2012 Made in Hollywood (TV Series) Herself  1999-2012 Charlie Rose (TV Series) Herself - Guest  2008-2012 This Morning (TV Series) Herself - Guest  2012 The Secrets of Skyfall (TV Short documentary) Herself  2012 Anderson Live (TV Series) Herself  2012 J. Edgar: A Complicated Man (Video documentary short) Herself  2011 The Many Faces of... (TV Series documentary) Herself / Various Characters  2011 Na plovárne (TV Series) Herself  2010 BBC Proms (TV Series) Herself  2010 The South Bank Show Revisited (TV Series documentary) Herself - Episode #6.68 (2009) ... Herself - Guest (as Dame Judi Dench)  2007-2009 Breakfast (TV Series)  2009 Larry King Live (TV Series) Herself - Guest  2009 The One Show (TV Series) Herself  2009 Cranford in Detail (Video documentary short) Herself / Miss Matty Jenkins (as Dame Judi Dench)  2005-2008 The South Bank Show (TV Series documentary) Herself  2008 Tavis Smiley (TV Series) Herself - Guest  2007 ITV News (TV Series) Herself - BAFTA Nominee  2007 The Making of Cranford (Video documentary short) Herself / Miss Matty Jenkyns (as Dame Judi Dench)  2006 Becoming Bond (TV Movie documentary) Herself  2006 Corazón de... (TV Series) Hers</code> | | <code>In which decade did Billboard magazine first publish and American hit chart?</code> | <code>The US Billboard song chart The US Billboard song chart Search this site with Google Song chart US Billboard The Billboard magazine has published various music charts starting (with sheet music) in 1894, the first "Music Hit Parade" was published in 1936 , the first "Music Popularity Chart" was calculated in 1940 . These charts became less irregular until the weekly "Hot 100" was started in 1958 . The current chart combines sales, airplay and downloads. A music collector that calls himself Bullfrog has been consolidating the complete chart from 1894 to the present day. he has published this information in a comprehenive spreadsheet (which can be obtained at bullfrogspond.com/ ). The Bullfrog data assigns each song a unique identifier, something like "1968_076" (which just happens to be the Bee Gees song "I've Gotta Get A Message To You"). This "Whitburn Number" is provided to match with the books of Joel Whitburn and consists of the year and a ranking within the year. A song that first entered the charts in December and has a long run is listed the following year. This numbering scheme means that songs which are still in the charts cannot be assigned a final id, because their ranking might change. So the definitive listing for a year cannot be final until about April. In our listing we only use songs with finalised IDs, this means that every year we have to wait until last year's entries are finalised before using them. (Source bullfrogspond.com/ , the original version used here was 20090808 with extra data from: the 2009 data from 20091219 the 2010 data from 20110305 the 2011 data from 20120929 the 2012 data from 20130330 the 2013 data from 20150328 The 20150328 data was the last one produced before the Billboard company forced the data to be withdrawn. As far as we know there are no more recent data sets available. This pattern of obtaining the data for a particular year in the middle of the following one comes from the way that the Bullfrog project generates the identifier for a song (what they call the "Prefix" in the spreadsheet). Recent entries are identified with keys like "2015-008" while older ones have keys like "2013_177". In the second case the underscore is significant, it indicates that this was the 177th biggest song released in 2013. Now, of course, during the year no one knows where a particular song will rank, so the underscore names can't be assigned until every song from a particular year has dropped out of the charts, so recent records are temporarily assigned a name with a dash. In about May of the following year the rankings are calculated and the final identifiers are assigned. That is why we at the Turret can only grab this data retrospectively. Attributes The original spreadsheet has a number of attributes, we have limited our attention to just a few of them: 134 9 The songs with the most entries on the chart were White Christmas (with 33 versions and a total of 110 weeks) and Stardust (with 19 and a total of 106 weeks). position The peak position that songs reached in the charts should show an smooth curve from number one down to the lowest position. This chart has more songs in the lower peak positions than one would expect. Before 1991 the profile of peak positions was exactly as you would expect, that year Billboard introduced the concept of "Recurrent" tracks, that is they removed any track from the chart which had spent more than twenty weeks in the chart and had fallen to the lower positions. weeks The effect of the "Recurrent" process, by which tracks are removed if they have spent at least twenty weeks in the chart and have fallen to the lower reaches, can clearly be seen in the strange spike in this attribute. This "adjustment" was intended to promote newer songs and ensure the chart does not become "stale". In fact since it was introduced in 1991 the length of long chart runs has increased, this might reflect the more conscious efforts of record companies to "game" the charts by controlling release times and promotions, or it coul</code> | * Loss: [<code>AdaptiveLayerLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#adaptivelayerloss) with these parameters: ```json { "loss": "GISTEmbedLoss", "n_layers_per_step": -1, "last_layer_weight": 1.5, "prior_layers_weight": 0.15, "kl_div_weight": 2, "kl_temperature": 2 } ``` #### quora_pairs * Dataset: [quora_pairs](https://huggingface.co/datasets/sentence-transformers/quora-duplicates) at [451a485](https://huggingface.co/datasets/sentence-transformers/quora-duplicates/tree/451a4850bd141edb44ade1b5828c259abd762cdb) * Size: 4,000 training samples * Columns: <code>sentence1</code> and <code>sentence2</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 6 tokens</li><li>mean: 13.53 tokens</li><li>max: 42 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 13.68 tokens</li><li>max: 43 tokens</li></ul> | * Samples: | sentence1 | sentence2 | |:----------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------| | <code>Astrology: I am a Capricorn Sun Cap moon and cap rising...what does that say about me?</code> | <code>I'm a triple Capricorn (Sun, Moon and ascendant in Capricorn) What does this say about me?</code> | | <code>How can I be a good geologist?</code> | <code>What should I do to be a great geologist?</code> | | <code>How do I read and find my YouTube comments?</code> | <code>How can I see all my Youtube comments?</code> | * Loss: [<code>AdaptiveLayerLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#adaptivelayerloss) with these parameters: ```json { "loss": "GISTEmbedLoss", "n_layers_per_step": -1, "last_layer_weight": 1.5, "prior_layers_weight": 0.15, "kl_div_weight": 2, "kl_temperature": 2 } ``` #### gooaq_pairs * Dataset: [gooaq_pairs](https://huggingface.co/datasets/sentence-transformers/gooaq) at [b089f72](https://huggingface.co/datasets/sentence-transformers/gooaq/tree/b089f728748a068b7bc5234e5bcf5b25e3c8279c) * Size: 6,500 training samples * Columns: <code>sentence1</code> and <code>sentence2</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | |:--------|:---------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 8 tokens</li><li>mean: 11.6 tokens</li><li>max: 21 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 57.74 tokens</li><li>max: 127 tokens</li></ul> | * Samples: | sentence1 | sentence2 | |:---------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>is toprol xl the same as metoprolol?</code> | <code>Metoprolol succinate is also known by the brand name Toprol XL. It is the extended-release form of metoprolol. Metoprolol succinate is approved to treat high blood pressure, chronic chest pain, and congestive heart failure.</code> | | <code>are you experienced cd steve hoffman?</code> | <code>The Are You Experienced album was apparently mastered from the original stereo UK master tapes (according to Steve Hoffman - one of the very few who has heard both the master tapes and the CDs produced over the years). ... The CD booklets were a little sparse, but at least they stayed true to the album's original design.</code> | | <code>how are babushka dolls made?</code> | <code>Matryoshka dolls are made of wood from lime, balsa, alder, aspen, and birch trees; lime is probably the most common wood type. ... After cutting, the trees are stripped of most of their bark, although a few inner rings of bark are left to bind the wood and keep it from splitting.</code> | * Loss: [<code>AdaptiveLayerLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#adaptivelayerloss) with these parameters: ```json { "loss": "GISTEmbedLoss", "n_layers_per_step": -1, "last_layer_weight": 1.5, "prior_layers_weight": 0.15, "kl_div_weight": 2, "kl_temperature": 2 } ``` ### Evaluation Datasets #### nli-pairs * Dataset: [nli-pairs](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab) * Size: 750 evaluation samples * Columns: <code>anchor</code> and <code>positive</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | |:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 5 tokens</li><li>mean: 17.61 tokens</li><li>max: 51 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 9.71 tokens</li><li>max: 29 tokens</li></ul> | * Samples: | anchor | positive | |:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------| | <code>Two women are embracing while holding to go packages.</code> | <code>Two woman are holding packages.</code> | | <code>Two young children in blue jerseys, one with the number 9 and one with the number 2 are standing on wooden steps in a bathroom and washing their hands in a sink.</code> | <code>Two kids in numbered jerseys wash their hands.</code> | | <code>A man selling donuts to a customer during a world exhibition event held in the city of Angeles</code> | <code>A man selling donuts to a customer.</code> | * Loss: [<code>AdaptiveLayerLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#adaptivelayerloss) with these parameters: ```json { "loss": "GISTEmbedLoss", "n_layers_per_step": -1, "last_layer_weight": 1.5, "prior_layers_weight": 0.15, "kl_div_weight": 2, "kl_temperature": 2 } ``` #### scitail-pairs-pos * Dataset: [scitail-pairs-pos](https://huggingface.co/datasets/allenai/scitail) at [0cc4353](https://huggingface.co/datasets/allenai/scitail/tree/0cc4353235b289165dfde1c7c5d1be983f99ce44) * Size: 750 evaluation samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | label | |:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:------------------------------------------------| | type | string | string | int | | details | <ul><li>min: 5 tokens</li><li>mean: 22.43 tokens</li><li>max: 61 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 15.3 tokens</li><li>max: 36 tokens</li></ul> | <ul><li>0: ~50.00%</li><li>1: ~50.00%</li></ul> | * Samples: | sentence1 | sentence2 | label | |:----------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------|:---------------| | <code>An introduction to atoms and elements, compounds, atomic structure and bonding, the molecule and chemical reactions.</code> | <code>Replace another in a molecule happens to atoms during a substitution reaction.</code> | <code>0</code> | | <code>Wavelength The distance between two consecutive points on a sinusoidal wave that are in phase;</code> | <code>Wavelength is the distance between two corresponding points of adjacent waves called.</code> | <code>1</code> | | <code>humans normally have 23 pairs of chromosomes.</code> | <code>Humans typically have 23 pairs pairs of chromosomes.</code> | <code>1</code> | * Loss: [<code>AdaptiveLayerLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#adaptivelayerloss) with these parameters: ```json { "loss": "GISTEmbedLoss", "n_layers_per_step": -1, "last_layer_weight": 1.5, "prior_layers_weight": 0.15, "kl_div_weight": 2, "kl_temperature": 2 } ``` #### qnli-contrastive * Dataset: [qnli-contrastive](https://huggingface.co/datasets/nyu-mll/glue) at [bcdcba7](https://huggingface.co/datasets/nyu-mll/glue/tree/bcdcba79d07bc864c1c254ccfcedcce55bcc9a8c) * Size: 750 evaluation samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | label | |:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------| | type | string | string | int | | details | <ul><li>min: 6 tokens</li><li>mean: 14.15 tokens</li><li>max: 36 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 36.98 tokens</li><li>max: 225 tokens</li></ul> | <ul><li>0: 100.00%</li></ul> | * Samples: | sentence1 | sentence2 | label | |:--------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------|:---------------| | <code>What came into force after the new constitution was herald?</code> | <code>As of that day, the new constitution heralding the Second Republic came into force.</code> | <code>0</code> | | <code>What is the first major city in the stream of the Rhine?</code> | <code>The most important tributaries in this area are the Ill below of Strasbourg, the Neckar in Mannheim and the Main across from Mainz.</code> | <code>0</code> | | <code>What is the minimum required if you want to teach in Canada?</code> | <code>In most provinces a second Bachelor's Degree such as a Bachelor of Education is required to become a qualified teacher.</code> | <code>0</code> | * Loss: [<code>AdaptiveLayerLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#adaptivelayerloss) with these parameters: ```json { "loss": "OnlineContrastiveLoss", "n_layers_per_step": -1, "last_layer_weight": 1.5, "prior_layers_weight": 0.15, "kl_div_weight": 2, "kl_temperature": 2 } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 28 - `per_device_eval_batch_size`: 18 - `learning_rate`: 2e-05 - `weight_decay`: 5e-07 - `num_train_epochs`: 4 - `lr_scheduler_type`: cosine_with_restarts - `lr_scheduler_kwargs`: {'num_cycles': 5} - `warmup_ratio`: 0.25 - `save_safetensors`: False - `fp16`: True - `push_to_hub`: True - `hub_model_id`: bobox/DeBERTaV3-small-SenTra-AdaptiveLayers-AllSoft-HighTemp-n - `hub_strategy`: checkpoint - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 28 - `per_device_eval_batch_size`: 18 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 5e-07 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 4 - `max_steps`: -1 - `lr_scheduler_type`: cosine_with_restarts - `lr_scheduler_kwargs`: {'num_cycles': 5} - `warmup_ratio`: 0.25 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: False - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: True - `resume_from_checkpoint`: None - `hub_model_id`: bobox/DeBERTaV3-small-SenTra-AdaptiveLayers-AllSoft-HighTemp-n - `hub_strategy`: checkpoint - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | scitail-pairs-pos loss | qnli-contrastive loss | nli-pairs loss | sts-test_spearman_cosine | |:------:|:-----:|:-------------:|:----------------------:|:---------------------:|:--------------:|:------------------------:| | 0.1003 | 281 | 8.4339 | - | - | - | - | | 0.2006 | 562 | 6.8644 | - | - | - | - | | 0.3009 | 843 | 5.1225 | - | - | - | - | | 0.4001 | 1121 | - | 2.4070 | 4.2827 | 3.6032 | - | | 0.4011 | 1124 | 3.9997 | - | - | - | - | | 0.5014 | 1405 | 3.6186 | - | - | - | - | | 0.6017 | 1686 | 3.259 | - | - | - | - | | 0.7020 | 1967 | 3.1712 | - | - | - | - | | 0.8001 | 2242 | - | 1.6090 | 2.5195 | 2.2851 | - | | 0.8023 | 2248 | 3.104 | - | - | - | - | | 0.9026 | 2529 | 2.8549 | - | - | - | - | | 1.0029 | 2810 | 2.8668 | - | - | - | - | | 1.1031 | 3091 | 2.7466 | - | - | - | - | | 1.2002 | 3363 | - | 1.3474 | 2.2222 | 1.8491 | - | | 1.2034 | 3372 | 2.6502 | - | - | - | - | | 1.3037 | 3653 | 2.2191 | - | - | - | - | | 1.4040 | 3934 | 2.2311 | - | - | - | - | | 1.5043 | 4215 | 2.22 | - | - | - | - | | 1.6003 | 4484 | - | 1.2671 | 1.7964 | 1.6444 | - | | 1.6046 | 4496 | 2.1372 | - | - | - | - | | 1.7049 | 4777 | 2.2219 | - | - | - | - | | 1.8051 | 5058 | 2.2618 | - | - | - | - | | 1.9054 | 5339 | 1.9995 | - | - | - | - | | 2.0004 | 5605 | - | 1.2434 | 1.8182 | 1.5385 | - | | 2.0057 | 5620 | 1.9757 | - | - | - | - | | 2.1060 | 5901 | 2.0401 | - | - | - | - | | 2.2063 | 6182 | 1.9818 | - | - | - | - | | 2.3066 | 6463 | 1.7816 | - | - | - | - | | 2.4004 | 6726 | - | 1.0396 | 1.5587 | 1.5077 | - | | 2.4069 | 6744 | 1.9239 | - | - | - | - | | 2.5071 | 7025 | 2.0148 | - | - | - | - | | 2.6074 | 7306 | 1.9629 | - | - | - | - | | 2.7077 | 7587 | 1.7316 | - | - | - | - | | 2.8005 | 7847 | - | 1.0507 | 1.3294 | 1.4039 | - | | 2.8080 | 7868 | 1.7794 | - | - | - | - | | 2.9083 | 8149 | 1.7029 | - | - | - | - | | 3.0086 | 8430 | 1.7996 | - | - | - | - | | 3.1089 | 8711 | 1.9379 | - | - | - | - | | 3.2006 | 8968 | - | 0.9949 | 1.3678 | 1.3436 | - | | 3.2091 | 8992 | 1.844 | - | - | - | - | | 3.3094 | 9273 | 1.358 | - | - | - | - | | 3.4097 | 9554 | 1.5104 | - | - | - | - | | 3.5100 | 9835 | 1.6964 | - | - | - | - | | 3.6006 | 10089 | - | 0.9538 | 1.1866 | 1.3098 | - | | 3.6103 | 10116 | 1.7661 | - | - | - | - | | 3.7106 | 10397 | 1.6529 | - | - | - | - | | 3.8108 | 10678 | 1.6835 | - | - | - | - | | 3.9111 | 10959 | 1.35 | - | - | - | - | | 4.0 | 11208 | - | - | - | - | 0.5551 | ### Framework Versions - Python: 3.10.13 - Sentence Transformers: 3.0.1 - Transformers: 4.41.2 - PyTorch: 2.1.2 - Accelerate: 0.30.1 - Datasets: 2.19.2 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### AdaptiveLayerLoss ```bibtex @misc{li20242d, title={2D Matryoshka Sentence Embeddings}, author={Xianming Li and Zongxi Li and Jing Li and Haoran Xie and Qing Li}, year={2024}, eprint={2402.14776}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` #### CoSENTLoss ```bibtex @online{kexuefm-8847, title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT}, author={Su Jianlin}, year={2022}, month={Jan}, url={https://kexue.fm/archives/8847}, } ``` #### GISTEmbedLoss ```bibtex @misc{solatorio2024gistembed, title={GISTEmbed: Guided In-sample Selection of Training Negatives for Text Embedding Fine-tuning}, author={Aivin V. Solatorio}, year={2024}, eprint={2402.16829}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
[ "MEDAL", "SCIQ", "SCITAIL" ]
fatimaaa1/LLAMA3-OPENBIO
fatimaaa1
text-generation
[ "transformers", "pytorch", "llama", "text-generation", "llama-3", "Mixtral", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "distillation", "conversational", "en", "arxiv:2305.18290", "arxiv:2303.13375", "arxiv:2212.13138", "arxiv:2305.09617", "arxiv:2402.07023", "base_model:meta-llama/Meta-Llama-3-8B", "base_model:finetune:meta-llama/Meta-Llama-3-8B", "license:llama3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-07-08T06:01:31Z
2024-07-08T07:38:54+00:00
14
0
--- base_model: meta-llama/Meta-Llama-3-8B language: - en license: llama3 tags: - llama-3 - llama - Mixtral - instruct - finetune - chatml - DPO - RLHF - gpt4 - distillation widget: - example_title: OpenBioLLM-8B messages: - role: system content: You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience. - role: user content: How long does it take for newborn jaundice to go away? output: text: 'Newborn jaundice, also known as neonatal jaundice, is a common condition in newborns where the yellowing of the skin and eyes occurs due to an elevated level of bilirubin in the blood. Bilirubin is a yellow pigment that forms when red blood cells break down. In most cases, newborn jaundice resolves on its own without any specific treatment. The duration of newborn jaundice can vary depending on several factors such as the underlying cause, gestational age at birth, and individual variations in bilirubin metabolism. Here are some general guidelines: 1. Physiological jaundice: This is the most common type of newborn jaundice and usually appears within 24-72 hours after birth. It tends to peak between the second and fifth day of life and gradually improves over the next week or two. By the time the baby is one week old, the jaundice should have mostly resolved. 2. Breast milk jaundice: This type of jaundice occurs in breastfed babies and may appear later than physiological jaundice, typically between the fifth and fourteenth day of life. It tends to persist for a longer duration but usually resolves within six weeks after birth. 3. Pathological jaundice: This type of jaundice is less common and occurs due to an underlying medical condition that affects bilirubin metabolism or liver function. The duration of pathological jaundice depends on the specific cause and may require treatment. It''s important for parents to monitor their newborn''s jaundice closely and seek medical advice if the jaundice progresses rapidly, becomes severe, or is accompanied by other symptoms such as poor feeding, lethargy, or excessive sleepiness. In these cases, further evaluation and management may be necessary. Remember that each baby is unique, and the timing of jaundice resolution can vary. If you have concerns about your newborn''s jaundice, it''s always best to consult with a healthcare professional for personalized advice and guidance.' model-index: - name: OpenBioLLM-8B results: [] --- <div align="center"> <img width="260px" src="https://hf.fast360.xyz/production/uploads/5f3fe13d79c1ba4c353d0c19/BrQCb95lmEIFz79QAmoNA.png"></div> ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/2FhDh8NDvMl7iSxbQz9BP.png) <div align="center"> <h1>Advancing Open-source Large Language Models in Medical Domain</h1> </div> <p align="center" style="margin-top: 0px;"> <a href="https://colab.research.google.com/drive/1F5oV20InEYeAJGmBwYF9NM_QhLmjBkKJ?usp=sharing"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="OpenChat Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 10px; margin-top: 0px; margin-bottom: 0px;"/> <span class="link-text" style=" margin-right: 5px;">Online Demo</span> </a> | <a href="https://github.com/openlifescience-ai"> <img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" alt="GitHub Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/> <span class="link-text" style=" margin-right: 5px;">GitHub</span> </a> | <a href="#"> <img src="https://github.com/alpayariyak/openchat/blob/master/assets/arxiv-logomark-small-square-border.png?raw=true" alt="ArXiv Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/> <span class="link-text" style="margin-right: 5px;">Paper</span> </a> | <a href="https://discord.gg/A5Fjf5zC69"> <img src="https://cloud.githubusercontent.com/assets/6291467/26705903/96c2d66e-477c-11e7-9f4e-f3c0efe96c9a.png" alt="Discord Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/> <span class="link-text">Discord</span> </a> </p> ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/KGmRE5w2sepNtwsEu8t7K.jpeg) Introducing OpenBioLLM-8B: A State-of-the-Art Open Source Biomedical Large Language Model OpenBioLLM-8B is an advanced open source language model designed specifically for the biomedical domain. Developed by Saama AI Labs, this model leverages cutting-edge techniques to achieve state-of-the-art performance on a wide range of biomedical tasks. 🏥 **Biomedical Specialization**: OpenBioLLM-8B is tailored for the unique language and knowledge requirements of the medical and life sciences fields. It was fine-tuned on a vast corpus of high-quality biomedical data, enabling it to understand and generate text with domain-specific accuracy and fluency. 🎓 **Superior Performance**: With 8 billion parameters, OpenBioLLM-8B outperforms other open source biomedical language models of similar scale. It has also demonstrated better results compared to larger proprietary & open-source models like GPT-3.5 and Meditron-70B on biomedical benchmarks. 🧠 **Advanced Training Techniques**: OpenBioLLM-8B builds upon the powerful foundations of the **Meta-Llama-3-8B** and [Meta-Llama-3-8B](meta-llama/Meta-Llama-3-8B) models. It incorporates the DPO dataset and fine-tuning recipe along with a custom diverse medical instruction dataset. Key components of the training pipeline include: <div align="center"> <img width="1200px" src="https://hf.fast360.xyz/production/uploads/5f3fe13d79c1ba4c353d0c19/oPchsJsEpQoGcGXVbh7YS.png"> </div> - **Policy Optimization**: [Direct Preference Optimization: Your Language Model is Secretly a Reward Model (DPO)](https://arxiv.org/abs/2305.18290) - **Ranking Dataset**: [berkeley-nest/Nectar](https://huggingface.co/datasets/berkeley-nest/Nectar) - **Fine-tuning dataset**: Custom Medical Instruct dataset (We plan to release a sample training dataset in our upcoming paper; please stay updated) This combination of cutting-edge techniques enables OpenBioLLM-8B to align with key capabilities and preferences for biomedical applications. ⚙️ **Release Details**: - **Model Size**: 8 billion parameters - **Quantization**: Optimized quantized versions available [Here](https://huggingface.co/aaditya/OpenBioLLM-Llama3-8B-GGUF) - **Language(s) (NLP):** en - **Developed By**: [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) from Saama AI Labs - **License:** Meta-Llama License - **Fine-tuned from models:** [meta-llama/Meta-Llama-3-8B](meta-llama/Meta-Llama-3-8B) - **Resources for more information:** - Paper: Coming soon The model can be fine-tuned for more specialized tasks and datasets as needed. OpenBioLLM-8B represents an important step forward in democratizing advanced language AI for the biomedical community. By leveraging state-of-the-art architectures and training techniques from leading open source efforts like Llama-3, we have created a powerful tool to accelerate innovation and discovery in healthcare and the life sciences. We are excited to share OpenBioLLM-8B with researchers and developers around the world. ### Use with transformers **Important: Please use the exact chat template provided by Llama-3 instruct version. Otherwise there will be a degradation in the performance. The model output can be verbose in rare cases. Please consider setting temperature = 0 to make this happen less.** See the snippet below for usage with Transformers: ```python import transformers import torch model_id = "aaditya/OpenBioLLM-Llama3-8B" pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device="auto", ) messages = [ {"role": "system", "content": "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience. Your name is OpenBioLLM, and you were developed by Saama AI Labs. who's willing to help answer the user's query with explanation. In your explanation, leverage your deep medical expertise such as relevant anatomical structures, physiological processes, diagnostic criteria, treatment guidelines, or other pertinent medical concepts. Use precise medical terminology while still aiming to make the explanation clear and accessible to a general audience."}, {"role": "user", "content": "How can i split a 3mg or 4mg waefin pill so i can get a 2.5mg pill?"}, ] prompt = pipeline.tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) terminators = [ pipeline.tokenizer.eos_token_id, pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = pipeline( prompt, max_new_tokens=256, eos_token_id=terminators, do_sample=True, temperature=0.0, top_p=0.9, ) print(outputs[0]["generated_text"][len(prompt):]) ``` ## **Training procedure** ### **Training hyperparameters** <details> <summary>Click to see details</summary> - learning_rate: 0.0002 - lr_scheduler: cosine - train_batch_size: 12 - eval_batch_size: 8 - GPU: H100 80GB SXM5 - num_devices: 1 - optimizer: adamw_bnb_8bit - lr_scheduler_warmup_steps: 100 - num_epochs: 4 </details> ### **Peft hyperparameters** <details> <summary>Click to see details</summary> - adapter: qlora - lora_r: 128 - lora_alpha: 256 - lora_dropout: 0.05 - lora_target_linear: true -lora_target_modules: - q_proj - v_proj - k_proj - o_proj - gate_proj - down_proj - up_proj </details> ### **Training results** ### **Framework versions** - Transformers 4.39.3 - Pytorch 2.1.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.1 - Axolotl - Lm harness for evaluation # Benchmark Results 🔥 OpenBioLLM-8B demonstrates superior performance compared to larger models, such as GPT-3.5, Meditron-70B across 9 diverse biomedical datasets, achieving state-of-the-art results with an average score of 72.50%, despite having a significantly smaller parameter count. The model's strong performance in domain-specific tasks, such as Clinical KG, Medical Genetics, and PubMedQA, highlights its ability to effectively capture and apply biomedical knowledge. 🚨 The GPT-4, Med-PaLM-1, and Med-PaLM-2 results are taken from their official papers. Since Med-PaLM doesn't provide zero-shot accuracy, we are using 5-shot accuracy from their paper for comparison. All results presented are in the zero-shot setting, except for Med-PaLM-2 and Med-PaLM-1, which use 5-shot accuracy. | | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA 4 opts | PubMedQA | MedMCQA | Avg | |--------------------|-------------|------------------|---------|--------------|-----------------|------------------|--------------|----------|---------|-------| | **OpenBioLLM-70B** | **92.93** | **93.197** | **83.904** | 93.75 | 93.827 | **85.749** | 78.162 | 78.97 | **74.014** | **86.05588** | | Med-PaLM-2 (5-shot) | 88.3 | 90 | 77.8 | **95.2** | 94.4 | 80.9 | **79.7** | **79.2** | 71.3 | 84.08 | | **GPT-4** | 86.04 | 91 | 80 | 93.01 | **95.14** | 76.88 | 78.87 | 75.2 | 69.52 | 82.85 | | Med-PaLM-1 (Flan-PaLM, 5-shot) | 80.4 | 75 | 63.7 | 83.8 | 88.9 | 76.3 | 67.6 | 79 | 57.6 | 74.7 | | **OpenBioLLM-8B** | 76.101 | 86.1 | 69.829 | 78.21 | 84.213 | 68.042 | 58.993 | 74.12 | 56.913 | 72.502 | | Gemini-1.0 | 76.7 | 75.8 | 66.7 | 77.7 | 88 | 69.2 | 58 | 70.7 | 54.3 | 70.79 | | GPT-3.5 Turbo 1106 | 74.71 | 74 | 72.79 | 72.79 | 72.91 | 64.73 | 57.71 | 72.66 | 53.79 | 66 | | Meditron-70B | 66.79 | 69 | 53.33 | 71.69 | 76.38 | 63 | 57.1 | 76.6 | 46.85 | 64.52 | | gemma-7b | 69.81 | 70 | 59.26 | 66.18 | 79.86 | 60.12 | 47.21 | 76.2 | 48.96 | 64.18 | | Mistral-7B-v0.1 | 68.68 | 71 | 55.56 | 68.38 | 68.06 | 59.54 | 50.82 | 75.4 | 48.2 | 62.85 | | Apollo-7B | 62.26 | 72 | 61.48 | 69.12 | 70.83 | 55.49 | 55.22 | 39.8 | 53.77 | 60 | | MedAlpaca-7b | 57.36 | 69 | 57.04 | 67.28 | 65.28 | 54.34 | 41.71 | 72.8 | 37.51 | 58.03 | | BioMistral-7B | 59.9 | 64 | 56.5 | 60.4 | 59 | 54.7 | 50.6 | 77.5 | 48.1 | 57.3 | | AlpaCare-llama2-7b | 49.81 | 49 | 45.92 | 33.82 | 50 | 43.35 | 29.77 | 72.2 | 34.42 | 45.36 | | ClinicalGPT | 30.56 | 27 | 30.37 | 19.48 | 25 | 24.27 | 26.08 | 63.8 | 28.18 | 30.52 | <div align="center"> <img width="1600px" src="https://hf.fast360.xyz/production/uploads/5f3fe13d79c1ba4c353d0c19/_SzdcJSBjZyo8RS1bTEkP.png"> </div> ## Detailed Medical Subjectwise accuracy ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/UXF-V0col0Z0sS6BGPBkE.png) # Use Cases & Examples 🚨 **Below results are from the quantized version of OpenBioLLM-70B** # Summarize Clinical Notes OpenBioLLM-70B can efficiently analyze and summarize complex clinical notes, EHR data, and discharge summaries, extracting key information and generating concise, structured summaries ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/xdwdBgOxNi_TfML0hKlI8.png) # Answer Medical Questions OpenBioLLM-70B can provide answers to a wide range of medical questions. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/zO95GlwOQEZqCKQF69mE6.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/OKBczKw7gWeW5xsuDpc27.png) <details> <summary>Click to see details</summary> ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/eJGHT5khppYvJb8fQ-YW4.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/Cnbwrqa_-ORHRuNRC2P6Y.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/J9DhdcvukAc9mnnW9fj2C.png) </details> # Clinical Entity Recognition OpenBioLLM-70B can perform advanced clinical entity recognition by identifying and extracting key medical concepts, such as diseases, symptoms, medications, procedures, and anatomical structures, from unstructured clinical text. By leveraging its deep understanding of medical terminology and context, the model can accurately annotate and categorize clinical entities, enabling more efficient information retrieval, data analysis, and knowledge discovery from electronic health records, research articles, and other biomedical text sources. This capability can support various downstream applications, such as clinical decision support, pharmacovigilance, and medical research. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/_69BW4k9LVABFwtxixL45.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/DKy5wYCoPhoPPUc1-x8_J.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/7WD9zCCBZT4-4XlfnIQjl.png) # Biomarkers Extraction ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/ZttoM4AiteT7gFYVhjIpN.png) # Classification OpenBioLLM-70B can perform various biomedical classification tasks, such as disease prediction, sentiment analysis, medical document categorization ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/Bf5MW1d75qT-1F_TR_hC0.png) # De-Identification OpenBioLLM-70B can detect and remove personally identifiable information (PII) from medical records, ensuring patient privacy and compliance with data protection regulations like HIPAA. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/hKX4kzm--Tw5bj6K78msy.png) **Advisory Notice!**  While OpenBioLLM-70B & 8B leverages high-quality data sources, its outputs may still contain inaccuracies, biases, or misalignments that could pose risks if relied upon for medical decision-making without further testing and refinement. The model's performance has not yet been rigorously evaluated in randomized controlled trials or real-world healthcare environments. Therefore, we strongly advise against using OpenBioLLM-70B & 8B for any direct patient care, clinical decision support, or other professional medical purposes at this time. Its use should be limited to research, development, and exploratory applications by qualified individuals who understand its limitations. OpenBioLLM-70B & 8B are intended solely as a research tool to assist healthcare professionals and should never be considered a replacement for the professional judgment and expertise of a qualified medical doctor. Appropriately adapting and validating OpenBioLLM-70B & 8B for specific medical use cases would require significant additional work, potentially including: - Thorough testing and evaluation in relevant clinical scenarios - Alignment with evidence-based guidelines and best practices - Mitigation of potential biases and failure modes - Integration with human oversight and interpretation - Compliance with regulatory and ethical standards Always consult a qualified healthcare provider for personal medical needs. # Citation If you find OpenBioLLM-70B & 8B useful in your work, please cite the model as follows: ``` @misc{OpenBioLLMs, author = {Ankit Pal, Malaikannan Sankarasubbu}, title = {OpenBioLLMs: Advancing Open-Source Large Language Models for Healthcare and Life Sciences}, year = {2024}, publisher = {Hugging Face}, journal = {Hugging Face repository}, howpublished = {\url{https://huggingface.co/aaditya/OpenBioLLM-Llama3-70B}} } ``` The accompanying paper is currently in progress and will be released soon. <div align="center"> <h2> 💌 Contact </h2> </div> We look forward to hearing you and collaborating on this exciting project! **Contributors:** - [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) [aadityaura at gmail dot com] - Saama AI Labs - Note: I am looking for a funded PhD opportunity, especially if it fits my Responsible Generative AI, Multimodal LLMs, Geometric Deep Learning, and Healthcare AI skillset. # References We thank the [Meta Team](meta-llama/Meta-Llama-3-70B-Instruct) for their amazing models! Result sources - [1] GPT-4 [Capabilities of GPT-4 on Medical Challenge Problems] (https://arxiv.org/abs/2303.13375) - [2] Med-PaLM-1 [Large Language Models Encode Clinical Knowledge](https://arxiv.org/abs/2212.13138) - [3] Med-PaLM-2 [Towards Expert-Level Medical Question Answering with Large Language Models](https://arxiv.org/abs/2305.09617) - [4] Gemini-1.0 [Gemini Goes to Med School](https://arxiv.org/abs/2402.07023)
[ "MEDQA", "PUBMEDQA" ]
bcastle/snowflake-arctic-embed-l-Q8_0-GGUF
bcastle
sentence-similarity
[ "sentence-transformers", "gguf", "feature-extraction", "sentence-similarity", "mteb", "arctic", "snowflake-arctic-embed", "transformers.js", "llama-cpp", "gguf-my-repo", "base_model:Snowflake/snowflake-arctic-embed-l", "base_model:quantized:Snowflake/snowflake-arctic-embed-l", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-07-23T14:05:16Z
2024-07-23T14:05:22+00:00
14
0
--- base_model: Snowflake/snowflake-arctic-embed-l license: apache-2.0 pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb - arctic - snowflake-arctic-embed - transformers.js - llama-cpp - gguf-my-repo model-index: - name: snowflake-arctic-embed-l results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 74.80597014925374 - type: ap value: 37.911466766189875 - type: f1 value: 68.88606927542106 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 78.402275 - type: ap value: 73.03294793248114 - type: f1 value: 78.3147786132161 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 36.717999999999996 - type: f1 value: 35.918044248787766 - task: type: Retrieval dataset: name: MTEB ArguAna type: mteb/arguana config: default split: test revision: c22ab2a51041ffd869aaddef7af8d8215647e41a metrics: - type: map_at_1 value: 34.495 - type: map_at_10 value: 50.236000000000004 - type: map_at_100 value: 50.944 - type: map_at_1000 value: 50.94499999999999 - type: map_at_3 value: 45.341 - type: map_at_5 value: 48.286 - type: mrr_at_1 value: 35.135 - type: mrr_at_10 value: 50.471 - type: mrr_at_100 value: 51.185 - type: mrr_at_1000 value: 51.187000000000005 - type: mrr_at_3 value: 45.602 - type: mrr_at_5 value: 48.468 - type: ndcg_at_1 value: 34.495 - type: ndcg_at_10 value: 59.086000000000006 - type: ndcg_at_100 value: 61.937 - type: ndcg_at_1000 value: 61.966 - type: ndcg_at_3 value: 49.062 - type: ndcg_at_5 value: 54.367 - type: precision_at_1 value: 34.495 - type: precision_at_10 value: 8.734 - type: precision_at_100 value: 0.9939999999999999 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 19.962 - type: precision_at_5 value: 14.552000000000001 - type: recall_at_1 value: 34.495 - type: recall_at_10 value: 87.33999999999999 - type: recall_at_100 value: 99.431 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 59.885999999999996 - type: recall_at_5 value: 72.76 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 47.46440874635501 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 38.28720154213723 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 60.34614226394902 - type: mrr value: 75.05628105351096 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 87.41072716728198 - type: cos_sim_spearman value: 86.34534093114372 - type: euclidean_pearson value: 85.34009667750838 - type: euclidean_spearman value: 86.34534093114372 - type: manhattan_pearson value: 85.2158833586889 - type: manhattan_spearman value: 86.60920236509224 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 80.06493506493507 - type: f1 value: 79.28108600339833 - task: type: Clustering dataset: name: MTEB BigPatentClustering type: jinaai/big-patent-clustering config: default split: test revision: 62d5330920bca426ce9d3c76ea914f15fc83e891 metrics: - type: v_measure value: 20.545049432417287 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 37.54369718479804 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 32.64941588219162 - task: type: Retrieval dataset: name: MTEB CQADupstackAndroidRetrieval type: mteb/cqadupstack-android config: default split: test revision: f46a197baaae43b4f621051089b82a364682dfeb metrics: - type: map_at_1 value: 37.264 - type: map_at_10 value: 49.43 - type: map_at_100 value: 50.967 - type: map_at_1000 value: 51.08200000000001 - type: map_at_3 value: 45.742 - type: map_at_5 value: 47.764 - type: mrr_at_1 value: 44.921 - type: mrr_at_10 value: 54.879999999999995 - type: mrr_at_100 value: 55.525000000000006 - type: mrr_at_1000 value: 55.565 - type: mrr_at_3 value: 52.480000000000004 - type: mrr_at_5 value: 53.86 - type: ndcg_at_1 value: 44.921 - type: ndcg_at_10 value: 55.664 - type: ndcg_at_100 value: 60.488 - type: ndcg_at_1000 value: 62.138000000000005 - type: ndcg_at_3 value: 50.797000000000004 - type: ndcg_at_5 value: 52.94799999999999 - type: precision_at_1 value: 44.921 - type: precision_at_10 value: 10.587 - type: precision_at_100 value: 1.629 - type: precision_at_1000 value: 0.203 - type: precision_at_3 value: 24.034 - type: precision_at_5 value: 17.224999999999998 - type: recall_at_1 value: 37.264 - type: recall_at_10 value: 67.15 - type: recall_at_100 value: 86.811 - type: recall_at_1000 value: 97.172 - type: recall_at_3 value: 53.15800000000001 - type: recall_at_5 value: 59.116 - task: type: Retrieval dataset: name: MTEB CQADupstackEnglishRetrieval type: mteb/cqadupstack-english config: default split: test revision: ad9991cb51e31e31e430383c75ffb2885547b5f0 metrics: - type: map_at_1 value: 36.237 - type: map_at_10 value: 47.941 - type: map_at_100 value: 49.131 - type: map_at_1000 value: 49.26 - type: map_at_3 value: 44.561 - type: map_at_5 value: 46.28 - type: mrr_at_1 value: 45.605000000000004 - type: mrr_at_10 value: 54.039 - type: mrr_at_100 value: 54.653 - type: mrr_at_1000 value: 54.688 - type: mrr_at_3 value: 52.006 - type: mrr_at_5 value: 53.096 - type: ndcg_at_1 value: 45.605000000000004 - type: ndcg_at_10 value: 53.916 - type: ndcg_at_100 value: 57.745999999999995 - type: ndcg_at_1000 value: 59.492999999999995 - type: ndcg_at_3 value: 49.774 - type: ndcg_at_5 value: 51.434999999999995 - type: precision_at_1 value: 45.605000000000004 - type: precision_at_10 value: 10.229000000000001 - type: precision_at_100 value: 1.55 - type: precision_at_1000 value: 0.2 - type: precision_at_3 value: 24.098 - type: precision_at_5 value: 16.726 - type: recall_at_1 value: 36.237 - type: recall_at_10 value: 64.03 - type: recall_at_100 value: 80.423 - type: recall_at_1000 value: 91.03 - type: recall_at_3 value: 51.20400000000001 - type: recall_at_5 value: 56.298 - task: type: Retrieval dataset: name: MTEB CQADupstackGamingRetrieval type: mteb/cqadupstack-gaming config: default split: test revision: 4885aa143210c98657558c04aaf3dc47cfb54340 metrics: - type: map_at_1 value: 47.278 - type: map_at_10 value: 59.757000000000005 - type: map_at_100 value: 60.67 - type: map_at_1000 value: 60.714 - type: map_at_3 value: 56.714 - type: map_at_5 value: 58.453 - type: mrr_at_1 value: 53.73 - type: mrr_at_10 value: 62.970000000000006 - type: mrr_at_100 value: 63.507999999999996 - type: mrr_at_1000 value: 63.53 - type: mrr_at_3 value: 60.909 - type: mrr_at_5 value: 62.172000000000004 - type: ndcg_at_1 value: 53.73 - type: ndcg_at_10 value: 64.97 - type: ndcg_at_100 value: 68.394 - type: ndcg_at_1000 value: 69.255 - type: ndcg_at_3 value: 60.228 - type: ndcg_at_5 value: 62.617999999999995 - type: precision_at_1 value: 53.73 - type: precision_at_10 value: 10.056 - type: precision_at_100 value: 1.265 - type: precision_at_1000 value: 0.13699999999999998 - type: precision_at_3 value: 26.332 - type: precision_at_5 value: 17.743000000000002 - type: recall_at_1 value: 47.278 - type: recall_at_10 value: 76.86500000000001 - type: recall_at_100 value: 91.582 - type: recall_at_1000 value: 97.583 - type: recall_at_3 value: 64.443 - type: recall_at_5 value: 70.283 - task: type: Retrieval dataset: name: MTEB CQADupstackGisRetrieval type: mteb/cqadupstack-gis config: default split: test revision: 5003b3064772da1887988e05400cf3806fe491f2 metrics: - type: map_at_1 value: 29.702 - type: map_at_10 value: 39.463 - type: map_at_100 value: 40.508 - type: map_at_1000 value: 40.579 - type: map_at_3 value: 36.748999999999995 - type: map_at_5 value: 38.296 - type: mrr_at_1 value: 31.977 - type: mrr_at_10 value: 41.739 - type: mrr_at_100 value: 42.586 - type: mrr_at_1000 value: 42.636 - type: mrr_at_3 value: 39.096 - type: mrr_at_5 value: 40.695 - type: ndcg_at_1 value: 31.977 - type: ndcg_at_10 value: 44.855000000000004 - type: ndcg_at_100 value: 49.712 - type: ndcg_at_1000 value: 51.443000000000005 - type: ndcg_at_3 value: 39.585 - type: ndcg_at_5 value: 42.244 - type: precision_at_1 value: 31.977 - type: precision_at_10 value: 6.768000000000001 - type: precision_at_100 value: 0.9690000000000001 - type: precision_at_1000 value: 0.116 - type: precision_at_3 value: 16.761 - type: precision_at_5 value: 11.593 - type: recall_at_1 value: 29.702 - type: recall_at_10 value: 59.082 - type: recall_at_100 value: 80.92 - type: recall_at_1000 value: 93.728 - type: recall_at_3 value: 45.212 - type: recall_at_5 value: 51.449 - task: type: Retrieval dataset: name: MTEB CQADupstackMathematicaRetrieval type: mteb/cqadupstack-mathematica config: default split: test revision: 90fceea13679c63fe563ded68f3b6f06e50061de metrics: - type: map_at_1 value: 21.336 - type: map_at_10 value: 30.137999999999998 - type: map_at_100 value: 31.385 - type: map_at_1000 value: 31.495 - type: map_at_3 value: 27.481 - type: map_at_5 value: 28.772 - type: mrr_at_1 value: 25.871 - type: mrr_at_10 value: 34.686 - type: mrr_at_100 value: 35.649 - type: mrr_at_1000 value: 35.705 - type: mrr_at_3 value: 32.09 - type: mrr_at_5 value: 33.52 - type: ndcg_at_1 value: 25.871 - type: ndcg_at_10 value: 35.617 - type: ndcg_at_100 value: 41.272999999999996 - type: ndcg_at_1000 value: 43.725 - type: ndcg_at_3 value: 30.653999999999996 - type: ndcg_at_5 value: 32.714 - type: precision_at_1 value: 25.871 - type: precision_at_10 value: 6.4799999999999995 - type: precision_at_100 value: 1.0699999999999998 - type: precision_at_1000 value: 0.13999999999999999 - type: precision_at_3 value: 14.469000000000001 - type: precision_at_5 value: 10.274 - type: recall_at_1 value: 21.336 - type: recall_at_10 value: 47.746 - type: recall_at_100 value: 71.773 - type: recall_at_1000 value: 89.05199999999999 - type: recall_at_3 value: 34.172999999999995 - type: recall_at_5 value: 39.397999999999996 - task: type: Retrieval dataset: name: MTEB CQADupstackPhysicsRetrieval type: mteb/cqadupstack-physics config: default split: test revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4 metrics: - type: map_at_1 value: 34.424 - type: map_at_10 value: 45.647999999999996 - type: map_at_100 value: 46.907 - type: map_at_1000 value: 47.010999999999996 - type: map_at_3 value: 42.427 - type: map_at_5 value: 44.285000000000004 - type: mrr_at_1 value: 41.867 - type: mrr_at_10 value: 51.17699999999999 - type: mrr_at_100 value: 51.937 - type: mrr_at_1000 value: 51.975 - type: mrr_at_3 value: 48.941 - type: mrr_at_5 value: 50.322 - type: ndcg_at_1 value: 41.867 - type: ndcg_at_10 value: 51.534 - type: ndcg_at_100 value: 56.696999999999996 - type: ndcg_at_1000 value: 58.475 - type: ndcg_at_3 value: 46.835 - type: ndcg_at_5 value: 49.161 - type: precision_at_1 value: 41.867 - type: precision_at_10 value: 9.134 - type: precision_at_100 value: 1.362 - type: precision_at_1000 value: 0.17099999999999999 - type: precision_at_3 value: 22.073 - type: precision_at_5 value: 15.495999999999999 - type: recall_at_1 value: 34.424 - type: recall_at_10 value: 63.237 - type: recall_at_100 value: 84.774 - type: recall_at_1000 value: 95.987 - type: recall_at_3 value: 49.888 - type: recall_at_5 value: 55.940999999999995 - task: type: Retrieval dataset: name: MTEB CQADupstackProgrammersRetrieval type: mteb/cqadupstack-programmers config: default split: test revision: 6184bc1440d2dbc7612be22b50686b8826d22b32 metrics: - type: map_at_1 value: 30.72 - type: map_at_10 value: 41.327999999999996 - type: map_at_100 value: 42.651 - type: map_at_1000 value: 42.739 - type: map_at_3 value: 38.223 - type: map_at_5 value: 40.053 - type: mrr_at_1 value: 37.9 - type: mrr_at_10 value: 46.857 - type: mrr_at_100 value: 47.673 - type: mrr_at_1000 value: 47.711999999999996 - type: mrr_at_3 value: 44.292 - type: mrr_at_5 value: 45.845 - type: ndcg_at_1 value: 37.9 - type: ndcg_at_10 value: 47.105999999999995 - type: ndcg_at_100 value: 52.56999999999999 - type: ndcg_at_1000 value: 54.37800000000001 - type: ndcg_at_3 value: 42.282 - type: ndcg_at_5 value: 44.646 - type: precision_at_1 value: 37.9 - type: precision_at_10 value: 8.368 - type: precision_at_100 value: 1.283 - type: precision_at_1000 value: 0.16 - type: precision_at_3 value: 20.015 - type: precision_at_5 value: 14.132 - type: recall_at_1 value: 30.72 - type: recall_at_10 value: 58.826 - type: recall_at_100 value: 82.104 - type: recall_at_1000 value: 94.194 - type: recall_at_3 value: 44.962999999999994 - type: recall_at_5 value: 51.426 - task: type: Retrieval dataset: name: MTEB CQADupstackRetrieval type: mteb/cqadupstack config: default split: test revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4 metrics: - type: map_at_1 value: 31.656583333333334 - type: map_at_10 value: 41.59883333333333 - type: map_at_100 value: 42.80350000000001 - type: map_at_1000 value: 42.91075 - type: map_at_3 value: 38.68908333333333 - type: map_at_5 value: 40.27733333333334 - type: mrr_at_1 value: 37.23483333333334 - type: mrr_at_10 value: 45.782000000000004 - type: mrr_at_100 value: 46.577083333333334 - type: mrr_at_1000 value: 46.62516666666667 - type: mrr_at_3 value: 43.480666666666664 - type: mrr_at_5 value: 44.79833333333333 - type: ndcg_at_1 value: 37.23483333333334 - type: ndcg_at_10 value: 46.971500000000006 - type: ndcg_at_100 value: 51.90125 - type: ndcg_at_1000 value: 53.86366666666667 - type: ndcg_at_3 value: 42.31791666666667 - type: ndcg_at_5 value: 44.458666666666666 - type: precision_at_1 value: 37.23483333333334 - type: precision_at_10 value: 8.044583333333332 - type: precision_at_100 value: 1.2334166666666666 - type: precision_at_1000 value: 0.15925 - type: precision_at_3 value: 19.240833333333327 - type: precision_at_5 value: 13.435083333333333 - type: recall_at_1 value: 31.656583333333334 - type: recall_at_10 value: 58.44758333333333 - type: recall_at_100 value: 79.93658333333332 - type: recall_at_1000 value: 93.32491666666668 - type: recall_at_3 value: 45.44266666666667 - type: recall_at_5 value: 50.99866666666666 - task: type: Retrieval dataset: name: MTEB CQADupstackStatsRetrieval type: mteb/cqadupstack-stats config: default split: test revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a metrics: - type: map_at_1 value: 28.247 - type: map_at_10 value: 35.443999999999996 - type: map_at_100 value: 36.578 - type: map_at_1000 value: 36.675999999999995 - type: map_at_3 value: 33.276 - type: map_at_5 value: 34.536 - type: mrr_at_1 value: 31.747999999999998 - type: mrr_at_10 value: 38.413000000000004 - type: mrr_at_100 value: 39.327 - type: mrr_at_1000 value: 39.389 - type: mrr_at_3 value: 36.401 - type: mrr_at_5 value: 37.543 - type: ndcg_at_1 value: 31.747999999999998 - type: ndcg_at_10 value: 39.646 - type: ndcg_at_100 value: 44.861000000000004 - type: ndcg_at_1000 value: 47.197 - type: ndcg_at_3 value: 35.764 - type: ndcg_at_5 value: 37.635999999999996 - type: precision_at_1 value: 31.747999999999998 - type: precision_at_10 value: 6.12 - type: precision_at_100 value: 0.942 - type: precision_at_1000 value: 0.123 - type: precision_at_3 value: 15.235000000000001 - type: precision_at_5 value: 10.491 - type: recall_at_1 value: 28.247 - type: recall_at_10 value: 49.456 - type: recall_at_100 value: 73.02499999999999 - type: recall_at_1000 value: 89.898 - type: recall_at_3 value: 38.653999999999996 - type: recall_at_5 value: 43.259 - task: type: Retrieval dataset: name: MTEB CQADupstackTexRetrieval type: mteb/cqadupstack-tex config: default split: test revision: 46989137a86843e03a6195de44b09deda022eec7 metrics: - type: map_at_1 value: 22.45 - type: map_at_10 value: 30.476999999999997 - type: map_at_100 value: 31.630999999999997 - type: map_at_1000 value: 31.755 - type: map_at_3 value: 27.989000000000004 - type: map_at_5 value: 29.410999999999998 - type: mrr_at_1 value: 26.979 - type: mrr_at_10 value: 34.316 - type: mrr_at_100 value: 35.272999999999996 - type: mrr_at_1000 value: 35.342 - type: mrr_at_3 value: 32.14 - type: mrr_at_5 value: 33.405 - type: ndcg_at_1 value: 26.979 - type: ndcg_at_10 value: 35.166 - type: ndcg_at_100 value: 40.583000000000006 - type: ndcg_at_1000 value: 43.282 - type: ndcg_at_3 value: 30.916 - type: ndcg_at_5 value: 32.973 - type: precision_at_1 value: 26.979 - type: precision_at_10 value: 6.132 - type: precision_at_100 value: 1.047 - type: precision_at_1000 value: 0.145 - type: precision_at_3 value: 14.360999999999999 - type: precision_at_5 value: 10.227 - type: recall_at_1 value: 22.45 - type: recall_at_10 value: 45.348 - type: recall_at_100 value: 69.484 - type: recall_at_1000 value: 88.628 - type: recall_at_3 value: 33.338 - type: recall_at_5 value: 38.746 - task: type: Retrieval dataset: name: MTEB CQADupstackUnixRetrieval type: mteb/cqadupstack-unix config: default split: test revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53 metrics: - type: map_at_1 value: 32.123000000000005 - type: map_at_10 value: 41.778 - type: map_at_100 value: 42.911 - type: map_at_1000 value: 42.994 - type: map_at_3 value: 38.558 - type: map_at_5 value: 40.318 - type: mrr_at_1 value: 37.687 - type: mrr_at_10 value: 45.889 - type: mrr_at_100 value: 46.672999999999995 - type: mrr_at_1000 value: 46.72 - type: mrr_at_3 value: 43.33 - type: mrr_at_5 value: 44.734 - type: ndcg_at_1 value: 37.687 - type: ndcg_at_10 value: 47.258 - type: ndcg_at_100 value: 52.331 - type: ndcg_at_1000 value: 54.152 - type: ndcg_at_3 value: 41.857 - type: ndcg_at_5 value: 44.283 - type: precision_at_1 value: 37.687 - type: precision_at_10 value: 7.892 - type: precision_at_100 value: 1.183 - type: precision_at_1000 value: 0.14300000000000002 - type: precision_at_3 value: 18.781 - type: precision_at_5 value: 13.134 - type: recall_at_1 value: 32.123000000000005 - type: recall_at_10 value: 59.760000000000005 - type: recall_at_100 value: 81.652 - type: recall_at_1000 value: 94.401 - type: recall_at_3 value: 44.996 - type: recall_at_5 value: 51.184 - task: type: Retrieval dataset: name: MTEB CQADupstackWebmastersRetrieval type: mteb/cqadupstack-webmasters config: default split: test revision: 160c094312a0e1facb97e55eeddb698c0abe3571 metrics: - type: map_at_1 value: 33.196999999999996 - type: map_at_10 value: 42.012 - type: map_at_100 value: 43.663999999999994 - type: map_at_1000 value: 43.883 - type: map_at_3 value: 39.33 - type: map_at_5 value: 40.586 - type: mrr_at_1 value: 39.328 - type: mrr_at_10 value: 46.57 - type: mrr_at_100 value: 47.508 - type: mrr_at_1000 value: 47.558 - type: mrr_at_3 value: 44.532 - type: mrr_at_5 value: 45.58 - type: ndcg_at_1 value: 39.328 - type: ndcg_at_10 value: 47.337 - type: ndcg_at_100 value: 52.989 - type: ndcg_at_1000 value: 55.224 - type: ndcg_at_3 value: 43.362 - type: ndcg_at_5 value: 44.866 - type: precision_at_1 value: 39.328 - type: precision_at_10 value: 8.577 - type: precision_at_100 value: 1.5789999999999997 - type: precision_at_1000 value: 0.25 - type: precision_at_3 value: 19.697 - type: precision_at_5 value: 13.755 - type: recall_at_1 value: 33.196999999999996 - type: recall_at_10 value: 56.635000000000005 - type: recall_at_100 value: 81.882 - type: recall_at_1000 value: 95.342 - type: recall_at_3 value: 44.969 - type: recall_at_5 value: 49.266 - task: type: Retrieval dataset: name: MTEB CQADupstackWordpressRetrieval type: mteb/cqadupstack-wordpress config: default split: test revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4 metrics: - type: map_at_1 value: 26.901000000000003 - type: map_at_10 value: 35.77 - type: map_at_100 value: 36.638999999999996 - type: map_at_1000 value: 36.741 - type: map_at_3 value: 33.219 - type: map_at_5 value: 34.574 - type: mrr_at_1 value: 29.205 - type: mrr_at_10 value: 37.848 - type: mrr_at_100 value: 38.613 - type: mrr_at_1000 value: 38.682 - type: mrr_at_3 value: 35.551 - type: mrr_at_5 value: 36.808 - type: ndcg_at_1 value: 29.205 - type: ndcg_at_10 value: 40.589 - type: ndcg_at_100 value: 45.171 - type: ndcg_at_1000 value: 47.602 - type: ndcg_at_3 value: 35.760999999999996 - type: ndcg_at_5 value: 37.980000000000004 - type: precision_at_1 value: 29.205 - type: precision_at_10 value: 6.192 - type: precision_at_100 value: 0.922 - type: precision_at_1000 value: 0.123 - type: precision_at_3 value: 15.034 - type: precision_at_5 value: 10.424999999999999 - type: recall_at_1 value: 26.901000000000003 - type: recall_at_10 value: 53.236000000000004 - type: recall_at_100 value: 74.809 - type: recall_at_1000 value: 92.884 - type: recall_at_3 value: 40.314 - type: recall_at_5 value: 45.617999999999995 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: mteb/climate-fever config: default split: test revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380 metrics: - type: map_at_1 value: 16.794999999999998 - type: map_at_10 value: 29.322 - type: map_at_100 value: 31.463 - type: map_at_1000 value: 31.643 - type: map_at_3 value: 24.517 - type: map_at_5 value: 27.237000000000002 - type: mrr_at_1 value: 37.655 - type: mrr_at_10 value: 50.952 - type: mrr_at_100 value: 51.581999999999994 - type: mrr_at_1000 value: 51.61 - type: mrr_at_3 value: 47.991 - type: mrr_at_5 value: 49.744 - type: ndcg_at_1 value: 37.655 - type: ndcg_at_10 value: 39.328 - type: ndcg_at_100 value: 46.358 - type: ndcg_at_1000 value: 49.245 - type: ndcg_at_3 value: 33.052 - type: ndcg_at_5 value: 35.407 - type: precision_at_1 value: 37.655 - type: precision_at_10 value: 12.202 - type: precision_at_100 value: 1.9789999999999999 - type: precision_at_1000 value: 0.252 - type: precision_at_3 value: 24.973 - type: precision_at_5 value: 19.075 - type: recall_at_1 value: 16.794999999999998 - type: recall_at_10 value: 45.716 - type: recall_at_100 value: 68.919 - type: recall_at_1000 value: 84.71600000000001 - type: recall_at_3 value: 30.135 - type: recall_at_5 value: 37.141999999999996 - task: type: Retrieval dataset: name: MTEB DBPedia type: mteb/dbpedia config: default split: test revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659 metrics: - type: map_at_1 value: 9.817 - type: map_at_10 value: 22.058 - type: map_at_100 value: 31.805 - type: map_at_1000 value: 33.562999999999995 - type: map_at_3 value: 15.537 - type: map_at_5 value: 18.199 - type: mrr_at_1 value: 72.75 - type: mrr_at_10 value: 79.804 - type: mrr_at_100 value: 80.089 - type: mrr_at_1000 value: 80.09100000000001 - type: mrr_at_3 value: 78.75 - type: mrr_at_5 value: 79.325 - type: ndcg_at_1 value: 59.875 - type: ndcg_at_10 value: 45.972 - type: ndcg_at_100 value: 51.092999999999996 - type: ndcg_at_1000 value: 58.048 - type: ndcg_at_3 value: 50.552 - type: ndcg_at_5 value: 47.672 - type: precision_at_1 value: 72.75 - type: precision_at_10 value: 37.05 - type: precision_at_100 value: 12.005 - type: precision_at_1000 value: 2.221 - type: precision_at_3 value: 54.083000000000006 - type: precision_at_5 value: 46.2 - type: recall_at_1 value: 9.817 - type: recall_at_10 value: 27.877000000000002 - type: recall_at_100 value: 57.974000000000004 - type: recall_at_1000 value: 80.085 - type: recall_at_3 value: 16.911 - type: recall_at_5 value: 20.689 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 46.464999999999996 - type: f1 value: 42.759588662873796 - task: type: Retrieval dataset: name: MTEB FEVER type: mteb/fever config: default split: test revision: bea83ef9e8fb933d90a2f1d5515737465d613e12 metrics: - type: map_at_1 value: 75.82900000000001 - type: map_at_10 value: 84.613 - type: map_at_100 value: 84.845 - type: map_at_1000 value: 84.855 - type: map_at_3 value: 83.498 - type: map_at_5 value: 84.29299999999999 - type: mrr_at_1 value: 81.69800000000001 - type: mrr_at_10 value: 88.84100000000001 - type: mrr_at_100 value: 88.887 - type: mrr_at_1000 value: 88.888 - type: mrr_at_3 value: 88.179 - type: mrr_at_5 value: 88.69200000000001 - type: ndcg_at_1 value: 81.69800000000001 - type: ndcg_at_10 value: 88.21799999999999 - type: ndcg_at_100 value: 88.961 - type: ndcg_at_1000 value: 89.131 - type: ndcg_at_3 value: 86.591 - type: ndcg_at_5 value: 87.666 - type: precision_at_1 value: 81.69800000000001 - type: precision_at_10 value: 10.615 - type: precision_at_100 value: 1.125 - type: precision_at_1000 value: 0.11499999999999999 - type: precision_at_3 value: 33.208 - type: precision_at_5 value: 20.681 - type: recall_at_1 value: 75.82900000000001 - type: recall_at_10 value: 94.97 - type: recall_at_100 value: 97.786 - type: recall_at_1000 value: 98.809 - type: recall_at_3 value: 90.625 - type: recall_at_5 value: 93.345 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: mteb/fiqa config: default split: test revision: 27a168819829fe9bcd655c2df245fb19452e8e06 metrics: - type: map_at_1 value: 22.788 - type: map_at_10 value: 36.71 - type: map_at_100 value: 38.527 - type: map_at_1000 value: 38.701 - type: map_at_3 value: 32.318999999999996 - type: map_at_5 value: 34.809 - type: mrr_at_1 value: 44.444 - type: mrr_at_10 value: 52.868 - type: mrr_at_100 value: 53.52400000000001 - type: mrr_at_1000 value: 53.559999999999995 - type: mrr_at_3 value: 50.153999999999996 - type: mrr_at_5 value: 51.651 - type: ndcg_at_1 value: 44.444 - type: ndcg_at_10 value: 44.707 - type: ndcg_at_100 value: 51.174 - type: ndcg_at_1000 value: 53.996 - type: ndcg_at_3 value: 40.855999999999995 - type: ndcg_at_5 value: 42.113 - type: precision_at_1 value: 44.444 - type: precision_at_10 value: 12.021999999999998 - type: precision_at_100 value: 1.8950000000000002 - type: precision_at_1000 value: 0.241 - type: precision_at_3 value: 26.8 - type: precision_at_5 value: 19.66 - type: recall_at_1 value: 22.788 - type: recall_at_10 value: 51.793 - type: recall_at_100 value: 75.69500000000001 - type: recall_at_1000 value: 92.292 - type: recall_at_3 value: 37.375 - type: recall_at_5 value: 43.682 - task: type: Retrieval dataset: name: MTEB HotpotQA type: mteb/hotpotqa config: default split: test revision: ab518f4d6fcca38d87c25209f94beba119d02014 metrics: - type: map_at_1 value: 41.276 - type: map_at_10 value: 67.245 - type: map_at_100 value: 68.061 - type: map_at_1000 value: 68.11399999999999 - type: map_at_3 value: 63.693 - type: map_at_5 value: 65.90899999999999 - type: mrr_at_1 value: 82.552 - type: mrr_at_10 value: 87.741 - type: mrr_at_100 value: 87.868 - type: mrr_at_1000 value: 87.871 - type: mrr_at_3 value: 86.98599999999999 - type: mrr_at_5 value: 87.469 - type: ndcg_at_1 value: 82.552 - type: ndcg_at_10 value: 75.176 - type: ndcg_at_100 value: 77.902 - type: ndcg_at_1000 value: 78.852 - type: ndcg_at_3 value: 70.30499999999999 - type: ndcg_at_5 value: 73.00999999999999 - type: precision_at_1 value: 82.552 - type: precision_at_10 value: 15.765 - type: precision_at_100 value: 1.788 - type: precision_at_1000 value: 0.191 - type: precision_at_3 value: 45.375 - type: precision_at_5 value: 29.360999999999997 - type: recall_at_1 value: 41.276 - type: recall_at_10 value: 78.825 - type: recall_at_100 value: 89.41900000000001 - type: recall_at_1000 value: 95.625 - type: recall_at_3 value: 68.062 - type: recall_at_5 value: 73.40299999999999 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 72.876 - type: ap value: 67.15477852410164 - type: f1 value: 72.65147370025373 - task: type: Retrieval dataset: name: MTEB MSMARCO type: mteb/msmarco config: default split: dev revision: c5a29a104738b98a9e76336939199e264163d4a0 metrics: - type: map_at_1 value: 21.748 - type: map_at_10 value: 34.626000000000005 - type: map_at_100 value: 35.813 - type: map_at_1000 value: 35.859 - type: map_at_3 value: 30.753000000000004 - type: map_at_5 value: 33.049 - type: mrr_at_1 value: 22.35 - type: mrr_at_10 value: 35.23 - type: mrr_at_100 value: 36.359 - type: mrr_at_1000 value: 36.399 - type: mrr_at_3 value: 31.436999999999998 - type: mrr_at_5 value: 33.687 - type: ndcg_at_1 value: 22.364 - type: ndcg_at_10 value: 41.677 - type: ndcg_at_100 value: 47.355999999999995 - type: ndcg_at_1000 value: 48.494 - type: ndcg_at_3 value: 33.85 - type: ndcg_at_5 value: 37.942 - type: precision_at_1 value: 22.364 - type: precision_at_10 value: 6.6000000000000005 - type: precision_at_100 value: 0.9450000000000001 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 14.527000000000001 - type: precision_at_5 value: 10.796999999999999 - type: recall_at_1 value: 21.748 - type: recall_at_10 value: 63.292 - type: recall_at_100 value: 89.427 - type: recall_at_1000 value: 98.13499999999999 - type: recall_at_3 value: 42.126000000000005 - type: recall_at_5 value: 51.968 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 92.62425900592795 - type: f1 value: 92.08497761553683 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 64.51436388508893 - type: f1 value: 45.884016531912906 - task: type: Classification dataset: name: MTEB MasakhaNEWSClassification (eng) type: masakhane/masakhanews config: eng split: test revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60 metrics: - type: accuracy value: 76.57172995780591 - type: f1 value: 75.52979910878491 - task: type: Clustering dataset: name: MTEB MasakhaNEWSClusteringP2P (eng) type: masakhane/masakhanews config: eng split: test revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60 metrics: - type: v_measure value: 44.84052695201612 - type: v_measure value: 21.443971229936494 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 65.79354404841965 - type: f1 value: 63.17260074126185 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.09616677874916 - type: f1 value: 69.74285784421075 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 31.474709231086184 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 28.93630367824217 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 29.08234393834005 - type: mrr value: 29.740466971605432 - task: type: Retrieval dataset: name: MTEB NFCorpus type: mteb/nfcorpus config: default split: test revision: ec0fa4fe99da2ff19ca1214b7966684033a58814 metrics: - type: map_at_1 value: 6.2059999999999995 - type: map_at_10 value: 14.442 - type: map_at_100 value: 18.005 - type: map_at_1000 value: 19.488 - type: map_at_3 value: 10.666 - type: map_at_5 value: 12.45 - type: mrr_at_1 value: 47.678 - type: mrr_at_10 value: 57.519 - type: mrr_at_100 value: 58.13700000000001 - type: mrr_at_1000 value: 58.167 - type: mrr_at_3 value: 55.779 - type: mrr_at_5 value: 56.940000000000005 - type: ndcg_at_1 value: 45.82 - type: ndcg_at_10 value: 37.651 - type: ndcg_at_100 value: 34.001999999999995 - type: ndcg_at_1000 value: 42.626 - type: ndcg_at_3 value: 43.961 - type: ndcg_at_5 value: 41.461 - type: precision_at_1 value: 47.678 - type: precision_at_10 value: 27.584999999999997 - type: precision_at_100 value: 8.455 - type: precision_at_1000 value: 2.118 - type: precision_at_3 value: 41.692 - type: precision_at_5 value: 36.161 - type: recall_at_1 value: 6.2059999999999995 - type: recall_at_10 value: 18.599 - type: recall_at_100 value: 33.608 - type: recall_at_1000 value: 65.429 - type: recall_at_3 value: 12.126000000000001 - type: recall_at_5 value: 14.902000000000001 - task: type: Retrieval dataset: name: MTEB NQ type: mteb/nq config: default split: test revision: b774495ed302d8c44a3a7ea25c90dbce03968f31 metrics: - type: map_at_1 value: 39.117000000000004 - type: map_at_10 value: 55.535000000000004 - type: map_at_100 value: 56.32899999999999 - type: map_at_1000 value: 56.34400000000001 - type: map_at_3 value: 51.439 - type: map_at_5 value: 53.89699999999999 - type: mrr_at_1 value: 43.714 - type: mrr_at_10 value: 58.05200000000001 - type: mrr_at_100 value: 58.582 - type: mrr_at_1000 value: 58.592 - type: mrr_at_3 value: 54.896 - type: mrr_at_5 value: 56.874 - type: ndcg_at_1 value: 43.685 - type: ndcg_at_10 value: 63.108 - type: ndcg_at_100 value: 66.231 - type: ndcg_at_1000 value: 66.583 - type: ndcg_at_3 value: 55.659000000000006 - type: ndcg_at_5 value: 59.681 - type: precision_at_1 value: 43.685 - type: precision_at_10 value: 9.962 - type: precision_at_100 value: 1.174 - type: precision_at_1000 value: 0.121 - type: precision_at_3 value: 24.961 - type: precision_at_5 value: 17.352 - type: recall_at_1 value: 39.117000000000004 - type: recall_at_10 value: 83.408 - type: recall_at_100 value: 96.553 - type: recall_at_1000 value: 99.136 - type: recall_at_3 value: 64.364 - type: recall_at_5 value: 73.573 - task: type: Classification dataset: name: MTEB NewsClassification type: ag_news config: default split: test revision: eb185aade064a813bc0b7f42de02595523103ca4 metrics: - type: accuracy value: 78.87763157894737 - type: f1 value: 78.69611753876177 - task: type: PairClassification dataset: name: MTEB OpusparcusPC (en) type: GEM/opusparcus config: en split: test revision: 9e9b1f8ef51616073f47f306f7f47dd91663f86a metrics: - type: cos_sim_accuracy value: 99.89816700610999 - type: cos_sim_ap value: 100 - type: cos_sim_f1 value: 99.9490575649516 - type: cos_sim_precision value: 100 - type: cos_sim_recall value: 99.89816700610999 - type: dot_accuracy value: 99.89816700610999 - type: dot_ap value: 100 - type: dot_f1 value: 99.9490575649516 - type: dot_precision value: 100 - type: dot_recall value: 99.89816700610999 - type: euclidean_accuracy value: 99.89816700610999 - type: euclidean_ap value: 100 - type: euclidean_f1 value: 99.9490575649516 - type: euclidean_precision value: 100 - type: euclidean_recall value: 99.89816700610999 - type: manhattan_accuracy value: 99.89816700610999 - type: manhattan_ap value: 100 - type: manhattan_f1 value: 99.9490575649516 - type: manhattan_precision value: 100 - type: manhattan_recall value: 99.89816700610999 - type: max_accuracy value: 99.89816700610999 - type: max_ap value: 100 - type: max_f1 value: 99.9490575649516 - task: type: PairClassification dataset: name: MTEB PawsX (en) type: paws-x config: en split: test revision: 8a04d940a42cd40658986fdd8e3da561533a3646 metrics: - type: cos_sim_accuracy value: 62 - type: cos_sim_ap value: 62.26837791655737 - type: cos_sim_f1 value: 62.607449856733524 - type: cos_sim_precision value: 46.36604774535809 - type: cos_sim_recall value: 96.36163175303197 - type: dot_accuracy value: 62 - type: dot_ap value: 62.26736459439965 - type: dot_f1 value: 62.607449856733524 - type: dot_precision value: 46.36604774535809 - type: dot_recall value: 96.36163175303197 - type: euclidean_accuracy value: 62 - type: euclidean_ap value: 62.26826112548132 - type: euclidean_f1 value: 62.607449856733524 - type: euclidean_precision value: 46.36604774535809 - type: euclidean_recall value: 96.36163175303197 - type: manhattan_accuracy value: 62 - type: manhattan_ap value: 62.26223761507973 - type: manhattan_f1 value: 62.585034013605444 - type: manhattan_precision value: 46.34146341463415 - type: manhattan_recall value: 96.36163175303197 - type: max_accuracy value: 62 - type: max_ap value: 62.26837791655737 - type: max_f1 value: 62.607449856733524 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: mteb/quora config: default split: test revision: e4e08e0b7dbe3c8700f0daef558ff32256715259 metrics: - type: map_at_1 value: 69.90899999999999 - type: map_at_10 value: 83.56700000000001 - type: map_at_100 value: 84.19200000000001 - type: map_at_1000 value: 84.212 - type: map_at_3 value: 80.658 - type: map_at_5 value: 82.473 - type: mrr_at_1 value: 80.4 - type: mrr_at_10 value: 86.699 - type: mrr_at_100 value: 86.798 - type: mrr_at_1000 value: 86.80099999999999 - type: mrr_at_3 value: 85.677 - type: mrr_at_5 value: 86.354 - type: ndcg_at_1 value: 80.43 - type: ndcg_at_10 value: 87.41 - type: ndcg_at_100 value: 88.653 - type: ndcg_at_1000 value: 88.81599999999999 - type: ndcg_at_3 value: 84.516 - type: ndcg_at_5 value: 86.068 - type: precision_at_1 value: 80.43 - type: precision_at_10 value: 13.234000000000002 - type: precision_at_100 value: 1.513 - type: precision_at_1000 value: 0.156 - type: precision_at_3 value: 36.93 - type: precision_at_5 value: 24.26 - type: recall_at_1 value: 69.90899999999999 - type: recall_at_10 value: 94.687 - type: recall_at_100 value: 98.96000000000001 - type: recall_at_1000 value: 99.79599999999999 - type: recall_at_3 value: 86.25699999999999 - type: recall_at_5 value: 90.70700000000001 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 46.02256865360266 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 385e3cb46b4cfa89021f56c4380204149d0efe33 metrics: - type: v_measure value: 62.43157528757563 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: mteb/scidocs config: default split: test revision: f8c2fcf00f625baaa80f62ec5bd9e1fff3b8ae88 metrics: - type: map_at_1 value: 5.093 - type: map_at_10 value: 12.982 - type: map_at_100 value: 15.031 - type: map_at_1000 value: 15.334 - type: map_at_3 value: 9.339 - type: map_at_5 value: 11.183 - type: mrr_at_1 value: 25.1 - type: mrr_at_10 value: 36.257 - type: mrr_at_100 value: 37.351 - type: mrr_at_1000 value: 37.409 - type: mrr_at_3 value: 33.050000000000004 - type: mrr_at_5 value: 35.205 - type: ndcg_at_1 value: 25.1 - type: ndcg_at_10 value: 21.361 - type: ndcg_at_100 value: 29.396 - type: ndcg_at_1000 value: 34.849999999999994 - type: ndcg_at_3 value: 20.704 - type: ndcg_at_5 value: 18.086 - type: precision_at_1 value: 25.1 - type: precision_at_10 value: 10.94 - type: precision_at_100 value: 2.257 - type: precision_at_1000 value: 0.358 - type: precision_at_3 value: 19.467000000000002 - type: precision_at_5 value: 15.98 - type: recall_at_1 value: 5.093 - type: recall_at_10 value: 22.177 - type: recall_at_100 value: 45.842 - type: recall_at_1000 value: 72.598 - type: recall_at_3 value: 11.833 - type: recall_at_5 value: 16.173000000000002 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: 20a6d6f312dd54037fe07a32d58e5e168867909d metrics: - type: cos_sim_pearson value: 73.56535226754596 - type: cos_sim_spearman value: 69.32425977603488 - type: euclidean_pearson value: 71.32425703470898 - type: euclidean_spearman value: 69.32425217267013 - type: manhattan_pearson value: 71.25897281394246 - type: manhattan_spearman value: 69.27132577049578 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 69.66387868726018 - type: cos_sim_spearman value: 67.85470749045027 - type: euclidean_pearson value: 66.62075098063795 - type: euclidean_spearman value: 67.85470749045027 - type: manhattan_pearson value: 66.61455061901262 - type: manhattan_spearman value: 67.87229618498695 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 75.65731331392575 - type: cos_sim_spearman value: 77.48991626780108 - type: euclidean_pearson value: 77.19884738623692 - type: euclidean_spearman value: 77.48985836619045 - type: manhattan_pearson value: 77.0656684243772 - type: manhattan_spearman value: 77.30289226582691 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 69.37003253666457 - type: cos_sim_spearman value: 69.77157648098141 - type: euclidean_pearson value: 69.39543876030432 - type: euclidean_spearman value: 69.77157648098141 - type: manhattan_pearson value: 69.29901600459745 - type: manhattan_spearman value: 69.65074167527128 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 78.56777256540136 - type: cos_sim_spearman value: 80.16458787843023 - type: euclidean_pearson value: 80.16475730686916 - type: euclidean_spearman value: 80.16458787843023 - type: manhattan_pearson value: 80.12814463670401 - type: manhattan_spearman value: 80.1357907984809 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 76.09572350919031 - type: cos_sim_spearman value: 77.94490233429326 - type: euclidean_pearson value: 78.36595251203524 - type: euclidean_spearman value: 77.94490233429326 - type: manhattan_pearson value: 78.41538768125166 - type: manhattan_spearman value: 78.01244379569542 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 80.7843552187951 - type: cos_sim_spearman value: 82.28085055047386 - type: euclidean_pearson value: 82.37373672515267 - type: euclidean_spearman value: 82.28085055047386 - type: manhattan_pearson value: 82.39387241346917 - type: manhattan_spearman value: 82.36503339515906 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 68.29963929962095 - type: cos_sim_spearman value: 67.96868942546051 - type: euclidean_pearson value: 68.93524903869285 - type: euclidean_spearman value: 67.96868942546051 - type: manhattan_pearson value: 68.79144468444811 - type: manhattan_spearman value: 67.69311483884324 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 72.84789696700685 - type: cos_sim_spearman value: 75.67875747588545 - type: euclidean_pearson value: 75.07752300463038 - type: euclidean_spearman value: 75.67875747588545 - type: manhattan_pearson value: 74.97934248140928 - type: manhattan_spearman value: 75.62525644178724 - task: type: STS dataset: name: MTEB STSBenchmarkMultilingualSTS (en) type: PhilipMay/stsb_multi_mt config: en split: test revision: 93d57ef91790589e3ce9c365164337a8a78b7632 metrics: - type: cos_sim_pearson value: 72.84789702519309 - type: cos_sim_spearman value: 75.67875747588545 - type: euclidean_pearson value: 75.07752310061133 - type: euclidean_spearman value: 75.67875747588545 - type: manhattan_pearson value: 74.97934257159595 - type: manhattan_spearman value: 75.62525644178724 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 81.55557720431086 - type: mrr value: 94.91178665198272 - task: type: Retrieval dataset: name: MTEB SciFact type: mteb/scifact config: default split: test revision: 0228b52cf27578f30900b9e5271d331663a030d7 metrics: - type: map_at_1 value: 59.260999999999996 - type: map_at_10 value: 69.36099999999999 - type: map_at_100 value: 69.868 - type: map_at_1000 value: 69.877 - type: map_at_3 value: 66.617 - type: map_at_5 value: 68.061 - type: mrr_at_1 value: 62.333000000000006 - type: mrr_at_10 value: 70.533 - type: mrr_at_100 value: 70.966 - type: mrr_at_1000 value: 70.975 - type: mrr_at_3 value: 68.667 - type: mrr_at_5 value: 69.717 - type: ndcg_at_1 value: 62.333000000000006 - type: ndcg_at_10 value: 73.82300000000001 - type: ndcg_at_100 value: 76.122 - type: ndcg_at_1000 value: 76.374 - type: ndcg_at_3 value: 69.27499999999999 - type: ndcg_at_5 value: 71.33 - type: precision_at_1 value: 62.333000000000006 - type: precision_at_10 value: 9.8 - type: precision_at_100 value: 1.097 - type: precision_at_1000 value: 0.11199999999999999 - type: precision_at_3 value: 26.889000000000003 - type: precision_at_5 value: 17.599999999999998 - type: recall_at_1 value: 59.260999999999996 - type: recall_at_10 value: 86.2 - type: recall_at_100 value: 96.667 - type: recall_at_1000 value: 98.667 - type: recall_at_3 value: 74.006 - type: recall_at_5 value: 79.167 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.81881188118813 - type: cos_sim_ap value: 95.20169041096409 - type: cos_sim_f1 value: 90.76224129227664 - type: cos_sim_precision value: 91.64118246687055 - type: cos_sim_recall value: 89.9 - type: dot_accuracy value: 99.81881188118813 - type: dot_ap value: 95.20169041096409 - type: dot_f1 value: 90.76224129227664 - type: dot_precision value: 91.64118246687055 - type: dot_recall value: 89.9 - type: euclidean_accuracy value: 99.81881188118813 - type: euclidean_ap value: 95.2016904109641 - type: euclidean_f1 value: 90.76224129227664 - type: euclidean_precision value: 91.64118246687055 - type: euclidean_recall value: 89.9 - type: manhattan_accuracy value: 99.81881188118813 - type: manhattan_ap value: 95.22680188132777 - type: manhattan_f1 value: 90.79013588324108 - type: manhattan_precision value: 91.38804457953394 - type: manhattan_recall value: 90.2 - type: max_accuracy value: 99.81881188118813 - type: max_ap value: 95.22680188132777 - type: max_f1 value: 90.79013588324108 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 57.8638628701308 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 37.82028248106046 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 50.870860210170946 - type: mrr value: 51.608084521687466 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 31.60384207444685 - type: cos_sim_spearman value: 30.84047452209471 - type: dot_pearson value: 31.60384104417333 - type: dot_spearman value: 30.84047452209471 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: mteb/trec-covid config: default split: test revision: bb9466bac8153a0349341eb1b22e06409e78ef4e metrics: - type: map_at_1 value: 0.246 - type: map_at_10 value: 2.051 - type: map_at_100 value: 13.129 - type: map_at_1000 value: 31.56 - type: map_at_3 value: 0.681 - type: map_at_5 value: 1.105 - type: mrr_at_1 value: 94 - type: mrr_at_10 value: 97 - type: mrr_at_100 value: 97 - type: mrr_at_1000 value: 97 - type: mrr_at_3 value: 97 - type: mrr_at_5 value: 97 - type: ndcg_at_1 value: 87 - type: ndcg_at_10 value: 80.716 - type: ndcg_at_100 value: 63.83 - type: ndcg_at_1000 value: 56.215 - type: ndcg_at_3 value: 84.531 - type: ndcg_at_5 value: 84.777 - type: precision_at_1 value: 94 - type: precision_at_10 value: 84.6 - type: precision_at_100 value: 66.03999999999999 - type: precision_at_1000 value: 24.878 - type: precision_at_3 value: 88.667 - type: precision_at_5 value: 89.60000000000001 - type: recall_at_1 value: 0.246 - type: recall_at_10 value: 2.2079999999999997 - type: recall_at_100 value: 15.895999999999999 - type: recall_at_1000 value: 52.683 - type: recall_at_3 value: 0.7040000000000001 - type: recall_at_5 value: 1.163 - task: type: Retrieval dataset: name: MTEB Touche2020 type: mteb/touche2020 config: default split: test revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f metrics: - type: map_at_1 value: 3.852 - type: map_at_10 value: 14.316 - type: map_at_100 value: 20.982 - type: map_at_1000 value: 22.58 - type: map_at_3 value: 7.767 - type: map_at_5 value: 10.321 - type: mrr_at_1 value: 51.019999999999996 - type: mrr_at_10 value: 66.365 - type: mrr_at_100 value: 66.522 - type: mrr_at_1000 value: 66.522 - type: mrr_at_3 value: 62.925 - type: mrr_at_5 value: 64.762 - type: ndcg_at_1 value: 46.939 - type: ndcg_at_10 value: 34.516999999999996 - type: ndcg_at_100 value: 44.25 - type: ndcg_at_1000 value: 54.899 - type: ndcg_at_3 value: 40.203 - type: ndcg_at_5 value: 37.004 - type: precision_at_1 value: 51.019999999999996 - type: precision_at_10 value: 29.796 - type: precision_at_100 value: 8.633000000000001 - type: precision_at_1000 value: 1.584 - type: precision_at_3 value: 40.816 - type: precision_at_5 value: 35.918 - type: recall_at_1 value: 3.852 - type: recall_at_10 value: 20.891000000000002 - type: recall_at_100 value: 52.428 - type: recall_at_1000 value: 84.34899999999999 - type: recall_at_3 value: 8.834 - type: recall_at_5 value: 12.909 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de metrics: - type: accuracy value: 64.7092 - type: ap value: 11.972915012305819 - type: f1 value: 49.91050149892115 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 56.737408036219584 - type: f1 value: 57.07235266246011 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 35.9147539025798 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 82.52369315133814 - type: cos_sim_ap value: 62.34858091376534 - type: cos_sim_f1 value: 58.18225190839694 - type: cos_sim_precision value: 53.09098824553766 - type: cos_sim_recall value: 64.35356200527704 - type: dot_accuracy value: 82.52369315133814 - type: dot_ap value: 62.34857753814992 - type: dot_f1 value: 58.18225190839694 - type: dot_precision value: 53.09098824553766 - type: dot_recall value: 64.35356200527704 - type: euclidean_accuracy value: 82.52369315133814 - type: euclidean_ap value: 62.34857756663386 - type: euclidean_f1 value: 58.18225190839694 - type: euclidean_precision value: 53.09098824553766 - type: euclidean_recall value: 64.35356200527704 - type: manhattan_accuracy value: 82.49389044525243 - type: manhattan_ap value: 62.32245347238179 - type: manhattan_f1 value: 58.206309819213054 - type: manhattan_precision value: 52.70704044511021 - type: manhattan_recall value: 64.9868073878628 - type: max_accuracy value: 82.52369315133814 - type: max_ap value: 62.34858091376534 - type: max_f1 value: 58.206309819213054 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 88.34555827220863 - type: cos_sim_ap value: 84.84152481680071 - type: cos_sim_f1 value: 76.860456739428 - type: cos_sim_precision value: 72.21470150263978 - type: cos_sim_recall value: 82.14505697566985 - type: dot_accuracy value: 88.34555827220863 - type: dot_ap value: 84.84152743322608 - type: dot_f1 value: 76.860456739428 - type: dot_precision value: 72.21470150263978 - type: dot_recall value: 82.14505697566985 - type: euclidean_accuracy value: 88.34555827220863 - type: euclidean_ap value: 84.84152589453169 - type: euclidean_f1 value: 76.860456739428 - type: euclidean_precision value: 72.21470150263978 - type: euclidean_recall value: 82.14505697566985 - type: manhattan_accuracy value: 88.38242713548337 - type: manhattan_ap value: 84.8112124970968 - type: manhattan_f1 value: 76.83599206057487 - type: manhattan_precision value: 73.51244900829934 - type: manhattan_recall value: 80.47428395441946 - type: max_accuracy value: 88.38242713548337 - type: max_ap value: 84.84152743322608 - type: max_f1 value: 76.860456739428 - task: type: Clustering dataset: name: MTEB WikiCitiesClustering type: jinaai/cities_wiki_clustering config: default split: test revision: ddc9ee9242fa65332597f70e967ecc38b9d734fa metrics: - type: v_measure value: 85.5314389263015 --- # bcastle/snowflake-arctic-embed-l-Q8_0-GGUF This model was converted to GGUF format from [`Snowflake/snowflake-arctic-embed-l`](https://huggingface.co/Snowflake/snowflake-arctic-embed-l) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Snowflake/snowflake-arctic-embed-l) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo bcastle/snowflake-arctic-embed-l-Q8_0-GGUF --hf-file snowflake-arctic-embed-l-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo bcastle/snowflake-arctic-embed-l-Q8_0-GGUF --hf-file snowflake-arctic-embed-l-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo bcastle/snowflake-arctic-embed-l-Q8_0-GGUF --hf-file snowflake-arctic-embed-l-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo bcastle/snowflake-arctic-embed-l-Q8_0-GGUF --hf-file snowflake-arctic-embed-l-q8_0.gguf -c 2048 ```
[ "BIOSSES", "SCIFACT" ]
baukearends/Echocardiogram-SpanCategorizer-pericardial-effusion
baukearends
token-classification
[ "spacy", "arxiv:2408.06930", "medical", "token-classification", "nl", "license:cc-by-sa-4.0", "model-index", "region:us" ]
2024-08-15T08:14:36Z
2024-08-15T09:06:39+00:00
14
0
--- language: - nl license: cc-by-sa-4.0 metrics: - f1 - precision - recall pipeline_tag: token-classification tags: - spacy - arxiv:2408.06930 - medical model-index: - name: Echocardiogram_SpanCategorizer_pericardial_effusion results: - task: type: token-classification dataset: name: internal test set type: test metrics: - type: f1 value: 0.787 name: Weighted f1 verified: false - type: precision value: 0.894 name: Weighted precision verified: false - type: recall value: 0.703 name: Weighted recall verified: false --- # Description This model is a spaCy SpanCategorizer model trained from scratch on Dutch echocardiogram reports sourced from Electronic Health Records. The publication associated with the span classification task can be found at https://arxiv.org/abs/2408.06930. The config file for training the model can be found at https://github.com/umcu/echolabeler. # Minimum working example ```python !pip install https://huggingface.co/baukearends/Echocardiogram-SpanCategorizer-pericardial-effusion/resolve/main/nl_Echocardiogram_SpanCategorizer_pericardial_efuusion-any-py3-none-any.whl ``` ```python import spacy nlp = spacy.load("nl_Echocardiogram_SpanCategorizer_pericardial_effusion") ``` ```python prediction = nlp("Op dit echo geen duidelijke WMA te zien, goede systolische L.V. functie, wel L.V.H., diastolische dysfunctie graad 1A tot 2. Geringe aortastenose en - matige -insufficientie. Geringe M.I. Geen PE.") for span, score in zip(prediction.spans['sc'], prediction.spans['sc'].attrs['scores']): print(f"Span: {span}, label: {span.label_}, score: {score[0]:.3f}") ``` # Label Scheme <details> <summary>View label scheme (5 labels for 1 components)</summary> | Component | Labels | | --- | --- | | **`spancat`** | `pe_not_present`, `pe_moderate`, `pe_mild`, `pe_severe`, `pe` | </details> # Intended use The model is developed for span classification on Dutch clinical text. Since it is a domain-specific model trained on medical data, it is meant to be used on medical NLP tasks for Dutch. # Data The model was trained on approximately 4,000 manually annotated echocardiogram reports from the University Medical Centre Utrecht. The training data was anonymized before starting the training procedure. | Feature | Description | | --- | --- | | **Name** | `Echocardiogram_SpanCategorizer_pericardial_effusion` | | **Version** | `1.0.0` | | **spaCy** | `>=3.7.4,<3.8.0` | | **Default Pipeline** | `tok2vec`, `spancat` | | **Components** | `tok2vec`, `spancat` | | **License** | `cc-by-sa-4.0` | | **Author** | [Bauke Arends]() | # Contact If you are having problems with this model please add an issue on our git: https://github.com/umcu/echolabeler/issues # Usage If you use the model in your work please use the following referral; https://doi.org/10.48550/arXiv.2408.06930 # References Paper: Bauke Arends, Melle Vessies, Dirk van Osch, Arco Teske, Pim van der Harst, René van Es, Bram van Es (2024): Diagnosis extraction from unstructured Dutch echocardiogram reports using span- and document-level characteristic classification, Arxiv https://arxiv.org/abs/2408.06930
[ "MEDICAL DATA" ]
baukearends/Echocardiogram-SpanCategorizer-aortic-stenosis
baukearends
token-classification
[ "spacy", "arxiv:2408.06930", "medical", "token-classification", "nl", "license:cc-by-sa-4.0", "model-index", "region:us" ]
2024-08-15T08:14:51Z
2024-08-15T08:22:03+00:00
14
0
--- language: - nl license: cc-by-sa-4.0 metrics: - f1 - precision - recall pipeline_tag: token-classification tags: - spacy - arxiv:2408.06930 - medical model-index: - name: Echocardiogram_SpanCategorizer_aortic_stenosis results: - task: type: token-classification dataset: name: internal test set type: test metrics: - type: f1 value: 0.864 name: Weighted f1 verified: false - type: precision value: 0.823 name: Weighted precision verified: false - type: recall value: 0.786 name: Weighted recall verified: false --- # Description This model is a spaCy SpanCategorizer model trained from scratch on Dutch echocardiogram reports sourced from Electronic Health Records. The publication associated with the span classification task can be found at https://arxiv.org/abs/2408.06930. The config file for training the model can be found at https://github.com/umcu/echolabeler. # Minimum working example ```python !pip install https://huggingface.co/baukearends/Echocardiogram-SpanCategorizer-aortic-stenosis/resolve/main/nl_Echocardiogram_SpanCategorizer_aortic_stenosis-any-py3-none-any.whl ``` ```python import spacy nlp = spacy.load("nl_Echocardiogram_SpanCategorizer_aortic_stenosis") ``` ```python prediction = nlp("Op dit echo geen duidelijke WMA te zien, goede systolische L.V. functie, wel L.V.H., diastolische dysfunctie graad 1A tot 2. Geringe aortastenose en - matige -insufficientie. Geringe M.I.") for span, score in zip(prediction.spans['sc'], prediction.spans['sc'].attrs['scores']): print(f"Span: {span}, label: {span.label_}, score: {score[0]:.3f}") ``` # Label Scheme <details> <summary>View label scheme (4 labels for 1 components)</summary> | Component | Labels | | --- | --- | | **`spancat`** | `aortic_valve_native_stenosis_not_present`, `aortic_valve_native_stenosis_mild`, `aortic_valve_native_stenosis_severe`, `aortic_valve_native_stenosis_moderate` | </details> # Intended use The model is developed for span classification on Dutch clinical text. Since it is a domain-specific model trained on medical data, it is meant to be used on medical NLP tasks for Dutch. # Data The model was trained on approximately 4,000 manually annotated echocardiogram reports from the University Medical Centre Utrecht. The training data was anonymized before starting the training procedure. | Feature | Description | | --- | --- | | **Name** | `Echocardiogram_SpanCategorizer_aortic_stenosis` | | **Version** | `1.0.0` | | **spaCy** | `>=3.7.4,<3.8.0` | | **Default Pipeline** | `tok2vec`, `spancat` | | **Components** | `tok2vec`, `spancat` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | n/a | | **License** | `cc-by-sa-4.0` | | **Author** | [Bauke Arends]() | # Contact If you are having problems with this model please add an issue on our git: https://github.com/umcu/echolabeler/issues # Usage If you use the model in your work please use the following referral; https://doi.org/10.48550/arXiv.2408.06930 # References Paper: Bauke Arends, Melle Vessies, Dirk van Osch, Arco Teske, Pim van der Harst, René van Es, Bram van Es (2024): Diagnosis extraction from unstructured Dutch echocardiogram reports using span- and document-level characteristic classification, Arxiv https://arxiv.org/abs/2408.06930
[ "MEDICAL DATA" ]
baukearends/Echocardiogram-SpanCategorizer-rv-dil
baukearends
token-classification
[ "spacy", "arxiv:2408.06930", "medical", "token-classification", "nl", "license:cc-by-sa-4.0", "model-index", "region:us" ]
2024-08-15T08:14:58Z
2024-08-15T08:32:28+00:00
14
0
--- language: - nl license: cc-by-sa-4.0 metrics: - f1 - precision - recall pipeline_tag: token-classification tags: - spacy - arxiv:2408.06930 - medical model-index: - name: Echocardiogram_SpanCategorizer_rv_dil results: - task: type: token-classification dataset: name: internal test set type: test metrics: - type: f1 value: 0.901 name: Weighted f1 verified: false - type: precision value: 0.926 name: Weighted precision verified: false - type: recall value: 0.877 name: Weighted recall verified: false --- # Description This model is a spaCy SpanCategorizer model trained from scratch on Dutch echocardiogram reports sourced from Electronic Health Records. The publication associated with the span classification task can be found at https://arxiv.org/abs/2408.06930. The config file for training the model can be found at https://github.com/umcu/echolabeler. # Minimum working example ```python !pip install https://huggingface.co/baukearends/Echocardiogram-SpanCategorizer-rv-dil/resolve/main/nl_Echocardiogram_SpanCategorizer_rv_dil-any-py3-none-any.whl ``` ```python import spacy nlp = spacy.load("nl_Echocardiogram_SpanCategorizer_rv_dil") ``` ```python prediction = nlp("Op dit echo geen duidelijke WMA te zien, goede systolische L.V. functie, normale dimensies LV en RV, wel L.V.H., diastolische dysfunctie graad 1A tot 2. Geringe aortastenose en - matige -insufficientie. Geringe M.I.") for span, score in zip(prediction.spans['sc'], prediction.spans['sc'].attrs['scores']): print(f"Span: {span}, label: {span.label_}, score: {score[0]:.3f}") ``` # Label Scheme <details> <summary>View label scheme (5 labels for 1 components)</summary> | Component | Labels | | --- | --- | | **`spancat`** | `rv_dil_normal`, `rv_dil_severe`, `rv_dil_mild`, `rv_dil_moderate`, `rv_dil_present` | </details> # Intended use The model is developed for span classification on Dutch clinical text. Since it is a domain-specific model trained on medical data, it is meant to be used on medical NLP tasks for Dutch. # Data The model was trained on approximately 4,000 manually annotated echocardiogram reports from the University Medical Centre Utrecht. The training data was anonymized before starting the training procedure. | Feature | Description | | --- | --- | | **Name** | `Echocardiogram_SpanCategorizer_rv_dil` | | **Version** | `1.0.0` | | **spaCy** | `>=3.7.4,<3.8.0` | | **Default Pipeline** | `tok2vec`, `spancat` | | **Components** | `tok2vec`, `spancat` | | **License** | `cc-by-sa-4.0` | | **Author** | [Bauke Arends]() | # Contact If you are having problems with this model please add an issue on our git: https://github.com/umcu/echolabeler/issues # Usage If you use the model in your work please use the following referral; https://doi.org/10.48550/arXiv.2408.06930 # References Paper: Bauke Arends, Melle Vessies, Dirk van Osch, Arco Teske, Pim van der Harst, René van Es, Bram van Es (2024): Diagnosis extraction from unstructured Dutch echocardiogram reports using span- and document-level characteristic classification, Arxiv https://arxiv.org/abs/2408.06930
[ "MEDICAL DATA" ]
KeyurRamoliya/multilingual-e5-large-GGUF
KeyurRamoliya
feature-extraction
[ "sentence-transformers", "gguf", "mteb", "Sentence Transformers", "sentence-similarity", "feature-extraction", "llama-cpp", "gguf-my-repo", "multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "hu", "hy", "id", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "om", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sa", "sd", "si", "sk", "sl", "so", "sq", "sr", "su", "sv", "sw", "ta", "te", "th", "tl", "tr", "ug", "uk", "ur", "uz", "vi", "xh", "yi", "zh", "base_model:intfloat/multilingual-e5-large", "base_model:quantized:intfloat/multilingual-e5-large", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-08-23T05:30:37Z
2024-08-23T05:30:43+00:00
14
1
--- base_model: intfloat/multilingual-e5-large language: - multilingual - af - am - ar - as - az - be - bg - bn - br - bs - ca - cs - cy - da - de - el - en - eo - es - et - eu - fa - fi - fr - fy - ga - gd - gl - gu - ha - he - hi - hr - hu - hy - id - is - it - ja - jv - ka - kk - km - kn - ko - ku - ky - la - lo - lt - lv - mg - mk - ml - mn - mr - ms - my - ne - nl - 'no' - om - or - pa - pl - ps - pt - ro - ru - sa - sd - si - sk - sl - so - sq - sr - su - sv - sw - ta - te - th - tl - tr - ug - uk - ur - uz - vi - xh - yi - zh license: mit tags: - mteb - Sentence Transformers - sentence-similarity - feature-extraction - sentence-transformers - llama-cpp - gguf-my-repo model-index: - name: multilingual-e5-large results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 79.05970149253731 - type: ap value: 43.486574390835635 - type: f1 value: 73.32700092140148 - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (de) type: mteb/amazon_counterfactual config: de split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 71.22055674518201 - type: ap value: 81.55756710830498 - type: f1 value: 69.28271787752661 - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en-ext) type: mteb/amazon_counterfactual config: en-ext split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 80.41979010494754 - type: ap value: 29.34879922376344 - type: f1 value: 67.62475449011278 - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (ja) type: mteb/amazon_counterfactual config: ja split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 77.8372591006424 - type: ap value: 26.557560591210738 - type: f1 value: 64.96619417368707 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 93.489875 - type: ap value: 90.98758636917603 - type: f1 value: 93.48554819717332 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 47.564 - type: f1 value: 46.75122173518047 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (de) type: mteb/amazon_reviews_multi config: de split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 45.400000000000006 - type: f1 value: 44.17195682400632 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (es) type: mteb/amazon_reviews_multi config: es split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 43.068 - type: f1 value: 42.38155696855596 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (fr) type: mteb/amazon_reviews_multi config: fr split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 41.89 - type: f1 value: 40.84407321682663 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (ja) type: mteb/amazon_reviews_multi config: ja split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 40.120000000000005 - type: f1 value: 39.522976223819114 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (zh) type: mteb/amazon_reviews_multi config: zh split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 38.832 - type: f1 value: 38.0392533394713 - task: type: Retrieval dataset: name: MTEB ArguAna type: arguana config: default split: test revision: None metrics: - type: map_at_1 value: 30.725 - type: map_at_10 value: 46.055 - type: map_at_100 value: 46.900999999999996 - type: map_at_1000 value: 46.911 - type: map_at_3 value: 41.548 - type: map_at_5 value: 44.297 - type: mrr_at_1 value: 31.152 - type: mrr_at_10 value: 46.231 - type: mrr_at_100 value: 47.07 - type: mrr_at_1000 value: 47.08 - type: mrr_at_3 value: 41.738 - type: mrr_at_5 value: 44.468999999999994 - type: ndcg_at_1 value: 30.725 - type: ndcg_at_10 value: 54.379999999999995 - type: ndcg_at_100 value: 58.138 - type: ndcg_at_1000 value: 58.389 - type: ndcg_at_3 value: 45.156 - type: ndcg_at_5 value: 50.123 - type: precision_at_1 value: 30.725 - type: precision_at_10 value: 8.087 - type: precision_at_100 value: 0.9769999999999999 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 18.54 - type: precision_at_5 value: 13.542000000000002 - type: recall_at_1 value: 30.725 - type: recall_at_10 value: 80.868 - type: recall_at_100 value: 97.653 - type: recall_at_1000 value: 99.57300000000001 - type: recall_at_3 value: 55.619 - type: recall_at_5 value: 67.71000000000001 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 44.30960650674069 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 38.427074197498996 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 60.28270056031872 - type: mrr value: 74.38332673789738 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 84.05942144105269 - type: cos_sim_spearman value: 82.51212105850809 - type: euclidean_pearson value: 81.95639829909122 - type: euclidean_spearman value: 82.3717564144213 - type: manhattan_pearson value: 81.79273425468256 - type: manhattan_spearman value: 82.20066817871039 - task: type: BitextMining dataset: name: MTEB BUCC (de-en) type: mteb/bucc-bitext-mining config: de-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 99.46764091858039 - type: f1 value: 99.37717466945023 - type: precision value: 99.33194154488518 - type: recall value: 99.46764091858039 - task: type: BitextMining dataset: name: MTEB BUCC (fr-en) type: mteb/bucc-bitext-mining config: fr-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 98.29407880255337 - type: f1 value: 98.11248073959938 - type: precision value: 98.02443319392472 - type: recall value: 98.29407880255337 - task: type: BitextMining dataset: name: MTEB BUCC (ru-en) type: mteb/bucc-bitext-mining config: ru-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 97.79009352268791 - type: f1 value: 97.5176076665512 - type: precision value: 97.38136473848286 - type: recall value: 97.79009352268791 - task: type: BitextMining dataset: name: MTEB BUCC (zh-en) type: mteb/bucc-bitext-mining config: zh-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 99.26276987888363 - type: f1 value: 99.20133403545726 - type: precision value: 99.17500438827453 - type: recall value: 99.26276987888363 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 84.72727272727273 - type: f1 value: 84.67672206031433 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 35.34220182511161 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 33.4987096128766 - task: type: Retrieval dataset: name: MTEB CQADupstackRetrieval type: BeIR/cqadupstack config: default split: test revision: None metrics: - type: map_at_1 value: 25.558249999999997 - type: map_at_10 value: 34.44425000000001 - type: map_at_100 value: 35.59833333333333 - type: map_at_1000 value: 35.706916666666665 - type: map_at_3 value: 31.691749999999995 - type: map_at_5 value: 33.252916666666664 - type: mrr_at_1 value: 30.252666666666666 - type: mrr_at_10 value: 38.60675 - type: mrr_at_100 value: 39.42666666666666 - type: mrr_at_1000 value: 39.48408333333334 - type: mrr_at_3 value: 36.17441666666665 - type: mrr_at_5 value: 37.56275 - type: ndcg_at_1 value: 30.252666666666666 - type: ndcg_at_10 value: 39.683 - type: ndcg_at_100 value: 44.68541666666667 - type: ndcg_at_1000 value: 46.94316666666668 - type: ndcg_at_3 value: 34.961749999999995 - type: ndcg_at_5 value: 37.215666666666664 - type: precision_at_1 value: 30.252666666666666 - type: precision_at_10 value: 6.904166666666667 - type: precision_at_100 value: 1.0989999999999995 - type: precision_at_1000 value: 0.14733333333333334 - type: precision_at_3 value: 16.037666666666667 - type: precision_at_5 value: 11.413583333333333 - type: recall_at_1 value: 25.558249999999997 - type: recall_at_10 value: 51.13341666666666 - type: recall_at_100 value: 73.08366666666667 - type: recall_at_1000 value: 88.79483333333334 - type: recall_at_3 value: 37.989083333333326 - type: recall_at_5 value: 43.787833333333325 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: climate-fever config: default split: test revision: None metrics: - type: map_at_1 value: 10.338 - type: map_at_10 value: 18.360000000000003 - type: map_at_100 value: 19.942 - type: map_at_1000 value: 20.134 - type: map_at_3 value: 15.174000000000001 - type: map_at_5 value: 16.830000000000002 - type: mrr_at_1 value: 23.257 - type: mrr_at_10 value: 33.768 - type: mrr_at_100 value: 34.707 - type: mrr_at_1000 value: 34.766000000000005 - type: mrr_at_3 value: 30.977 - type: mrr_at_5 value: 32.528 - type: ndcg_at_1 value: 23.257 - type: ndcg_at_10 value: 25.733 - type: ndcg_at_100 value: 32.288 - type: ndcg_at_1000 value: 35.992000000000004 - type: ndcg_at_3 value: 20.866 - type: ndcg_at_5 value: 22.612 - type: precision_at_1 value: 23.257 - type: precision_at_10 value: 8.124 - type: precision_at_100 value: 1.518 - type: precision_at_1000 value: 0.219 - type: precision_at_3 value: 15.679000000000002 - type: precision_at_5 value: 12.117 - type: recall_at_1 value: 10.338 - type: recall_at_10 value: 31.154 - type: recall_at_100 value: 54.161 - type: recall_at_1000 value: 75.21900000000001 - type: recall_at_3 value: 19.427 - type: recall_at_5 value: 24.214 - task: type: Retrieval dataset: name: MTEB DBPedia type: dbpedia-entity config: default split: test revision: None metrics: - type: map_at_1 value: 8.498 - type: map_at_10 value: 19.103 - type: map_at_100 value: 27.375 - type: map_at_1000 value: 28.981 - type: map_at_3 value: 13.764999999999999 - type: map_at_5 value: 15.950000000000001 - type: mrr_at_1 value: 65.5 - type: mrr_at_10 value: 74.53800000000001 - type: mrr_at_100 value: 74.71799999999999 - type: mrr_at_1000 value: 74.725 - type: mrr_at_3 value: 72.792 - type: mrr_at_5 value: 73.554 - type: ndcg_at_1 value: 53.37499999999999 - type: ndcg_at_10 value: 41.286 - type: ndcg_at_100 value: 45.972 - type: ndcg_at_1000 value: 53.123 - type: ndcg_at_3 value: 46.172999999999995 - type: ndcg_at_5 value: 43.033 - type: precision_at_1 value: 65.5 - type: precision_at_10 value: 32.725 - type: precision_at_100 value: 10.683 - type: precision_at_1000 value: 1.978 - type: precision_at_3 value: 50 - type: precision_at_5 value: 41.349999999999994 - type: recall_at_1 value: 8.498 - type: recall_at_10 value: 25.070999999999998 - type: recall_at_100 value: 52.383 - type: recall_at_1000 value: 74.91499999999999 - type: recall_at_3 value: 15.207999999999998 - type: recall_at_5 value: 18.563 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 46.5 - type: f1 value: 41.93833713984145 - task: type: Retrieval dataset: name: MTEB FEVER type: fever config: default split: test revision: None metrics: - type: map_at_1 value: 67.914 - type: map_at_10 value: 78.10000000000001 - type: map_at_100 value: 78.333 - type: map_at_1000 value: 78.346 - type: map_at_3 value: 76.626 - type: map_at_5 value: 77.627 - type: mrr_at_1 value: 72.74199999999999 - type: mrr_at_10 value: 82.414 - type: mrr_at_100 value: 82.511 - type: mrr_at_1000 value: 82.513 - type: mrr_at_3 value: 81.231 - type: mrr_at_5 value: 82.065 - type: ndcg_at_1 value: 72.74199999999999 - type: ndcg_at_10 value: 82.806 - type: ndcg_at_100 value: 83.677 - type: ndcg_at_1000 value: 83.917 - type: ndcg_at_3 value: 80.305 - type: ndcg_at_5 value: 81.843 - type: precision_at_1 value: 72.74199999999999 - type: precision_at_10 value: 10.24 - type: precision_at_100 value: 1.089 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 31.268 - type: precision_at_5 value: 19.706000000000003 - type: recall_at_1 value: 67.914 - type: recall_at_10 value: 92.889 - type: recall_at_100 value: 96.42699999999999 - type: recall_at_1000 value: 97.92 - type: recall_at_3 value: 86.21 - type: recall_at_5 value: 90.036 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: fiqa config: default split: test revision: None metrics: - type: map_at_1 value: 22.166 - type: map_at_10 value: 35.57 - type: map_at_100 value: 37.405 - type: map_at_1000 value: 37.564 - type: map_at_3 value: 30.379 - type: map_at_5 value: 33.324 - type: mrr_at_1 value: 43.519000000000005 - type: mrr_at_10 value: 51.556000000000004 - type: mrr_at_100 value: 52.344 - type: mrr_at_1000 value: 52.373999999999995 - type: mrr_at_3 value: 48.868 - type: mrr_at_5 value: 50.319 - type: ndcg_at_1 value: 43.519000000000005 - type: ndcg_at_10 value: 43.803 - type: ndcg_at_100 value: 50.468999999999994 - type: ndcg_at_1000 value: 53.111 - type: ndcg_at_3 value: 38.893 - type: ndcg_at_5 value: 40.653 - type: precision_at_1 value: 43.519000000000005 - type: precision_at_10 value: 12.253 - type: precision_at_100 value: 1.931 - type: precision_at_1000 value: 0.242 - type: precision_at_3 value: 25.617 - type: precision_at_5 value: 19.383 - type: recall_at_1 value: 22.166 - type: recall_at_10 value: 51.6 - type: recall_at_100 value: 76.574 - type: recall_at_1000 value: 92.192 - type: recall_at_3 value: 34.477999999999994 - type: recall_at_5 value: 41.835 - task: type: Retrieval dataset: name: MTEB HotpotQA type: hotpotqa config: default split: test revision: None metrics: - type: map_at_1 value: 39.041 - type: map_at_10 value: 62.961999999999996 - type: map_at_100 value: 63.79899999999999 - type: map_at_1000 value: 63.854 - type: map_at_3 value: 59.399 - type: map_at_5 value: 61.669 - type: mrr_at_1 value: 78.082 - type: mrr_at_10 value: 84.321 - type: mrr_at_100 value: 84.49600000000001 - type: mrr_at_1000 value: 84.502 - type: mrr_at_3 value: 83.421 - type: mrr_at_5 value: 83.977 - type: ndcg_at_1 value: 78.082 - type: ndcg_at_10 value: 71.229 - type: ndcg_at_100 value: 74.10900000000001 - type: ndcg_at_1000 value: 75.169 - type: ndcg_at_3 value: 66.28699999999999 - type: ndcg_at_5 value: 69.084 - type: precision_at_1 value: 78.082 - type: precision_at_10 value: 14.993 - type: precision_at_100 value: 1.7239999999999998 - type: precision_at_1000 value: 0.186 - type: precision_at_3 value: 42.737 - type: precision_at_5 value: 27.843 - type: recall_at_1 value: 39.041 - type: recall_at_10 value: 74.96300000000001 - type: recall_at_100 value: 86.199 - type: recall_at_1000 value: 93.228 - type: recall_at_3 value: 64.105 - type: recall_at_5 value: 69.608 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 90.23160000000001 - type: ap value: 85.5674856808308 - type: f1 value: 90.18033354786317 - task: type: Retrieval dataset: name: MTEB MSMARCO type: msmarco config: default split: dev revision: None metrics: - type: map_at_1 value: 24.091 - type: map_at_10 value: 36.753 - type: map_at_100 value: 37.913000000000004 - type: map_at_1000 value: 37.958999999999996 - type: map_at_3 value: 32.818999999999996 - type: map_at_5 value: 35.171 - type: mrr_at_1 value: 24.742 - type: mrr_at_10 value: 37.285000000000004 - type: mrr_at_100 value: 38.391999999999996 - type: mrr_at_1000 value: 38.431 - type: mrr_at_3 value: 33.440999999999995 - type: mrr_at_5 value: 35.75 - type: ndcg_at_1 value: 24.742 - type: ndcg_at_10 value: 43.698 - type: ndcg_at_100 value: 49.145 - type: ndcg_at_1000 value: 50.23800000000001 - type: ndcg_at_3 value: 35.769 - type: ndcg_at_5 value: 39.961999999999996 - type: precision_at_1 value: 24.742 - type: precision_at_10 value: 6.7989999999999995 - type: precision_at_100 value: 0.95 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 15.096000000000002 - type: precision_at_5 value: 11.183 - type: recall_at_1 value: 24.091 - type: recall_at_10 value: 65.068 - type: recall_at_100 value: 89.899 - type: recall_at_1000 value: 98.16 - type: recall_at_3 value: 43.68 - type: recall_at_5 value: 53.754999999999995 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 93.66621067031465 - type: f1 value: 93.49622853272142 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (de) type: mteb/mtop_domain config: de split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 91.94702733164272 - type: f1 value: 91.17043441745282 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (es) type: mteb/mtop_domain config: es split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 92.20146764509674 - type: f1 value: 91.98359080555608 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (fr) type: mteb/mtop_domain config: fr split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 88.99780770435328 - type: f1 value: 89.19746342724068 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (hi) type: mteb/mtop_domain config: hi split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 89.78486912871998 - type: f1 value: 89.24578823628642 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (th) type: mteb/mtop_domain config: th split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 88.74502712477394 - type: f1 value: 89.00297573881542 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 77.9046967624259 - type: f1 value: 59.36787125785957 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (de) type: mteb/mtop_intent config: de split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 74.5280360664976 - type: f1 value: 57.17723440888718 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (es) type: mteb/mtop_intent config: es split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 75.44029352901934 - type: f1 value: 54.052855531072964 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (fr) type: mteb/mtop_intent config: fr split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 70.5606013153774 - type: f1 value: 52.62215934386531 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (hi) type: mteb/mtop_intent config: hi split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 73.11581211903908 - type: f1 value: 52.341291845645465 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (th) type: mteb/mtop_intent config: th split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 74.28933092224233 - type: f1 value: 57.07918745504911 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (af) type: mteb/amazon_massive_intent config: af split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 62.38063214525892 - type: f1 value: 59.46463723443009 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (am) type: mteb/amazon_massive_intent config: am split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 56.06926698049766 - type: f1 value: 52.49084283283562 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ar) type: mteb/amazon_massive_intent config: ar split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 60.74983187626093 - type: f1 value: 56.960640620165904 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (az) type: mteb/amazon_massive_intent config: az split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 64.86550100874243 - type: f1 value: 62.47370548140688 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (bn) type: mteb/amazon_massive_intent config: bn split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 63.971082716879636 - type: f1 value: 61.03812421957381 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (cy) type: mteb/amazon_massive_intent config: cy split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 54.98318762609282 - type: f1 value: 51.51207916008392 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (da) type: mteb/amazon_massive_intent config: da split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.45527908540686 - type: f1 value: 66.16631905400318 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (de) type: mteb/amazon_massive_intent config: de split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.32750504371216 - type: f1 value: 66.16755288646591 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (el) type: mteb/amazon_massive_intent config: el split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.09213180901143 - type: f1 value: 66.95654394661507 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 73.75588433086752 - type: f1 value: 71.79973779656923 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (es) type: mteb/amazon_massive_intent config: es split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.49428379287154 - type: f1 value: 68.37494379215734 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (fa) type: mteb/amazon_massive_intent config: fa split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.90921318090115 - type: f1 value: 66.79517376481645 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (fi) type: mteb/amazon_massive_intent config: fi split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.12104909213181 - type: f1 value: 67.29448842879584 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (fr) type: mteb/amazon_massive_intent config: fr split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.34095494283793 - type: f1 value: 67.01134288992947 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (he) type: mteb/amazon_massive_intent config: he split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 67.61264290517822 - type: f1 value: 64.68730512660757 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (hi) type: mteb/amazon_massive_intent config: hi split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 67.79757901815738 - type: f1 value: 65.24938539425598 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (hu) type: mteb/amazon_massive_intent config: hu split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.68728984532616 - type: f1 value: 67.0487169762553 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (hy) type: mteb/amazon_massive_intent config: hy split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 62.07464694014795 - type: f1 value: 59.183532276789286 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (id) type: mteb/amazon_massive_intent config: id split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.04707464694015 - type: f1 value: 67.66829629003848 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (is) type: mteb/amazon_massive_intent config: is split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 62.42434431741762 - type: f1 value: 59.01617226544757 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (it) type: mteb/amazon_massive_intent config: it split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.53127101546738 - type: f1 value: 68.10033760906255 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ja) type: mteb/amazon_massive_intent config: ja split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 72.50504371217215 - type: f1 value: 69.74931103158923 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (jv) type: mteb/amazon_massive_intent config: jv split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 57.91190316072628 - type: f1 value: 54.05551136648796 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ka) type: mteb/amazon_massive_intent config: ka split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 51.78211163416275 - type: f1 value: 49.874888544058535 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (km) type: mteb/amazon_massive_intent config: km split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 47.017484868863484 - type: f1 value: 44.53364263352014 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (kn) type: mteb/amazon_massive_intent config: kn split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 62.16207128446537 - type: f1 value: 59.01185692320829 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ko) type: mteb/amazon_massive_intent config: ko split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.42501681237391 - type: f1 value: 67.13169450166086 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (lv) type: mteb/amazon_massive_intent config: lv split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 67.0780094149294 - type: f1 value: 64.41720167850707 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ml) type: mteb/amazon_massive_intent config: ml split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 65.57162071284466 - type: f1 value: 62.414138683804424 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (mn) type: mteb/amazon_massive_intent config: mn split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 61.71149966375252 - type: f1 value: 58.594805125087234 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ms) type: mteb/amazon_massive_intent config: ms split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 66.03900470746471 - type: f1 value: 63.87937257883887 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (my) type: mteb/amazon_massive_intent config: my split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 60.8776059179556 - type: f1 value: 57.48587618059131 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (nb) type: mteb/amazon_massive_intent config: nb split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.87895090786819 - type: f1 value: 66.8141299430347 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (nl) type: mteb/amazon_massive_intent config: nl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.45057162071285 - type: f1 value: 67.46444039673516 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (pl) type: mteb/amazon_massive_intent config: pl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.546738399462 - type: f1 value: 68.63640876702655 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (pt) type: mteb/amazon_massive_intent config: pt split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.72965702757229 - type: f1 value: 68.54119560379115 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ro) type: mteb/amazon_massive_intent config: ro split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 68.35574983187625 - type: f1 value: 65.88844917691927 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ru) type: mteb/amazon_massive_intent config: ru split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.70477471418964 - type: f1 value: 69.19665697061978 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (sl) type: mteb/amazon_massive_intent config: sl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 67.0880968392737 - type: f1 value: 64.76962317666086 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (sq) type: mteb/amazon_massive_intent config: sq split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 65.18493611297916 - type: f1 value: 62.49984559035371 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (sv) type: mteb/amazon_massive_intent config: sv split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.75857431069265 - type: f1 value: 69.20053687623418 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (sw) type: mteb/amazon_massive_intent config: sw split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 58.500336247478145 - type: f1 value: 55.2972398687929 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ta) type: mteb/amazon_massive_intent config: ta split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 62.68997982515132 - type: f1 value: 59.36848202755348 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (te) type: mteb/amazon_massive_intent config: te split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 63.01950235373235 - type: f1 value: 60.09351954625423 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (th) type: mteb/amazon_massive_intent config: th split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 68.29186281102892 - type: f1 value: 67.57860496703447 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (tl) type: mteb/amazon_massive_intent config: tl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 64.77471418964357 - type: f1 value: 61.913983147713836 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (tr) type: mteb/amazon_massive_intent config: tr split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.87222595830532 - type: f1 value: 66.03679033708141 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ur) type: mteb/amazon_massive_intent config: ur split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 64.04505716207127 - type: f1 value: 61.28569169817908 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (vi) type: mteb/amazon_massive_intent config: vi split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.38466711499663 - type: f1 value: 67.20532357036844 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (zh-CN) type: mteb/amazon_massive_intent config: zh-CN split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.12306657700067 - type: f1 value: 68.91251226588182 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (zh-TW) type: mteb/amazon_massive_intent config: zh-TW split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 66.20040349697378 - type: f1 value: 66.02657347714175 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (af) type: mteb/amazon_massive_scenario config: af split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 68.73907195696032 - type: f1 value: 66.98484521791418 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (am) type: mteb/amazon_massive_scenario config: am split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 60.58843308675185 - type: f1 value: 58.95591723092005 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ar) type: mteb/amazon_massive_scenario config: ar split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 66.22730329522528 - type: f1 value: 66.0894499712115 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (az) type: mteb/amazon_massive_scenario config: az split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 66.48285137861465 - type: f1 value: 65.21963176785157 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (bn) type: mteb/amazon_massive_scenario config: bn split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 67.74714189643578 - type: f1 value: 66.8212192745412 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (cy) type: mteb/amazon_massive_scenario config: cy split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 59.09213180901143 - type: f1 value: 56.70735546356339 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (da) type: mteb/amazon_massive_scenario config: da split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 75.05716207128448 - type: f1 value: 74.8413712365364 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (de) type: mteb/amazon_massive_scenario config: de split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.69737726967047 - type: f1 value: 74.7664341963 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (el) type: mteb/amazon_massive_scenario config: el split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.90383322125084 - type: f1 value: 73.59201554448323 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 77.51176866173503 - type: f1 value: 77.46104434577758 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (es) type: mteb/amazon_massive_scenario config: es split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.31069266980496 - type: f1 value: 74.61048660675635 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (fa) type: mteb/amazon_massive_scenario config: fa split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 72.95225285810356 - type: f1 value: 72.33160006574627 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (fi) type: mteb/amazon_massive_scenario config: fi split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.12373907195696 - type: f1 value: 73.20921012557481 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (fr) type: mteb/amazon_massive_scenario config: fr split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.86684599865501 - type: f1 value: 73.82348774610831 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (he) type: mteb/amazon_massive_scenario config: he split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.40215198386012 - type: f1 value: 71.11945183971858 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (hi) type: mteb/amazon_massive_scenario config: hi split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 72.12844653665098 - type: f1 value: 71.34450495911766 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (hu) type: mteb/amazon_massive_scenario config: hu split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.52252858103566 - type: f1 value: 73.98878711342999 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (hy) type: mteb/amazon_massive_scenario config: hy split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 64.93611297915265 - type: f1 value: 63.723200467653385 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (id) type: mteb/amazon_massive_scenario config: id split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.11903160726295 - type: f1 value: 73.82138439467096 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (is) type: mteb/amazon_massive_scenario config: is split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 67.15198386012105 - type: f1 value: 66.02172193802167 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (it) type: mteb/amazon_massive_scenario config: it split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.32414256893072 - type: f1 value: 74.30943421170574 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ja) type: mteb/amazon_massive_scenario config: ja split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 77.46805648957633 - type: f1 value: 77.62808409298209 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (jv) type: mteb/amazon_massive_scenario config: jv split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 63.318762609280434 - type: f1 value: 62.094284066075076 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ka) type: mteb/amazon_massive_scenario config: ka split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 58.34902488231338 - type: f1 value: 57.12893860987984 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (km) type: mteb/amazon_massive_scenario config: km split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 50.88433086751849 - type: f1 value: 48.2272350802058 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (kn) type: mteb/amazon_massive_scenario config: kn split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 66.4425016812374 - type: f1 value: 64.61463095996173 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ko) type: mteb/amazon_massive_scenario config: ko split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 75.04707464694015 - type: f1 value: 75.05099199098998 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (lv) type: mteb/amazon_massive_scenario config: lv split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 70.50437121721586 - type: f1 value: 69.83397721096314 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ml) type: mteb/amazon_massive_scenario config: ml split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 69.94283792871553 - type: f1 value: 68.8704663703913 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (mn) type: mteb/amazon_massive_scenario config: mn split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 64.79488903833222 - type: f1 value: 63.615424063345436 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ms) type: mteb/amazon_massive_scenario config: ms split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 69.88231338264963 - type: f1 value: 68.57892302593237 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (my) type: mteb/amazon_massive_scenario config: my split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 63.248150638870214 - type: f1 value: 61.06680605338809 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (nb) type: mteb/amazon_massive_scenario config: nb split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.84196368527236 - type: f1 value: 74.52566464968763 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (nl) type: mteb/amazon_massive_scenario config: nl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.8285137861466 - type: f1 value: 74.8853197608802 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (pl) type: mteb/amazon_massive_scenario config: pl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.13248150638869 - type: f1 value: 74.3982040999179 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (pt) type: mteb/amazon_massive_scenario config: pt split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.49024882313383 - type: f1 value: 73.82153848368573 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ro) type: mteb/amazon_massive_scenario config: ro split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.72158708809684 - type: f1 value: 71.85049433180541 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ru) type: mteb/amazon_massive_scenario config: ru split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 75.137861466039 - type: f1 value: 75.37628348188467 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sl) type: mteb/amazon_massive_scenario config: sl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.86953597848016 - type: f1 value: 71.87537624521661 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sq) type: mteb/amazon_massive_scenario config: sq split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 70.27572293207801 - type: f1 value: 68.80017302344231 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sv) type: mteb/amazon_massive_scenario config: sv split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 76.09952925353059 - type: f1 value: 76.07992707688408 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sw) type: mteb/amazon_massive_scenario config: sw split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 63.140551445864155 - type: f1 value: 61.73855010331415 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ta) type: mteb/amazon_massive_scenario config: ta split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 66.27774041694687 - type: f1 value: 64.83664868894539 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (te) type: mteb/amazon_massive_scenario config: te split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 66.69468728984533 - type: f1 value: 64.76239666920868 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (th) type: mteb/amazon_massive_scenario config: th split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.44653665097512 - type: f1 value: 73.14646052013873 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (tl) type: mteb/amazon_massive_scenario config: tl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 67.71351714862139 - type: f1 value: 66.67212180163382 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (tr) type: mteb/amazon_massive_scenario config: tr split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.9946200403497 - type: f1 value: 73.87348793725525 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ur) type: mteb/amazon_massive_scenario config: ur split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 68.15400134498992 - type: f1 value: 67.09433241421094 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (vi) type: mteb/amazon_massive_scenario config: vi split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.11365164761264 - type: f1 value: 73.59502539433753 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (zh-CN) type: mteb/amazon_massive_scenario config: zh-CN split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 76.82582380632145 - type: f1 value: 76.89992945316313 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (zh-TW) type: mteb/amazon_massive_scenario config: zh-TW split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.81237390719569 - type: f1 value: 72.36499770986265 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 31.480506569594695 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 29.71252128004552 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 31.421396787056548 - type: mrr value: 32.48155274872267 - task: type: Retrieval dataset: name: MTEB NFCorpus type: nfcorpus config: default split: test revision: None metrics: - type: map_at_1 value: 5.595 - type: map_at_10 value: 12.642000000000001 - type: map_at_100 value: 15.726 - type: map_at_1000 value: 17.061999999999998 - type: map_at_3 value: 9.125 - type: map_at_5 value: 10.866000000000001 - type: mrr_at_1 value: 43.344 - type: mrr_at_10 value: 52.227999999999994 - type: mrr_at_100 value: 52.898999999999994 - type: mrr_at_1000 value: 52.944 - type: mrr_at_3 value: 49.845 - type: mrr_at_5 value: 51.115 - type: ndcg_at_1 value: 41.949999999999996 - type: ndcg_at_10 value: 33.995 - type: ndcg_at_100 value: 30.869999999999997 - type: ndcg_at_1000 value: 39.487 - type: ndcg_at_3 value: 38.903999999999996 - type: ndcg_at_5 value: 37.236999999999995 - type: precision_at_1 value: 43.344 - type: precision_at_10 value: 25.480000000000004 - type: precision_at_100 value: 7.672 - type: precision_at_1000 value: 2.028 - type: precision_at_3 value: 36.636 - type: precision_at_5 value: 32.632 - type: recall_at_1 value: 5.595 - type: recall_at_10 value: 16.466 - type: recall_at_100 value: 31.226 - type: recall_at_1000 value: 62.778999999999996 - type: recall_at_3 value: 9.931 - type: recall_at_5 value: 12.884 - task: type: Retrieval dataset: name: MTEB NQ type: nq config: default split: test revision: None metrics: - type: map_at_1 value: 40.414 - type: map_at_10 value: 56.754000000000005 - type: map_at_100 value: 57.457 - type: map_at_1000 value: 57.477999999999994 - type: map_at_3 value: 52.873999999999995 - type: map_at_5 value: 55.175 - type: mrr_at_1 value: 45.278 - type: mrr_at_10 value: 59.192 - type: mrr_at_100 value: 59.650000000000006 - type: mrr_at_1000 value: 59.665 - type: mrr_at_3 value: 56.141 - type: mrr_at_5 value: 57.998000000000005 - type: ndcg_at_1 value: 45.278 - type: ndcg_at_10 value: 64.056 - type: ndcg_at_100 value: 66.89 - type: ndcg_at_1000 value: 67.364 - type: ndcg_at_3 value: 56.97 - type: ndcg_at_5 value: 60.719 - type: precision_at_1 value: 45.278 - type: precision_at_10 value: 9.994 - type: precision_at_100 value: 1.165 - type: precision_at_1000 value: 0.121 - type: precision_at_3 value: 25.512 - type: precision_at_5 value: 17.509 - type: recall_at_1 value: 40.414 - type: recall_at_10 value: 83.596 - type: recall_at_100 value: 95.72 - type: recall_at_1000 value: 99.24 - type: recall_at_3 value: 65.472 - type: recall_at_5 value: 74.039 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: quora config: default split: test revision: None metrics: - type: map_at_1 value: 70.352 - type: map_at_10 value: 84.369 - type: map_at_100 value: 85.02499999999999 - type: map_at_1000 value: 85.04 - type: map_at_3 value: 81.42399999999999 - type: map_at_5 value: 83.279 - type: mrr_at_1 value: 81.05 - type: mrr_at_10 value: 87.401 - type: mrr_at_100 value: 87.504 - type: mrr_at_1000 value: 87.505 - type: mrr_at_3 value: 86.443 - type: mrr_at_5 value: 87.10799999999999 - type: ndcg_at_1 value: 81.04 - type: ndcg_at_10 value: 88.181 - type: ndcg_at_100 value: 89.411 - type: ndcg_at_1000 value: 89.507 - type: ndcg_at_3 value: 85.28099999999999 - type: ndcg_at_5 value: 86.888 - type: precision_at_1 value: 81.04 - type: precision_at_10 value: 13.406 - type: precision_at_100 value: 1.5350000000000001 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 37.31 - type: precision_at_5 value: 24.54 - type: recall_at_1 value: 70.352 - type: recall_at_10 value: 95.358 - type: recall_at_100 value: 99.541 - type: recall_at_1000 value: 99.984 - type: recall_at_3 value: 87.111 - type: recall_at_5 value: 91.643 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 46.54068723291946 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 63.216287629895994 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: scidocs config: default split: test revision: None metrics: - type: map_at_1 value: 4.023000000000001 - type: map_at_10 value: 10.071 - type: map_at_100 value: 11.892 - type: map_at_1000 value: 12.196 - type: map_at_3 value: 7.234 - type: map_at_5 value: 8.613999999999999 - type: mrr_at_1 value: 19.900000000000002 - type: mrr_at_10 value: 30.516 - type: mrr_at_100 value: 31.656000000000002 - type: mrr_at_1000 value: 31.723000000000003 - type: mrr_at_3 value: 27.400000000000002 - type: mrr_at_5 value: 29.270000000000003 - type: ndcg_at_1 value: 19.900000000000002 - type: ndcg_at_10 value: 17.474 - type: ndcg_at_100 value: 25.020999999999997 - type: ndcg_at_1000 value: 30.728 - type: ndcg_at_3 value: 16.588 - type: ndcg_at_5 value: 14.498 - type: precision_at_1 value: 19.900000000000002 - type: precision_at_10 value: 9.139999999999999 - type: precision_at_100 value: 2.011 - type: precision_at_1000 value: 0.33899999999999997 - type: precision_at_3 value: 15.667 - type: precision_at_5 value: 12.839999999999998 - type: recall_at_1 value: 4.023000000000001 - type: recall_at_10 value: 18.497 - type: recall_at_100 value: 40.8 - type: recall_at_1000 value: 68.812 - type: recall_at_3 value: 9.508 - type: recall_at_5 value: 12.983 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 83.967008785134 - type: cos_sim_spearman value: 80.23142141101837 - type: euclidean_pearson value: 81.20166064704539 - type: euclidean_spearman value: 80.18961335654585 - type: manhattan_pearson value: 81.13925443187625 - type: manhattan_spearman value: 80.07948723044424 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 86.94262461316023 - type: cos_sim_spearman value: 80.01596278563865 - type: euclidean_pearson value: 83.80799622922581 - type: euclidean_spearman value: 79.94984954947103 - type: manhattan_pearson value: 83.68473841756281 - type: manhattan_spearman value: 79.84990707951822 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 80.57346443146068 - type: cos_sim_spearman value: 81.54689837570866 - type: euclidean_pearson value: 81.10909881516007 - type: euclidean_spearman value: 81.56746243261762 - type: manhattan_pearson value: 80.87076036186582 - type: manhattan_spearman value: 81.33074987964402 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 79.54733787179849 - type: cos_sim_spearman value: 77.72202105610411 - type: euclidean_pearson value: 78.9043595478849 - type: euclidean_spearman value: 77.93422804309435 - type: manhattan_pearson value: 78.58115121621368 - type: manhattan_spearman value: 77.62508135122033 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 88.59880017237558 - type: cos_sim_spearman value: 89.31088630824758 - type: euclidean_pearson value: 88.47069261564656 - type: euclidean_spearman value: 89.33581971465233 - type: manhattan_pearson value: 88.40774264100956 - type: manhattan_spearman value: 89.28657485627835 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 84.08055117917084 - type: cos_sim_spearman value: 85.78491813080304 - type: euclidean_pearson value: 84.99329155500392 - type: euclidean_spearman value: 85.76728064677287 - type: manhattan_pearson value: 84.87947428989587 - type: manhattan_spearman value: 85.62429454917464 - task: type: STS dataset: name: MTEB STS17 (ko-ko) type: mteb/sts17-crosslingual-sts config: ko-ko split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 82.14190939287384 - type: cos_sim_spearman value: 82.27331573306041 - type: euclidean_pearson value: 81.891896953716 - type: euclidean_spearman value: 82.37695542955998 - type: manhattan_pearson value: 81.73123869460504 - type: manhattan_spearman value: 82.19989168441421 - task: type: STS dataset: name: MTEB STS17 (ar-ar) type: mteb/sts17-crosslingual-sts config: ar-ar split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 76.84695301843362 - type: cos_sim_spearman value: 77.87790986014461 - type: euclidean_pearson value: 76.91981583106315 - type: euclidean_spearman value: 77.88154772749589 - type: manhattan_pearson value: 76.94953277451093 - type: manhattan_spearman value: 77.80499230728604 - task: type: STS dataset: name: MTEB STS17 (en-ar) type: mteb/sts17-crosslingual-sts config: en-ar split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 75.44657840482016 - type: cos_sim_spearman value: 75.05531095119674 - type: euclidean_pearson value: 75.88161755829299 - type: euclidean_spearman value: 74.73176238219332 - type: manhattan_pearson value: 75.63984765635362 - type: manhattan_spearman value: 74.86476440770737 - task: type: STS dataset: name: MTEB STS17 (en-de) type: mteb/sts17-crosslingual-sts config: en-de split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 85.64700140524133 - type: cos_sim_spearman value: 86.16014210425672 - type: euclidean_pearson value: 86.49086860843221 - type: euclidean_spearman value: 86.09729326815614 - type: manhattan_pearson value: 86.43406265125513 - type: manhattan_spearman value: 86.17740150939994 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 87.91170098764921 - type: cos_sim_spearman value: 88.12437004058931 - type: euclidean_pearson value: 88.81828254494437 - type: euclidean_spearman value: 88.14831794572122 - type: manhattan_pearson value: 88.93442183448961 - type: manhattan_spearman value: 88.15254630778304 - task: type: STS dataset: name: MTEB STS17 (en-tr) type: mteb/sts17-crosslingual-sts config: en-tr split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 72.91390577997292 - type: cos_sim_spearman value: 71.22979457536074 - type: euclidean_pearson value: 74.40314008106749 - type: euclidean_spearman value: 72.54972136083246 - type: manhattan_pearson value: 73.85687539530218 - type: manhattan_spearman value: 72.09500771742637 - task: type: STS dataset: name: MTEB STS17 (es-en) type: mteb/sts17-crosslingual-sts config: es-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 80.9301067983089 - type: cos_sim_spearman value: 80.74989828346473 - type: euclidean_pearson value: 81.36781301814257 - type: euclidean_spearman value: 80.9448819964426 - type: manhattan_pearson value: 81.0351322685609 - type: manhattan_spearman value: 80.70192121844177 - task: type: STS dataset: name: MTEB STS17 (es-es) type: mteb/sts17-crosslingual-sts config: es-es split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 87.13820465980005 - type: cos_sim_spearman value: 86.73532498758757 - type: euclidean_pearson value: 87.21329451846637 - type: euclidean_spearman value: 86.57863198601002 - type: manhattan_pearson value: 87.06973713818554 - type: manhattan_spearman value: 86.47534918791499 - task: type: STS dataset: name: MTEB STS17 (fr-en) type: mteb/sts17-crosslingual-sts config: fr-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 85.48720108904415 - type: cos_sim_spearman value: 85.62221757068387 - type: euclidean_pearson value: 86.1010129512749 - type: euclidean_spearman value: 85.86580966509942 - type: manhattan_pearson value: 86.26800938808971 - type: manhattan_spearman value: 85.88902721678429 - task: type: STS dataset: name: MTEB STS17 (it-en) type: mteb/sts17-crosslingual-sts config: it-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 83.98021347333516 - type: cos_sim_spearman value: 84.53806553803501 - type: euclidean_pearson value: 84.61483347248364 - type: euclidean_spearman value: 85.14191408011702 - type: manhattan_pearson value: 84.75297588825967 - type: manhattan_spearman value: 85.33176753669242 - task: type: STS dataset: name: MTEB STS17 (nl-en) type: mteb/sts17-crosslingual-sts config: nl-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 84.51856644893233 - type: cos_sim_spearman value: 85.27510748506413 - type: euclidean_pearson value: 85.09886861540977 - type: euclidean_spearman value: 85.62579245860887 - type: manhattan_pearson value: 84.93017860464607 - type: manhattan_spearman value: 85.5063988898453 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 62.581573200584195 - type: cos_sim_spearman value: 63.05503590247928 - type: euclidean_pearson value: 63.652564812602094 - type: euclidean_spearman value: 62.64811520876156 - type: manhattan_pearson value: 63.506842893061076 - type: manhattan_spearman value: 62.51289573046917 - task: type: STS dataset: name: MTEB STS22 (de) type: mteb/sts22-crosslingual-sts config: de split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 48.2248801729127 - type: cos_sim_spearman value: 56.5936604678561 - type: euclidean_pearson value: 43.98149464089 - type: euclidean_spearman value: 56.108561882423615 - type: manhattan_pearson value: 43.86880305903564 - type: manhattan_spearman value: 56.04671150510166 - task: type: STS dataset: name: MTEB STS22 (es) type: mteb/sts22-crosslingual-sts config: es split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 55.17564527009831 - type: cos_sim_spearman value: 64.57978560979488 - type: euclidean_pearson value: 58.8818330154583 - type: euclidean_spearman value: 64.99214839071281 - type: manhattan_pearson value: 58.72671436121381 - type: manhattan_spearman value: 65.10713416616109 - task: type: STS dataset: name: MTEB STS22 (pl) type: mteb/sts22-crosslingual-sts config: pl split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 26.772131864023297 - type: cos_sim_spearman value: 34.68200792408681 - type: euclidean_pearson value: 16.68082419005441 - type: euclidean_spearman value: 34.83099932652166 - type: manhattan_pearson value: 16.52605949659529 - type: manhattan_spearman value: 34.82075801399475 - task: type: STS dataset: name: MTEB STS22 (tr) type: mteb/sts22-crosslingual-sts config: tr split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 54.42415189043831 - type: cos_sim_spearman value: 63.54594264576758 - type: euclidean_pearson value: 57.36577498297745 - type: euclidean_spearman value: 63.111466379158074 - type: manhattan_pearson value: 57.584543715873885 - type: manhattan_spearman value: 63.22361054139183 - task: type: STS dataset: name: MTEB STS22 (ar) type: mteb/sts22-crosslingual-sts config: ar split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 47.55216762405518 - type: cos_sim_spearman value: 56.98670142896412 - type: euclidean_pearson value: 50.15318757562699 - type: euclidean_spearman value: 56.524941926541906 - type: manhattan_pearson value: 49.955618528674904 - type: manhattan_spearman value: 56.37102209240117 - task: type: STS dataset: name: MTEB STS22 (ru) type: mteb/sts22-crosslingual-sts config: ru split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 49.20540980338571 - type: cos_sim_spearman value: 59.9009453504406 - type: euclidean_pearson value: 49.557749853620535 - type: euclidean_spearman value: 59.76631621172456 - type: manhattan_pearson value: 49.62340591181147 - type: manhattan_spearman value: 59.94224880322436 - task: type: STS dataset: name: MTEB STS22 (zh) type: mteb/sts22-crosslingual-sts config: zh split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 51.508169956576985 - type: cos_sim_spearman value: 66.82461565306046 - type: euclidean_pearson value: 56.2274426480083 - type: euclidean_spearman value: 66.6775323848333 - type: manhattan_pearson value: 55.98277796300661 - type: manhattan_spearman value: 66.63669848497175 - task: type: STS dataset: name: MTEB STS22 (fr) type: mteb/sts22-crosslingual-sts config: fr split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 72.86478788045507 - type: cos_sim_spearman value: 76.7946552053193 - type: euclidean_pearson value: 75.01598530490269 - type: euclidean_spearman value: 76.83618917858281 - type: manhattan_pearson value: 74.68337628304332 - type: manhattan_spearman value: 76.57480204017773 - task: type: STS dataset: name: MTEB STS22 (de-en) type: mteb/sts22-crosslingual-sts config: de-en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 55.922619099401984 - type: cos_sim_spearman value: 56.599362477240774 - type: euclidean_pearson value: 56.68307052369783 - type: euclidean_spearman value: 54.28760436777401 - type: manhattan_pearson value: 56.67763566500681 - type: manhattan_spearman value: 53.94619541711359 - task: type: STS dataset: name: MTEB STS22 (es-en) type: mteb/sts22-crosslingual-sts config: es-en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 66.74357206710913 - type: cos_sim_spearman value: 72.5208244925311 - type: euclidean_pearson value: 67.49254562186032 - type: euclidean_spearman value: 72.02469076238683 - type: manhattan_pearson value: 67.45251772238085 - type: manhattan_spearman value: 72.05538819984538 - task: type: STS dataset: name: MTEB STS22 (it) type: mteb/sts22-crosslingual-sts config: it split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 71.25734330033191 - type: cos_sim_spearman value: 76.98349083946823 - type: euclidean_pearson value: 73.71642838667736 - type: euclidean_spearman value: 77.01715504651384 - type: manhattan_pearson value: 73.61712711868105 - type: manhattan_spearman value: 77.01392571153896 - task: type: STS dataset: name: MTEB STS22 (pl-en) type: mteb/sts22-crosslingual-sts config: pl-en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 63.18215462781212 - type: cos_sim_spearman value: 65.54373266117607 - type: euclidean_pearson value: 64.54126095439005 - type: euclidean_spearman value: 65.30410369102711 - type: manhattan_pearson value: 63.50332221148234 - type: manhattan_spearman value: 64.3455878104313 - task: type: STS dataset: name: MTEB STS22 (zh-en) type: mteb/sts22-crosslingual-sts config: zh-en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 62.30509221440029 - type: cos_sim_spearman value: 65.99582704642478 - type: euclidean_pearson value: 63.43818859884195 - type: euclidean_spearman value: 66.83172582815764 - type: manhattan_pearson value: 63.055779168508764 - type: manhattan_spearman value: 65.49585020501449 - task: type: STS dataset: name: MTEB STS22 (es-it) type: mteb/sts22-crosslingual-sts config: es-it split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 59.587830825340404 - type: cos_sim_spearman value: 68.93467614588089 - type: euclidean_pearson value: 62.3073527367404 - type: euclidean_spearman value: 69.69758171553175 - type: manhattan_pearson value: 61.9074580815789 - type: manhattan_spearman value: 69.57696375597865 - task: type: STS dataset: name: MTEB STS22 (de-fr) type: mteb/sts22-crosslingual-sts config: de-fr split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 57.143220125577066 - type: cos_sim_spearman value: 67.78857859159226 - type: euclidean_pearson value: 55.58225107923733 - type: euclidean_spearman value: 67.80662907184563 - type: manhattan_pearson value: 56.24953502726514 - type: manhattan_spearman value: 67.98262125431616 - task: type: STS dataset: name: MTEB STS22 (de-pl) type: mteb/sts22-crosslingual-sts config: de-pl split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 21.826928900322066 - type: cos_sim_spearman value: 49.578506634400405 - type: euclidean_pearson value: 27.939890138843214 - type: euclidean_spearman value: 52.71950519136242 - type: manhattan_pearson value: 26.39878683847546 - type: manhattan_spearman value: 47.54609580342499 - task: type: STS dataset: name: MTEB STS22 (fr-pl) type: mteb/sts22-crosslingual-sts config: fr-pl split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 57.27603854632001 - type: cos_sim_spearman value: 50.709255283710995 - type: euclidean_pearson value: 59.5419024445929 - type: euclidean_spearman value: 50.709255283710995 - type: manhattan_pearson value: 59.03256832438492 - type: manhattan_spearman value: 61.97797868009122 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 85.00757054859712 - type: cos_sim_spearman value: 87.29283629622222 - type: euclidean_pearson value: 86.54824171775536 - type: euclidean_spearman value: 87.24364730491402 - type: manhattan_pearson value: 86.5062156915074 - type: manhattan_spearman value: 87.15052170378574 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 82.03549357197389 - type: mrr value: 95.05437645143527 - task: type: Retrieval dataset: name: MTEB SciFact type: scifact config: default split: test revision: None metrics: - type: map_at_1 value: 57.260999999999996 - type: map_at_10 value: 66.259 - type: map_at_100 value: 66.884 - type: map_at_1000 value: 66.912 - type: map_at_3 value: 63.685 - type: map_at_5 value: 65.35499999999999 - type: mrr_at_1 value: 60.333000000000006 - type: mrr_at_10 value: 67.5 - type: mrr_at_100 value: 68.013 - type: mrr_at_1000 value: 68.038 - type: mrr_at_3 value: 65.61099999999999 - type: mrr_at_5 value: 66.861 - type: ndcg_at_1 value: 60.333000000000006 - type: ndcg_at_10 value: 70.41 - type: ndcg_at_100 value: 73.10600000000001 - type: ndcg_at_1000 value: 73.846 - type: ndcg_at_3 value: 66.133 - type: ndcg_at_5 value: 68.499 - type: precision_at_1 value: 60.333000000000006 - type: precision_at_10 value: 9.232999999999999 - type: precision_at_100 value: 1.0630000000000002 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 25.667 - type: precision_at_5 value: 17.067 - type: recall_at_1 value: 57.260999999999996 - type: recall_at_10 value: 81.94399999999999 - type: recall_at_100 value: 93.867 - type: recall_at_1000 value: 99.667 - type: recall_at_3 value: 70.339 - type: recall_at_5 value: 76.25 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.74356435643564 - type: cos_sim_ap value: 93.13411948212683 - type: cos_sim_f1 value: 86.80521991300147 - type: cos_sim_precision value: 84.00374181478017 - type: cos_sim_recall value: 89.8 - type: dot_accuracy value: 99.67920792079208 - type: dot_ap value: 89.27277565444479 - type: dot_f1 value: 83.9276990718124 - type: dot_precision value: 82.04393505253104 - type: dot_recall value: 85.9 - type: euclidean_accuracy value: 99.74257425742574 - type: euclidean_ap value: 93.17993008259062 - type: euclidean_f1 value: 86.69396110542476 - type: euclidean_precision value: 88.78406708595388 - type: euclidean_recall value: 84.7 - type: manhattan_accuracy value: 99.74257425742574 - type: manhattan_ap value: 93.14413755550099 - type: manhattan_f1 value: 86.82483594144371 - type: manhattan_precision value: 87.66564729867483 - type: manhattan_recall value: 86 - type: max_accuracy value: 99.74356435643564 - type: max_ap value: 93.17993008259062 - type: max_f1 value: 86.82483594144371 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 57.525863806168566 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 32.68850574423839 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 49.71580650644033 - type: mrr value: 50.50971903913081 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 29.152190498799484 - type: cos_sim_spearman value: 29.686180371952727 - type: dot_pearson value: 27.248664793816342 - type: dot_spearman value: 28.37748983721745 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: trec-covid config: default split: test revision: None metrics: - type: map_at_1 value: 0.20400000000000001 - type: map_at_10 value: 1.6209999999999998 - type: map_at_100 value: 9.690999999999999 - type: map_at_1000 value: 23.733 - type: map_at_3 value: 0.575 - type: map_at_5 value: 0.885 - type: mrr_at_1 value: 78 - type: mrr_at_10 value: 86.56700000000001 - type: mrr_at_100 value: 86.56700000000001 - type: mrr_at_1000 value: 86.56700000000001 - type: mrr_at_3 value: 85.667 - type: mrr_at_5 value: 86.56700000000001 - type: ndcg_at_1 value: 76 - type: ndcg_at_10 value: 71.326 - type: ndcg_at_100 value: 54.208999999999996 - type: ndcg_at_1000 value: 49.252 - type: ndcg_at_3 value: 74.235 - type: ndcg_at_5 value: 73.833 - type: precision_at_1 value: 78 - type: precision_at_10 value: 74.8 - type: precision_at_100 value: 55.50000000000001 - type: precision_at_1000 value: 21.836 - type: precision_at_3 value: 78 - type: precision_at_5 value: 78 - type: recall_at_1 value: 0.20400000000000001 - type: recall_at_10 value: 1.894 - type: recall_at_100 value: 13.245999999999999 - type: recall_at_1000 value: 46.373 - type: recall_at_3 value: 0.613 - type: recall_at_5 value: 0.991 - task: type: BitextMining dataset: name: MTEB Tatoeba (sqi-eng) type: mteb/tatoeba-bitext-mining config: sqi-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.89999999999999 - type: f1 value: 94.69999999999999 - type: precision value: 94.11666666666667 - type: recall value: 95.89999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (fry-eng) type: mteb/tatoeba-bitext-mining config: fry-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 68.20809248554913 - type: f1 value: 63.431048720066066 - type: precision value: 61.69143958161298 - type: recall value: 68.20809248554913 - task: type: BitextMining dataset: name: MTEB Tatoeba (kur-eng) type: mteb/tatoeba-bitext-mining config: kur-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 71.21951219512195 - type: f1 value: 66.82926829268293 - type: precision value: 65.1260162601626 - type: recall value: 71.21951219512195 - task: type: BitextMining dataset: name: MTEB Tatoeba (tur-eng) type: mteb/tatoeba-bitext-mining config: tur-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.2 - type: f1 value: 96.26666666666667 - type: precision value: 95.8 - type: recall value: 97.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (deu-eng) type: mteb/tatoeba-bitext-mining config: deu-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 99.3 - type: f1 value: 99.06666666666666 - type: precision value: 98.95 - type: recall value: 99.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (nld-eng) type: mteb/tatoeba-bitext-mining config: nld-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.39999999999999 - type: f1 value: 96.63333333333333 - type: precision value: 96.26666666666668 - type: recall value: 97.39999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (ron-eng) type: mteb/tatoeba-bitext-mining config: ron-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96 - type: f1 value: 94.86666666666666 - type: precision value: 94.31666666666668 - type: recall value: 96 - task: type: BitextMining dataset: name: MTEB Tatoeba (ang-eng) type: mteb/tatoeba-bitext-mining config: ang-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 47.01492537313433 - type: f1 value: 40.178867566927266 - type: precision value: 38.179295828549556 - type: recall value: 47.01492537313433 - task: type: BitextMining dataset: name: MTEB Tatoeba (ido-eng) type: mteb/tatoeba-bitext-mining config: ido-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 86.5 - type: f1 value: 83.62537480063796 - type: precision value: 82.44555555555554 - type: recall value: 86.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (jav-eng) type: mteb/tatoeba-bitext-mining config: jav-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 80.48780487804879 - type: f1 value: 75.45644599303138 - type: precision value: 73.37398373983739 - type: recall value: 80.48780487804879 - task: type: BitextMining dataset: name: MTEB Tatoeba (isl-eng) type: mteb/tatoeba-bitext-mining config: isl-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.7 - type: f1 value: 91.95666666666666 - type: precision value: 91.125 - type: recall value: 93.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (slv-eng) type: mteb/tatoeba-bitext-mining config: slv-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 91.73754556500607 - type: f1 value: 89.65168084244632 - type: precision value: 88.73025516403402 - type: recall value: 91.73754556500607 - task: type: BitextMining dataset: name: MTEB Tatoeba (cym-eng) type: mteb/tatoeba-bitext-mining config: cym-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 81.04347826086956 - type: f1 value: 76.2128364389234 - type: precision value: 74.2 - type: recall value: 81.04347826086956 - task: type: BitextMining dataset: name: MTEB Tatoeba (kaz-eng) type: mteb/tatoeba-bitext-mining config: kaz-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 83.65217391304348 - type: f1 value: 79.4376811594203 - type: precision value: 77.65797101449274 - type: recall value: 83.65217391304348 - task: type: BitextMining dataset: name: MTEB Tatoeba (est-eng) type: mteb/tatoeba-bitext-mining config: est-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 87.5 - type: f1 value: 85.02690476190476 - type: precision value: 83.96261904761904 - type: recall value: 87.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (heb-eng) type: mteb/tatoeba-bitext-mining config: heb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 89.3 - type: f1 value: 86.52333333333333 - type: precision value: 85.22833333333332 - type: recall value: 89.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (gla-eng) type: mteb/tatoeba-bitext-mining config: gla-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 65.01809408926418 - type: f1 value: 59.00594446432805 - type: precision value: 56.827215807915444 - type: recall value: 65.01809408926418 - task: type: BitextMining dataset: name: MTEB Tatoeba (mar-eng) type: mteb/tatoeba-bitext-mining config: mar-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 91.2 - type: f1 value: 88.58 - type: precision value: 87.33333333333334 - type: recall value: 91.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (lat-eng) type: mteb/tatoeba-bitext-mining config: lat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 59.199999999999996 - type: f1 value: 53.299166276284915 - type: precision value: 51.3383908045977 - type: recall value: 59.199999999999996 - task: type: BitextMining dataset: name: MTEB Tatoeba (bel-eng) type: mteb/tatoeba-bitext-mining config: bel-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.2 - type: f1 value: 91.2 - type: precision value: 90.25 - type: recall value: 93.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (pms-eng) type: mteb/tatoeba-bitext-mining config: pms-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 64.76190476190476 - type: f1 value: 59.867110667110666 - type: precision value: 58.07390192653351 - type: recall value: 64.76190476190476 - task: type: BitextMining dataset: name: MTEB Tatoeba (gle-eng) type: mteb/tatoeba-bitext-mining config: gle-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 76.2 - type: f1 value: 71.48147546897547 - type: precision value: 69.65409090909091 - type: recall value: 76.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (pes-eng) type: mteb/tatoeba-bitext-mining config: pes-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.8 - type: f1 value: 92.14 - type: precision value: 91.35833333333333 - type: recall value: 93.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (nob-eng) type: mteb/tatoeba-bitext-mining config: nob-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.89999999999999 - type: f1 value: 97.2 - type: precision value: 96.85000000000001 - type: recall value: 97.89999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (bul-eng) type: mteb/tatoeba-bitext-mining config: bul-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.6 - type: f1 value: 92.93333333333334 - type: precision value: 92.13333333333333 - type: recall value: 94.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (cbk-eng) type: mteb/tatoeba-bitext-mining config: cbk-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 74.1 - type: f1 value: 69.14817460317461 - type: precision value: 67.2515873015873 - type: recall value: 74.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (hun-eng) type: mteb/tatoeba-bitext-mining config: hun-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.19999999999999 - type: f1 value: 94.01333333333335 - type: precision value: 93.46666666666667 - type: recall value: 95.19999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (uig-eng) type: mteb/tatoeba-bitext-mining config: uig-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 76.9 - type: f1 value: 72.07523809523809 - type: precision value: 70.19777777777779 - type: recall value: 76.9 - task: type: BitextMining dataset: name: MTEB Tatoeba (rus-eng) type: mteb/tatoeba-bitext-mining config: rus-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.1 - type: f1 value: 92.31666666666666 - type: precision value: 91.43333333333332 - type: recall value: 94.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (spa-eng) type: mteb/tatoeba-bitext-mining config: spa-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.8 - type: f1 value: 97.1 - type: precision value: 96.76666666666668 - type: recall value: 97.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (hye-eng) type: mteb/tatoeba-bitext-mining config: hye-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.85714285714286 - type: f1 value: 90.92093441150045 - type: precision value: 90.00449236298293 - type: recall value: 92.85714285714286 - task: type: BitextMining dataset: name: MTEB Tatoeba (tel-eng) type: mteb/tatoeba-bitext-mining config: tel-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.16239316239316 - type: f1 value: 91.33903133903132 - type: precision value: 90.56267806267806 - type: recall value: 93.16239316239316 - task: type: BitextMining dataset: name: MTEB Tatoeba (afr-eng) type: mteb/tatoeba-bitext-mining config: afr-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.4 - type: f1 value: 90.25666666666666 - type: precision value: 89.25833333333334 - type: recall value: 92.4 - task: type: BitextMining dataset: name: MTEB Tatoeba (mon-eng) type: mteb/tatoeba-bitext-mining config: mon-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 90.22727272727272 - type: f1 value: 87.53030303030303 - type: precision value: 86.37121212121211 - type: recall value: 90.22727272727272 - task: type: BitextMining dataset: name: MTEB Tatoeba (arz-eng) type: mteb/tatoeba-bitext-mining config: arz-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 79.03563941299791 - type: f1 value: 74.7349505840072 - type: precision value: 72.9035639412998 - type: recall value: 79.03563941299791 - task: type: BitextMining dataset: name: MTEB Tatoeba (hrv-eng) type: mteb/tatoeba-bitext-mining config: hrv-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97 - type: f1 value: 96.15 - type: precision value: 95.76666666666668 - type: recall value: 97 - task: type: BitextMining dataset: name: MTEB Tatoeba (nov-eng) type: mteb/tatoeba-bitext-mining config: nov-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 76.26459143968872 - type: f1 value: 71.55642023346303 - type: precision value: 69.7544932369835 - type: recall value: 76.26459143968872 - task: type: BitextMining dataset: name: MTEB Tatoeba (gsw-eng) type: mteb/tatoeba-bitext-mining config: gsw-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 58.119658119658126 - type: f1 value: 51.65242165242165 - type: precision value: 49.41768108434775 - type: recall value: 58.119658119658126 - task: type: BitextMining dataset: name: MTEB Tatoeba (nds-eng) type: mteb/tatoeba-bitext-mining config: nds-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 74.3 - type: f1 value: 69.52055555555555 - type: precision value: 67.7574938949939 - type: recall value: 74.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (ukr-eng) type: mteb/tatoeba-bitext-mining config: ukr-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.8 - type: f1 value: 93.31666666666666 - type: precision value: 92.60000000000001 - type: recall value: 94.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (uzb-eng) type: mteb/tatoeba-bitext-mining config: uzb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 76.63551401869158 - type: f1 value: 72.35202492211837 - type: precision value: 70.60358255451713 - type: recall value: 76.63551401869158 - task: type: BitextMining dataset: name: MTEB Tatoeba (lit-eng) type: mteb/tatoeba-bitext-mining config: lit-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 90.4 - type: f1 value: 88.4811111111111 - type: precision value: 87.7452380952381 - type: recall value: 90.4 - task: type: BitextMining dataset: name: MTEB Tatoeba (ina-eng) type: mteb/tatoeba-bitext-mining config: ina-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95 - type: f1 value: 93.60666666666667 - type: precision value: 92.975 - type: recall value: 95 - task: type: BitextMining dataset: name: MTEB Tatoeba (lfn-eng) type: mteb/tatoeba-bitext-mining config: lfn-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 67.2 - type: f1 value: 63.01595782872099 - type: precision value: 61.596587301587306 - type: recall value: 67.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (zsm-eng) type: mteb/tatoeba-bitext-mining config: zsm-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.7 - type: f1 value: 94.52999999999999 - type: precision value: 94 - type: recall value: 95.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (ita-eng) type: mteb/tatoeba-bitext-mining config: ita-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.6 - type: f1 value: 93.28999999999999 - type: precision value: 92.675 - type: recall value: 94.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (cmn-eng) type: mteb/tatoeba-bitext-mining config: cmn-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.39999999999999 - type: f1 value: 95.28333333333333 - type: precision value: 94.75 - type: recall value: 96.39999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (lvs-eng) type: mteb/tatoeba-bitext-mining config: lvs-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 91.9 - type: f1 value: 89.83 - type: precision value: 88.92 - type: recall value: 91.9 - task: type: BitextMining dataset: name: MTEB Tatoeba (glg-eng) type: mteb/tatoeba-bitext-mining config: glg-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.69999999999999 - type: f1 value: 93.34222222222223 - type: precision value: 92.75416666666668 - type: recall value: 94.69999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (ceb-eng) type: mteb/tatoeba-bitext-mining config: ceb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 60.333333333333336 - type: f1 value: 55.31203703703703 - type: precision value: 53.39971108326371 - type: recall value: 60.333333333333336 - task: type: BitextMining dataset: name: MTEB Tatoeba (bre-eng) type: mteb/tatoeba-bitext-mining config: bre-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 12.9 - type: f1 value: 11.099861903031458 - type: precision value: 10.589187932631877 - type: recall value: 12.9 - task: type: BitextMining dataset: name: MTEB Tatoeba (ben-eng) type: mteb/tatoeba-bitext-mining config: ben-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 86.7 - type: f1 value: 83.0152380952381 - type: precision value: 81.37833333333333 - type: recall value: 86.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (swg-eng) type: mteb/tatoeba-bitext-mining config: swg-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 63.39285714285714 - type: f1 value: 56.832482993197274 - type: precision value: 54.56845238095237 - type: recall value: 63.39285714285714 - task: type: BitextMining dataset: name: MTEB Tatoeba (arq-eng) type: mteb/tatoeba-bitext-mining config: arq-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 48.73765093304062 - type: f1 value: 41.555736920720456 - type: precision value: 39.06874531737319 - type: recall value: 48.73765093304062 - task: type: BitextMining dataset: name: MTEB Tatoeba (kab-eng) type: mteb/tatoeba-bitext-mining config: kab-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 41.099999999999994 - type: f1 value: 36.540165945165946 - type: precision value: 35.05175685425686 - type: recall value: 41.099999999999994 - task: type: BitextMining dataset: name: MTEB Tatoeba (fra-eng) type: mteb/tatoeba-bitext-mining config: fra-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.89999999999999 - type: f1 value: 93.42333333333333 - type: precision value: 92.75833333333333 - type: recall value: 94.89999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (por-eng) type: mteb/tatoeba-bitext-mining config: por-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.89999999999999 - type: f1 value: 93.63333333333334 - type: precision value: 93.01666666666665 - type: recall value: 94.89999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (tat-eng) type: mteb/tatoeba-bitext-mining config: tat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 77.9 - type: f1 value: 73.64833333333334 - type: precision value: 71.90282106782105 - type: recall value: 77.9 - task: type: BitextMining dataset: name: MTEB Tatoeba (oci-eng) type: mteb/tatoeba-bitext-mining config: oci-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 59.4 - type: f1 value: 54.90521367521367 - type: precision value: 53.432840025471606 - type: recall value: 59.4 - task: type: BitextMining dataset: name: MTEB Tatoeba (pol-eng) type: mteb/tatoeba-bitext-mining config: pol-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.39999999999999 - type: f1 value: 96.6 - type: precision value: 96.2 - type: recall value: 97.39999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (war-eng) type: mteb/tatoeba-bitext-mining config: war-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 67.2 - type: f1 value: 62.25926129426129 - type: precision value: 60.408376623376626 - type: recall value: 67.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (aze-eng) type: mteb/tatoeba-bitext-mining config: aze-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 90.2 - type: f1 value: 87.60666666666667 - type: precision value: 86.45277777777778 - type: recall value: 90.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (vie-eng) type: mteb/tatoeba-bitext-mining config: vie-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.7 - type: f1 value: 97 - type: precision value: 96.65 - type: recall value: 97.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (nno-eng) type: mteb/tatoeba-bitext-mining config: nno-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.2 - type: f1 value: 91.39746031746031 - type: precision value: 90.6125 - type: recall value: 93.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (cha-eng) type: mteb/tatoeba-bitext-mining config: cha-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 32.11678832116788 - type: f1 value: 27.210415386260234 - type: precision value: 26.20408990846947 - type: recall value: 32.11678832116788 - task: type: BitextMining dataset: name: MTEB Tatoeba (mhr-eng) type: mteb/tatoeba-bitext-mining config: mhr-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 8.5 - type: f1 value: 6.787319277832475 - type: precision value: 6.3452094433344435 - type: recall value: 8.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (dan-eng) type: mteb/tatoeba-bitext-mining config: dan-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.1 - type: f1 value: 95.08 - type: precision value: 94.61666666666667 - type: recall value: 96.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (ell-eng) type: mteb/tatoeba-bitext-mining config: ell-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.3 - type: f1 value: 93.88333333333333 - type: precision value: 93.18333333333332 - type: recall value: 95.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (amh-eng) type: mteb/tatoeba-bitext-mining config: amh-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 85.11904761904762 - type: f1 value: 80.69444444444444 - type: precision value: 78.72023809523809 - type: recall value: 85.11904761904762 - task: type: BitextMining dataset: name: MTEB Tatoeba (pam-eng) type: mteb/tatoeba-bitext-mining config: pam-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 11.1 - type: f1 value: 9.276381801735853 - type: precision value: 8.798174603174601 - type: recall value: 11.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (hsb-eng) type: mteb/tatoeba-bitext-mining config: hsb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 63.56107660455487 - type: f1 value: 58.70433569191332 - type: precision value: 56.896926581464015 - type: recall value: 63.56107660455487 - task: type: BitextMining dataset: name: MTEB Tatoeba (srp-eng) type: mteb/tatoeba-bitext-mining config: srp-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.69999999999999 - type: f1 value: 93.10000000000001 - type: precision value: 92.35 - type: recall value: 94.69999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (epo-eng) type: mteb/tatoeba-bitext-mining config: epo-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.8 - type: f1 value: 96.01222222222222 - type: precision value: 95.67083333333332 - type: recall value: 96.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (kzj-eng) type: mteb/tatoeba-bitext-mining config: kzj-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 9.2 - type: f1 value: 7.911555250305249 - type: precision value: 7.631246556216846 - type: recall value: 9.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (awa-eng) type: mteb/tatoeba-bitext-mining config: awa-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 77.48917748917748 - type: f1 value: 72.27375798804371 - type: precision value: 70.14430014430013 - type: recall value: 77.48917748917748 - task: type: BitextMining dataset: name: MTEB Tatoeba (fao-eng) type: mteb/tatoeba-bitext-mining config: fao-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 77.09923664122137 - type: f1 value: 72.61541257724463 - type: precision value: 70.8998380754106 - type: recall value: 77.09923664122137 - task: type: BitextMining dataset: name: MTEB Tatoeba (mal-eng) type: mteb/tatoeba-bitext-mining config: mal-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 98.2532751091703 - type: f1 value: 97.69529354682193 - type: precision value: 97.42843279961184 - type: recall value: 98.2532751091703 - task: type: BitextMining dataset: name: MTEB Tatoeba (ile-eng) type: mteb/tatoeba-bitext-mining config: ile-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 82.8 - type: f1 value: 79.14672619047619 - type: precision value: 77.59489247311828 - type: recall value: 82.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (bos-eng) type: mteb/tatoeba-bitext-mining config: bos-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.35028248587571 - type: f1 value: 92.86252354048965 - type: precision value: 92.2080979284369 - type: recall value: 94.35028248587571 - task: type: BitextMining dataset: name: MTEB Tatoeba (cor-eng) type: mteb/tatoeba-bitext-mining config: cor-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 8.5 - type: f1 value: 6.282429263935621 - type: precision value: 5.783274240739785 - type: recall value: 8.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (cat-eng) type: mteb/tatoeba-bitext-mining config: cat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.7 - type: f1 value: 91.025 - type: precision value: 90.30428571428571 - type: recall value: 92.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (eus-eng) type: mteb/tatoeba-bitext-mining config: eus-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 81 - type: f1 value: 77.8232380952381 - type: precision value: 76.60194444444444 - type: recall value: 81 - task: type: BitextMining dataset: name: MTEB Tatoeba (yue-eng) type: mteb/tatoeba-bitext-mining config: yue-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 91 - type: f1 value: 88.70857142857142 - type: precision value: 87.7 - type: recall value: 91 - task: type: BitextMining dataset: name: MTEB Tatoeba (swe-eng) type: mteb/tatoeba-bitext-mining config: swe-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.39999999999999 - type: f1 value: 95.3 - type: precision value: 94.76666666666667 - type: recall value: 96.39999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (dtp-eng) type: mteb/tatoeba-bitext-mining config: dtp-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 8.1 - type: f1 value: 7.001008218834307 - type: precision value: 6.708329562594269 - type: recall value: 8.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (kat-eng) type: mteb/tatoeba-bitext-mining config: kat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 87.1313672922252 - type: f1 value: 84.09070598748882 - type: precision value: 82.79171454104429 - type: recall value: 87.1313672922252 - task: type: BitextMining dataset: name: MTEB Tatoeba (jpn-eng) type: mteb/tatoeba-bitext-mining config: jpn-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.39999999999999 - type: f1 value: 95.28333333333333 - type: precision value: 94.73333333333332 - type: recall value: 96.39999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (csb-eng) type: mteb/tatoeba-bitext-mining config: csb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 42.29249011857708 - type: f1 value: 36.981018542283365 - type: precision value: 35.415877813576024 - type: recall value: 42.29249011857708 - task: type: BitextMining dataset: name: MTEB Tatoeba (xho-eng) type: mteb/tatoeba-bitext-mining config: xho-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 83.80281690140845 - type: f1 value: 80.86854460093896 - type: precision value: 79.60093896713614 - type: recall value: 83.80281690140845 - task: type: BitextMining dataset: name: MTEB Tatoeba (orv-eng) type: mteb/tatoeba-bitext-mining config: orv-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 45.26946107784431 - type: f1 value: 39.80235464678088 - type: precision value: 38.14342660001342 - type: recall value: 45.26946107784431 - task: type: BitextMining dataset: name: MTEB Tatoeba (ind-eng) type: mteb/tatoeba-bitext-mining config: ind-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.3 - type: f1 value: 92.9 - type: precision value: 92.26666666666668 - type: recall value: 94.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (tuk-eng) type: mteb/tatoeba-bitext-mining config: tuk-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 37.93103448275862 - type: f1 value: 33.15192743764172 - type: precision value: 31.57456528146183 - type: recall value: 37.93103448275862 - task: type: BitextMining dataset: name: MTEB Tatoeba (max-eng) type: mteb/tatoeba-bitext-mining config: max-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 69.01408450704226 - type: f1 value: 63.41549295774648 - type: precision value: 61.342778895595806 - type: recall value: 69.01408450704226 - task: type: BitextMining dataset: name: MTEB Tatoeba (swh-eng) type: mteb/tatoeba-bitext-mining config: swh-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 76.66666666666667 - type: f1 value: 71.60705960705961 - type: precision value: 69.60683760683762 - type: recall value: 76.66666666666667 - task: type: BitextMining dataset: name: MTEB Tatoeba (hin-eng) type: mteb/tatoeba-bitext-mining config: hin-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.8 - type: f1 value: 94.48333333333333 - type: precision value: 93.83333333333333 - type: recall value: 95.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (dsb-eng) type: mteb/tatoeba-bitext-mining config: dsb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 52.81837160751566 - type: f1 value: 48.435977731384824 - type: precision value: 47.11291973845539 - type: recall value: 52.81837160751566 - task: type: BitextMining dataset: name: MTEB Tatoeba (ber-eng) type: mteb/tatoeba-bitext-mining config: ber-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 44.9 - type: f1 value: 38.88962621607783 - type: precision value: 36.95936507936508 - type: recall value: 44.9 - task: type: BitextMining dataset: name: MTEB Tatoeba (tam-eng) type: mteb/tatoeba-bitext-mining config: tam-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 90.55374592833876 - type: f1 value: 88.22553125484721 - type: precision value: 87.26927252985884 - type: recall value: 90.55374592833876 - task: type: BitextMining dataset: name: MTEB Tatoeba (slk-eng) type: mteb/tatoeba-bitext-mining config: slk-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.6 - type: f1 value: 93.13333333333333 - type: precision value: 92.45333333333333 - type: recall value: 94.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (tgl-eng) type: mteb/tatoeba-bitext-mining config: tgl-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.7 - type: f1 value: 91.99666666666667 - type: precision value: 91.26666666666668 - type: recall value: 93.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (ast-eng) type: mteb/tatoeba-bitext-mining config: ast-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 85.03937007874016 - type: f1 value: 81.75853018372703 - type: precision value: 80.34120734908137 - type: recall value: 85.03937007874016 - task: type: BitextMining dataset: name: MTEB Tatoeba (mkd-eng) type: mteb/tatoeba-bitext-mining config: mkd-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 88.3 - type: f1 value: 85.5 - type: precision value: 84.25833333333334 - type: recall value: 88.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (khm-eng) type: mteb/tatoeba-bitext-mining config: khm-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 65.51246537396122 - type: f1 value: 60.02297410192148 - type: precision value: 58.133467727289236 - type: recall value: 65.51246537396122 - task: type: BitextMining dataset: name: MTEB Tatoeba (ces-eng) type: mteb/tatoeba-bitext-mining config: ces-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96 - type: f1 value: 94.89 - type: precision value: 94.39166666666667 - type: recall value: 96 - task: type: BitextMining dataset: name: MTEB Tatoeba (tzl-eng) type: mteb/tatoeba-bitext-mining config: tzl-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 57.692307692307686 - type: f1 value: 53.162393162393165 - type: precision value: 51.70673076923077 - type: recall value: 57.692307692307686 - task: type: BitextMining dataset: name: MTEB Tatoeba (urd-eng) type: mteb/tatoeba-bitext-mining config: urd-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 91.60000000000001 - type: f1 value: 89.21190476190475 - type: precision value: 88.08666666666667 - type: recall value: 91.60000000000001 - task: type: BitextMining dataset: name: MTEB Tatoeba (ara-eng) type: mteb/tatoeba-bitext-mining config: ara-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 88 - type: f1 value: 85.47 - type: precision value: 84.43266233766234 - type: recall value: 88 - task: type: BitextMining dataset: name: MTEB Tatoeba (kor-eng) type: mteb/tatoeba-bitext-mining config: kor-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.7 - type: f1 value: 90.64999999999999 - type: precision value: 89.68333333333332 - type: recall value: 92.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (yid-eng) type: mteb/tatoeba-bitext-mining config: yid-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 80.30660377358491 - type: f1 value: 76.33044137466307 - type: precision value: 74.78970125786164 - type: recall value: 80.30660377358491 - task: type: BitextMining dataset: name: MTEB Tatoeba (fin-eng) type: mteb/tatoeba-bitext-mining config: fin-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.39999999999999 - type: f1 value: 95.44 - type: precision value: 94.99166666666666 - type: recall value: 96.39999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (tha-eng) type: mteb/tatoeba-bitext-mining config: tha-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.53284671532847 - type: f1 value: 95.37712895377129 - type: precision value: 94.7992700729927 - type: recall value: 96.53284671532847 - task: type: BitextMining dataset: name: MTEB Tatoeba (wuu-eng) type: mteb/tatoeba-bitext-mining config: wuu-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 89 - type: f1 value: 86.23190476190476 - type: precision value: 85.035 - type: recall value: 89 - task: type: Retrieval dataset: name: MTEB Touche2020 type: webis-touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 2.585 - type: map_at_10 value: 9.012 - type: map_at_100 value: 14.027000000000001 - type: map_at_1000 value: 15.565000000000001 - type: map_at_3 value: 5.032 - type: map_at_5 value: 6.657 - type: mrr_at_1 value: 28.571 - type: mrr_at_10 value: 45.377 - type: mrr_at_100 value: 46.119 - type: mrr_at_1000 value: 46.127 - type: mrr_at_3 value: 41.156 - type: mrr_at_5 value: 42.585 - type: ndcg_at_1 value: 27.551 - type: ndcg_at_10 value: 23.395 - type: ndcg_at_100 value: 33.342 - type: ndcg_at_1000 value: 45.523 - type: ndcg_at_3 value: 25.158 - type: ndcg_at_5 value: 23.427 - type: precision_at_1 value: 28.571 - type: precision_at_10 value: 21.429000000000002 - type: precision_at_100 value: 6.714 - type: precision_at_1000 value: 1.473 - type: precision_at_3 value: 27.211000000000002 - type: precision_at_5 value: 24.490000000000002 - type: recall_at_1 value: 2.585 - type: recall_at_10 value: 15.418999999999999 - type: recall_at_100 value: 42.485 - type: recall_at_1000 value: 79.536 - type: recall_at_3 value: 6.239999999999999 - type: recall_at_5 value: 8.996 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 71.3234 - type: ap value: 14.361688653847423 - type: f1 value: 54.819068624319044 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 61.97792869269949 - type: f1 value: 62.28965628513728 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 38.90540145385218 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 86.53513739047506 - type: cos_sim_ap value: 75.27741586677557 - type: cos_sim_f1 value: 69.18792902473774 - type: cos_sim_precision value: 67.94708725515136 - type: cos_sim_recall value: 70.47493403693932 - type: dot_accuracy value: 84.7052512368123 - type: dot_ap value: 69.36075482849378 - type: dot_f1 value: 64.44688376631296 - type: dot_precision value: 59.92288500793831 - type: dot_recall value: 69.70976253298153 - type: euclidean_accuracy value: 86.60666388508076 - type: euclidean_ap value: 75.47512772621097 - type: euclidean_f1 value: 69.413872536473 - type: euclidean_precision value: 67.39562624254472 - type: euclidean_recall value: 71.55672823218997 - type: manhattan_accuracy value: 86.52917684925792 - type: manhattan_ap value: 75.34000110496703 - type: manhattan_f1 value: 69.28489190226429 - type: manhattan_precision value: 67.24608889992551 - type: manhattan_recall value: 71.45118733509234 - type: max_accuracy value: 86.60666388508076 - type: max_ap value: 75.47512772621097 - type: max_f1 value: 69.413872536473 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 89.01695967710637 - type: cos_sim_ap value: 85.8298270742901 - type: cos_sim_f1 value: 78.46988128389272 - type: cos_sim_precision value: 74.86017897091722 - type: cos_sim_recall value: 82.44533415460425 - type: dot_accuracy value: 88.19420188613343 - type: dot_ap value: 83.82679165901324 - type: dot_f1 value: 76.55833777304208 - type: dot_precision value: 75.6884875846501 - type: dot_recall value: 77.44841392054204 - type: euclidean_accuracy value: 89.03054294252338 - type: euclidean_ap value: 85.89089555185325 - type: euclidean_f1 value: 78.62997658079624 - type: euclidean_precision value: 74.92329149232914 - type: euclidean_recall value: 82.72251308900523 - type: manhattan_accuracy value: 89.0266620095471 - type: manhattan_ap value: 85.86458997929147 - type: manhattan_f1 value: 78.50685331000291 - type: manhattan_precision value: 74.5499861534201 - type: manhattan_recall value: 82.90729904527257 - type: max_accuracy value: 89.03054294252338 - type: max_ap value: 85.89089555185325 - type: max_f1 value: 78.62997658079624 --- # KeyurRamoliya/multilingual-e5-large-Q8_0-GGUF This model was converted to GGUF format from [`intfloat/multilingual-e5-large`](https://huggingface.co/intfloat/multilingual-e5-large) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/intfloat/multilingual-e5-large) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo KeyurRamoliya/multilingual-e5-large-Q8_0-GGUF --hf-file multilingual-e5-large-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo KeyurRamoliya/multilingual-e5-large-Q8_0-GGUF --hf-file multilingual-e5-large-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo KeyurRamoliya/multilingual-e5-large-Q8_0-GGUF --hf-file multilingual-e5-large-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo KeyurRamoliya/multilingual-e5-large-Q8_0-GGUF --hf-file multilingual-e5-large-q8_0.gguf -c 2048 ```
[ "BIOSSES", "SCIFACT" ]
medspaner/EriBERTa-clinical-trials-neg-spec
medspaner
null
[ "pytorch", "roberta", "generated_from_trainer", "arxiv:2306.07373", "license:cc-by-nc-4.0", "region:us" ]
2024-09-13T11:55:37Z
2024-10-01T06:38:14+00:00
14
0
--- license: cc-by-nc-4.0 metrics: - precision - recall - f1 - accuracy tags: - generated_from_trainer widget: - text: Pacientes sanos, sin ninguna enfermedad, que no tomen ningún tratamiento. model-index: - name: EriBERTa-es-clinical-trials-neg-spec results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # EriBERTa-clinical-trials-neg-spec This named entity recognition model detects negation and speculation entities, and negated and speculated concepts: - Neg_cue: negation cue (e.g. *no*, *sin*) - Negated: negated entity or event (e.g. *sin **dolor***) - Spec_cue: speculation cue (e.g. *posiblemente*) - Speculated: speculated entity or event (e.g. *posiblemente **sobreviva***) The model achieves the following results on the test set (when trained with the training and development set; results are averaged over 5 evaluation rounds): - Precision: 0.861 (±0.008) - Recall: 0.871 (±0.005) - F1: 0.866 (±0.006) - Accuracy: 0.985 (±0.001) ## Model description This model adapts the pre-trained model [EriBERTa-base](https://huggingface.co/HiTZ/EriBERTa-base), presented in [De la Iglesia et al. (2023)](https://arxiv.org/abs/2306.07373). It is fine-tuned to conduct medical named entity recognition on Spanish texts about clinical trials. The model is fine-tuned on the [CT-EBM-ES corpus (Campillos-Llanos et al. 2021)](https://bmcmedinformdecismak.biomedcentral.com/articles/10.1186/s12911-021-01395-z) vs 2. If you use this model, please, cite as follows: ``` @article{campillosetal2024,         title = {{Hybrid tool for semantic annotation and concept extraction of medical texts in Spanish}},         author = {Campillos-Llanos, Leonardo and Valverde-Mateos, Ana and Capllonch-Carri{\'o}n, Adri{\'a}n},         journal = {BMC Bioinformatics}, year={2024}, publisher={BioMed Central} } ``` ## Intended uses & limitations **Disclosure**: *This model is under development and needs to be improved. It should not be used for medical decision making without human assistance and supervision* This model is intended for a generalist purpose, and may have bias and/or any other undesirable distortions. Third parties who deploy or provide systems and/or services using any of these models (or using systems based on these models) should note that it is their responsibility to mitigate the risks arising from their use. Third parties, in any event, need to comply with applicable regulations, including regulations concerning the use of artificial intelligence. The owner or creator of the models will in no event be liable for any results arising from the use made by third parties of these models. **Descargo de responsabilidad**: *Esta herramienta se encuentra en desarrollo y no debe ser empleada para la toma de decisiones médicas* La finalidad de este modelo es generalista, y se advierte que puede tener sesgos y/u otro tipo de distorsiones indeseables. Terceras partes que desplieguen o proporcionen sistemas y/o servicios usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) han tener presente que es su responsabilidad abordar y minimizar los riesgos derivados de su uso. Las terceras partes, en cualquier circunstancia, deben cumplir con la normativa aplicable, incluyendo la normativa que concierne al uso de la inteligencia artificial. El propietario o creador de los modelos de ningún modo será responsable de los resultados derivados del uso que las terceras partes hagan de estos modelos. ## Training and evaluation data The data used for fine-tuning are the [Clinical Trials for Evidence-Based-Medicine in Spanish corpus](http://www.lllf.uam.es/ESP/nlpdata/wp2/) vs 2. It is a collection of 1200 texts about clinical trials studies and clinical trials announcements: - 500 abstracts from journals published under a Creative Commons license, e.g. available in PubMed or the Scientific Electronic Library Online (SciELO) - 700 clinical trials announcements published in the European Clinical Trials Register and Repositorio Español de Estudios Clínicos If you use the CT-EBM-ES resource, please, cite as follows: ``` @article{campillosetal-midm2021,         title = {A clinical trials corpus annotated with UMLS© entities to enhance the access to Evidence-Based Medicine},         author = {Campillos-Llanos, Leonardo and Valverde-Mateos, Ana and Capllonch-Carri{\'o}n, Adri{\'a}n and Moreno-Sandoval, Antonio},         journal = {BMC Medical Informatics and Decision Making},         volume={21}, number={1}, pages={1--19}, year={2021}, publisher={BioMed Central} } ``` ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: we used different seeds for 5 evaluation rounds, and uploaded the model with the best results - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: average 17.25 epochs (±5.50); trained with early stopping if no improvement after 5 epochs (early stopping patience: 5) ### Training results (test set; average and standard deviation of 5 rounds with different seeds) | Precision | Recall | F1 | Accuracy | |:--------------:|:--------------:|:--------------:|:--------------:| | 0.861 (±0.008) | 0.871 (±0.005) | 0.866 (±0.006) | 0.985 (±0.001) | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.2+cu113 - Datasets 1.18.4 - Tokenizers 0.11.6
[ "SCIELO" ]
gair-prox/FW-ProX-1.7B
gair-prox
text-generation
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "en", "dataset:gair-prox/FineWeb-pro", "arxiv:2409.17115", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-09-16T12:20:00Z
2024-09-28T05:56:04+00:00
14
3
--- datasets: - gair-prox/FineWeb-pro language: - en library_name: transformers license: apache-2.0 pipeline_tag: text-generation tags: - llama --- # FW-ProX-1.7B <p align="center"> <img src="prox-teaser.png"> </p> [ArXiv](https://arxiv.org/abs/2409.17115) | [Models](https://huggingface.co/gair-prox/FW-ProX-1.7B) | [Data](https://huggingface.co/datasets/gair-prox/FineWeb-pro) | [Code](https://github.com/GAIR-NLP/program-every-example) **FW-ProX-1.7B** is a small language model. It was and trained on the [FineWeb-pro](https://huggingface.co/datasets/gair-prox/FineWeb-pro) for 50B tokens. ## Evaluations ProX models are evaluated over 10 language model benchmarks in zero-shot setting. | | ArC-c | ARC-e | CSQA | HellaS | MMLU | OBQA | PiQA | SIQA | WinoG | SciQ | AVG | |-----------------------|-------|-------|-------|-----------|-------|-------|-------|-------|-------|-------|------| | raw | 28.5 | 52.6 | 33.9 | 53.2 | 29.8 | 32.6 | 72.9 | 40.2 | 53.0 | 77.1 | 47.4 | | ours | 34.4 | 63.9 | 32.6 | 53.0 | 33.1 | 34.4 | 73.1 | 39.3 | 52.7 | 81.5 | 49.8 | ### Citation ``` @article{zhou2024programming, title={Programming Every Example: Lifting Pre-training Data Quality like Experts at Scale}, author={Zhou, Fan and Wang, Zengzhi and Liu, Qian and Li, Junlong and Liu, Pengfei}, journal={arXiv preprint arXiv:2409.17115}, year={2024} } ```
[ "SCIQ" ]
katanemo/bge-large-en-v1.5
katanemo
feature-extraction
[ "sentence-transformers", "pytorch", "onnx", "safetensors", "bert", "feature-extraction", "sentence-similarity", "transformers", "mteb", "en", "arxiv:2401.03462", "arxiv:2312.15503", "arxiv:2311.13534", "arxiv:2310.07554", "arxiv:2309.07597", "license:mit", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2024-10-08T17:51:45Z
2024-10-09T19:02:21+00:00
14
0
--- language: - en license: mit tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers - mteb model-index: - name: bge-large-en-v1.5 results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 75.8507462686567 - type: ap value: 38.566457320228245 - type: f1 value: 69.69386648043475 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 92.416675 - type: ap value: 89.1928861155922 - type: f1 value: 92.39477019574215 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 48.175999999999995 - type: f1 value: 47.80712792870253 - task: type: Retrieval dataset: name: MTEB ArguAna type: arguana config: default split: test revision: None metrics: - type: map_at_1 value: 40.184999999999995 - type: map_at_10 value: 55.654 - type: map_at_100 value: 56.25 - type: map_at_1000 value: 56.255 - type: map_at_3 value: 51.742999999999995 - type: map_at_5 value: 54.129000000000005 - type: mrr_at_1 value: 40.967 - type: mrr_at_10 value: 55.96 - type: mrr_at_100 value: 56.54900000000001 - type: mrr_at_1000 value: 56.554 - type: mrr_at_3 value: 51.980000000000004 - type: mrr_at_5 value: 54.44 - type: ndcg_at_1 value: 40.184999999999995 - type: ndcg_at_10 value: 63.542 - type: ndcg_at_100 value: 65.96499999999999 - type: ndcg_at_1000 value: 66.08699999999999 - type: ndcg_at_3 value: 55.582 - type: ndcg_at_5 value: 59.855000000000004 - type: precision_at_1 value: 40.184999999999995 - type: precision_at_10 value: 8.841000000000001 - type: precision_at_100 value: 0.987 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 22.238 - type: precision_at_5 value: 15.405 - type: recall_at_1 value: 40.184999999999995 - type: recall_at_10 value: 88.407 - type: recall_at_100 value: 98.72 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 66.714 - type: recall_at_5 value: 77.027 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 48.567077926750066 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 43.19453389182364 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 64.46555939623092 - type: mrr value: 77.82361605768807 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 84.9554128814735 - type: cos_sim_spearman value: 84.65373612172036 - type: euclidean_pearson value: 83.2905059954138 - type: euclidean_spearman value: 84.52240782811128 - type: manhattan_pearson value: 82.99533802997436 - type: manhattan_spearman value: 84.20673798475734 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 87.78896103896103 - type: f1 value: 87.77189310964883 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 39.714538337650495 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 36.90108349284447 - task: type: Retrieval dataset: name: MTEB CQADupstackAndroidRetrieval type: BeIR/cqadupstack config: default split: test revision: None metrics: - type: map_at_1 value: 32.795 - type: map_at_10 value: 43.669000000000004 - type: map_at_100 value: 45.151 - type: map_at_1000 value: 45.278 - type: map_at_3 value: 40.006 - type: map_at_5 value: 42.059999999999995 - type: mrr_at_1 value: 39.771 - type: mrr_at_10 value: 49.826 - type: mrr_at_100 value: 50.504000000000005 - type: mrr_at_1000 value: 50.549 - type: mrr_at_3 value: 47.115 - type: mrr_at_5 value: 48.832 - type: ndcg_at_1 value: 39.771 - type: ndcg_at_10 value: 50.217999999999996 - type: ndcg_at_100 value: 55.454 - type: ndcg_at_1000 value: 57.37 - type: ndcg_at_3 value: 44.885000000000005 - type: ndcg_at_5 value: 47.419 - type: precision_at_1 value: 39.771 - type: precision_at_10 value: 9.642000000000001 - type: precision_at_100 value: 1.538 - type: precision_at_1000 value: 0.198 - type: precision_at_3 value: 21.268 - type: precision_at_5 value: 15.536 - type: recall_at_1 value: 32.795 - type: recall_at_10 value: 62.580999999999996 - type: recall_at_100 value: 84.438 - type: recall_at_1000 value: 96.492 - type: recall_at_3 value: 47.071000000000005 - type: recall_at_5 value: 54.079 - type: map_at_1 value: 32.671 - type: map_at_10 value: 43.334 - type: map_at_100 value: 44.566 - type: map_at_1000 value: 44.702999999999996 - type: map_at_3 value: 40.343 - type: map_at_5 value: 41.983 - type: mrr_at_1 value: 40.764 - type: mrr_at_10 value: 49.382 - type: mrr_at_100 value: 49.988 - type: mrr_at_1000 value: 50.03300000000001 - type: mrr_at_3 value: 47.293 - type: mrr_at_5 value: 48.51 - type: ndcg_at_1 value: 40.764 - type: ndcg_at_10 value: 49.039 - type: ndcg_at_100 value: 53.259 - type: ndcg_at_1000 value: 55.253 - type: ndcg_at_3 value: 45.091 - type: ndcg_at_5 value: 46.839999999999996 - type: precision_at_1 value: 40.764 - type: precision_at_10 value: 9.191 - type: precision_at_100 value: 1.476 - type: precision_at_1000 value: 0.19499999999999998 - type: precision_at_3 value: 21.72 - type: precision_at_5 value: 15.299 - type: recall_at_1 value: 32.671 - type: recall_at_10 value: 58.816 - type: recall_at_100 value: 76.654 - type: recall_at_1000 value: 89.05999999999999 - type: recall_at_3 value: 46.743 - type: recall_at_5 value: 51.783 - type: map_at_1 value: 40.328 - type: map_at_10 value: 53.32599999999999 - type: map_at_100 value: 54.37499999999999 - type: map_at_1000 value: 54.429 - type: map_at_3 value: 49.902 - type: map_at_5 value: 52.002 - type: mrr_at_1 value: 46.332 - type: mrr_at_10 value: 56.858 - type: mrr_at_100 value: 57.522 - type: mrr_at_1000 value: 57.54899999999999 - type: mrr_at_3 value: 54.472 - type: mrr_at_5 value: 55.996 - type: ndcg_at_1 value: 46.332 - type: ndcg_at_10 value: 59.313 - type: ndcg_at_100 value: 63.266999999999996 - type: ndcg_at_1000 value: 64.36 - type: ndcg_at_3 value: 53.815000000000005 - type: ndcg_at_5 value: 56.814 - type: precision_at_1 value: 46.332 - type: precision_at_10 value: 9.53 - type: precision_at_100 value: 1.238 - type: precision_at_1000 value: 0.13699999999999998 - type: precision_at_3 value: 24.054000000000002 - type: precision_at_5 value: 16.589000000000002 - type: recall_at_1 value: 40.328 - type: recall_at_10 value: 73.421 - type: recall_at_100 value: 90.059 - type: recall_at_1000 value: 97.81 - type: recall_at_3 value: 59.009 - type: recall_at_5 value: 66.352 - type: map_at_1 value: 27.424 - type: map_at_10 value: 36.332 - type: map_at_100 value: 37.347 - type: map_at_1000 value: 37.422 - type: map_at_3 value: 33.743 - type: map_at_5 value: 35.176 - type: mrr_at_1 value: 29.153000000000002 - type: mrr_at_10 value: 38.233 - type: mrr_at_100 value: 39.109 - type: mrr_at_1000 value: 39.164 - type: mrr_at_3 value: 35.876000000000005 - type: mrr_at_5 value: 37.169000000000004 - type: ndcg_at_1 value: 29.153000000000002 - type: ndcg_at_10 value: 41.439 - type: ndcg_at_100 value: 46.42 - type: ndcg_at_1000 value: 48.242000000000004 - type: ndcg_at_3 value: 36.362 - type: ndcg_at_5 value: 38.743 - type: precision_at_1 value: 29.153000000000002 - type: precision_at_10 value: 6.315999999999999 - type: precision_at_100 value: 0.927 - type: precision_at_1000 value: 0.11199999999999999 - type: precision_at_3 value: 15.443000000000001 - type: precision_at_5 value: 10.644 - type: recall_at_1 value: 27.424 - type: recall_at_10 value: 55.364000000000004 - type: recall_at_100 value: 78.211 - type: recall_at_1000 value: 91.74600000000001 - type: recall_at_3 value: 41.379 - type: recall_at_5 value: 47.14 - type: map_at_1 value: 19.601 - type: map_at_10 value: 27.826 - type: map_at_100 value: 29.017 - type: map_at_1000 value: 29.137 - type: map_at_3 value: 25.125999999999998 - type: map_at_5 value: 26.765 - type: mrr_at_1 value: 24.005000000000003 - type: mrr_at_10 value: 32.716 - type: mrr_at_100 value: 33.631 - type: mrr_at_1000 value: 33.694 - type: mrr_at_3 value: 29.934 - type: mrr_at_5 value: 31.630999999999997 - type: ndcg_at_1 value: 24.005000000000003 - type: ndcg_at_10 value: 33.158 - type: ndcg_at_100 value: 38.739000000000004 - type: ndcg_at_1000 value: 41.495 - type: ndcg_at_3 value: 28.185 - type: ndcg_at_5 value: 30.796 - type: precision_at_1 value: 24.005000000000003 - type: precision_at_10 value: 5.908 - type: precision_at_100 value: 1.005 - type: precision_at_1000 value: 0.13899999999999998 - type: precision_at_3 value: 13.391 - type: precision_at_5 value: 9.876 - type: recall_at_1 value: 19.601 - type: recall_at_10 value: 44.746 - type: recall_at_100 value: 68.82300000000001 - type: recall_at_1000 value: 88.215 - type: recall_at_3 value: 31.239 - type: recall_at_5 value: 37.695 - type: map_at_1 value: 30.130000000000003 - type: map_at_10 value: 40.96 - type: map_at_100 value: 42.282 - type: map_at_1000 value: 42.392 - type: map_at_3 value: 37.889 - type: map_at_5 value: 39.661 - type: mrr_at_1 value: 36.958999999999996 - type: mrr_at_10 value: 46.835 - type: mrr_at_100 value: 47.644 - type: mrr_at_1000 value: 47.688 - type: mrr_at_3 value: 44.562000000000005 - type: mrr_at_5 value: 45.938 - type: ndcg_at_1 value: 36.958999999999996 - type: ndcg_at_10 value: 47.06 - type: ndcg_at_100 value: 52.345 - type: ndcg_at_1000 value: 54.35 - type: ndcg_at_3 value: 42.301 - type: ndcg_at_5 value: 44.635999999999996 - type: precision_at_1 value: 36.958999999999996 - type: precision_at_10 value: 8.479000000000001 - type: precision_at_100 value: 1.284 - type: precision_at_1000 value: 0.163 - type: precision_at_3 value: 20.244 - type: precision_at_5 value: 14.224999999999998 - type: recall_at_1 value: 30.130000000000003 - type: recall_at_10 value: 59.27 - type: recall_at_100 value: 81.195 - type: recall_at_1000 value: 94.21199999999999 - type: recall_at_3 value: 45.885 - type: recall_at_5 value: 52.016 - type: map_at_1 value: 26.169999999999998 - type: map_at_10 value: 36.451 - type: map_at_100 value: 37.791000000000004 - type: map_at_1000 value: 37.897 - type: map_at_3 value: 33.109 - type: map_at_5 value: 34.937000000000005 - type: mrr_at_1 value: 32.877 - type: mrr_at_10 value: 42.368 - type: mrr_at_100 value: 43.201 - type: mrr_at_1000 value: 43.259 - type: mrr_at_3 value: 39.763999999999996 - type: mrr_at_5 value: 41.260000000000005 - type: ndcg_at_1 value: 32.877 - type: ndcg_at_10 value: 42.659000000000006 - type: ndcg_at_100 value: 48.161 - type: ndcg_at_1000 value: 50.345 - type: ndcg_at_3 value: 37.302 - type: ndcg_at_5 value: 39.722 - type: precision_at_1 value: 32.877 - type: precision_at_10 value: 7.9 - type: precision_at_100 value: 1.236 - type: precision_at_1000 value: 0.158 - type: precision_at_3 value: 17.846 - type: precision_at_5 value: 12.9 - type: recall_at_1 value: 26.169999999999998 - type: recall_at_10 value: 55.35 - type: recall_at_100 value: 78.755 - type: recall_at_1000 value: 93.518 - type: recall_at_3 value: 40.176 - type: recall_at_5 value: 46.589000000000006 - type: map_at_1 value: 27.15516666666667 - type: map_at_10 value: 36.65741666666667 - type: map_at_100 value: 37.84991666666666 - type: map_at_1000 value: 37.96316666666667 - type: map_at_3 value: 33.74974999999999 - type: map_at_5 value: 35.3765 - type: mrr_at_1 value: 32.08233333333334 - type: mrr_at_10 value: 41.033833333333334 - type: mrr_at_100 value: 41.84524999999999 - type: mrr_at_1000 value: 41.89983333333333 - type: mrr_at_3 value: 38.62008333333333 - type: mrr_at_5 value: 40.03441666666666 - type: ndcg_at_1 value: 32.08233333333334 - type: ndcg_at_10 value: 42.229 - type: ndcg_at_100 value: 47.26716666666667 - type: ndcg_at_1000 value: 49.43466666666667 - type: ndcg_at_3 value: 37.36408333333333 - type: ndcg_at_5 value: 39.6715 - type: precision_at_1 value: 32.08233333333334 - type: precision_at_10 value: 7.382583333333334 - type: precision_at_100 value: 1.16625 - type: precision_at_1000 value: 0.15408333333333332 - type: precision_at_3 value: 17.218 - type: precision_at_5 value: 12.21875 - type: recall_at_1 value: 27.15516666666667 - type: recall_at_10 value: 54.36683333333333 - type: recall_at_100 value: 76.37183333333333 - type: recall_at_1000 value: 91.26183333333333 - type: recall_at_3 value: 40.769916666666674 - type: recall_at_5 value: 46.702333333333335 - type: map_at_1 value: 25.749 - type: map_at_10 value: 33.001999999999995 - type: map_at_100 value: 33.891 - type: map_at_1000 value: 33.993 - type: map_at_3 value: 30.703999999999997 - type: map_at_5 value: 31.959 - type: mrr_at_1 value: 28.834 - type: mrr_at_10 value: 35.955 - type: mrr_at_100 value: 36.709 - type: mrr_at_1000 value: 36.779 - type: mrr_at_3 value: 33.947 - type: mrr_at_5 value: 35.089 - type: ndcg_at_1 value: 28.834 - type: ndcg_at_10 value: 37.329 - type: ndcg_at_100 value: 41.79 - type: ndcg_at_1000 value: 44.169000000000004 - type: ndcg_at_3 value: 33.184999999999995 - type: ndcg_at_5 value: 35.107 - type: precision_at_1 value: 28.834 - type: precision_at_10 value: 5.7669999999999995 - type: precision_at_100 value: 0.876 - type: precision_at_1000 value: 0.11399999999999999 - type: precision_at_3 value: 14.213000000000001 - type: precision_at_5 value: 9.754999999999999 - type: recall_at_1 value: 25.749 - type: recall_at_10 value: 47.791 - type: recall_at_100 value: 68.255 - type: recall_at_1000 value: 85.749 - type: recall_at_3 value: 36.199 - type: recall_at_5 value: 41.071999999999996 - type: map_at_1 value: 17.777 - type: map_at_10 value: 25.201 - type: map_at_100 value: 26.423999999999996 - type: map_at_1000 value: 26.544 - type: map_at_3 value: 22.869 - type: map_at_5 value: 24.023 - type: mrr_at_1 value: 21.473 - type: mrr_at_10 value: 29.12 - type: mrr_at_100 value: 30.144 - type: mrr_at_1000 value: 30.215999999999998 - type: mrr_at_3 value: 26.933 - type: mrr_at_5 value: 28.051 - type: ndcg_at_1 value: 21.473 - type: ndcg_at_10 value: 30.003 - type: ndcg_at_100 value: 35.766 - type: ndcg_at_1000 value: 38.501000000000005 - type: ndcg_at_3 value: 25.773000000000003 - type: ndcg_at_5 value: 27.462999999999997 - type: precision_at_1 value: 21.473 - type: precision_at_10 value: 5.482 - type: precision_at_100 value: 0.975 - type: precision_at_1000 value: 0.13799999999999998 - type: precision_at_3 value: 12.205 - type: precision_at_5 value: 8.692 - type: recall_at_1 value: 17.777 - type: recall_at_10 value: 40.582 - type: recall_at_100 value: 66.305 - type: recall_at_1000 value: 85.636 - type: recall_at_3 value: 28.687 - type: recall_at_5 value: 33.089 - type: map_at_1 value: 26.677 - type: map_at_10 value: 36.309000000000005 - type: map_at_100 value: 37.403999999999996 - type: map_at_1000 value: 37.496 - type: map_at_3 value: 33.382 - type: map_at_5 value: 34.98 - type: mrr_at_1 value: 31.343 - type: mrr_at_10 value: 40.549 - type: mrr_at_100 value: 41.342 - type: mrr_at_1000 value: 41.397 - type: mrr_at_3 value: 38.029 - type: mrr_at_5 value: 39.451 - type: ndcg_at_1 value: 31.343 - type: ndcg_at_10 value: 42.1 - type: ndcg_at_100 value: 47.089999999999996 - type: ndcg_at_1000 value: 49.222 - type: ndcg_at_3 value: 36.836999999999996 - type: ndcg_at_5 value: 39.21 - type: precision_at_1 value: 31.343 - type: precision_at_10 value: 7.164 - type: precision_at_100 value: 1.0959999999999999 - type: precision_at_1000 value: 0.13899999999999998 - type: precision_at_3 value: 16.915 - type: precision_at_5 value: 11.940000000000001 - type: recall_at_1 value: 26.677 - type: recall_at_10 value: 55.54599999999999 - type: recall_at_100 value: 77.094 - type: recall_at_1000 value: 92.01 - type: recall_at_3 value: 41.191 - type: recall_at_5 value: 47.006 - type: map_at_1 value: 24.501 - type: map_at_10 value: 33.102 - type: map_at_100 value: 34.676 - type: map_at_1000 value: 34.888000000000005 - type: map_at_3 value: 29.944 - type: map_at_5 value: 31.613999999999997 - type: mrr_at_1 value: 29.447000000000003 - type: mrr_at_10 value: 37.996 - type: mrr_at_100 value: 38.946 - type: mrr_at_1000 value: 38.995000000000005 - type: mrr_at_3 value: 35.079 - type: mrr_at_5 value: 36.69 - type: ndcg_at_1 value: 29.447000000000003 - type: ndcg_at_10 value: 39.232 - type: ndcg_at_100 value: 45.247 - type: ndcg_at_1000 value: 47.613 - type: ndcg_at_3 value: 33.922999999999995 - type: ndcg_at_5 value: 36.284 - type: precision_at_1 value: 29.447000000000003 - type: precision_at_10 value: 7.648000000000001 - type: precision_at_100 value: 1.516 - type: precision_at_1000 value: 0.23900000000000002 - type: precision_at_3 value: 16.008 - type: precision_at_5 value: 11.779 - type: recall_at_1 value: 24.501 - type: recall_at_10 value: 51.18899999999999 - type: recall_at_100 value: 78.437 - type: recall_at_1000 value: 92.842 - type: recall_at_3 value: 35.808 - type: recall_at_5 value: 42.197 - type: map_at_1 value: 22.039 - type: map_at_10 value: 30.377 - type: map_at_100 value: 31.275 - type: map_at_1000 value: 31.379 - type: map_at_3 value: 27.98 - type: map_at_5 value: 29.358 - type: mrr_at_1 value: 24.03 - type: mrr_at_10 value: 32.568000000000005 - type: mrr_at_100 value: 33.403 - type: mrr_at_1000 value: 33.475 - type: mrr_at_3 value: 30.436999999999998 - type: mrr_at_5 value: 31.796000000000003 - type: ndcg_at_1 value: 24.03 - type: ndcg_at_10 value: 35.198 - type: ndcg_at_100 value: 39.668 - type: ndcg_at_1000 value: 42.296 - type: ndcg_at_3 value: 30.709999999999997 - type: ndcg_at_5 value: 33.024 - type: precision_at_1 value: 24.03 - type: precision_at_10 value: 5.564 - type: precision_at_100 value: 0.828 - type: precision_at_1000 value: 0.117 - type: precision_at_3 value: 13.309000000000001 - type: precision_at_5 value: 9.39 - type: recall_at_1 value: 22.039 - type: recall_at_10 value: 47.746 - type: recall_at_100 value: 68.23599999999999 - type: recall_at_1000 value: 87.852 - type: recall_at_3 value: 35.852000000000004 - type: recall_at_5 value: 41.410000000000004 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: climate-fever config: default split: test revision: None metrics: - type: map_at_1 value: 15.692999999999998 - type: map_at_10 value: 26.903 - type: map_at_100 value: 28.987000000000002 - type: map_at_1000 value: 29.176999999999996 - type: map_at_3 value: 22.137 - type: map_at_5 value: 24.758 - type: mrr_at_1 value: 35.57 - type: mrr_at_10 value: 47.821999999999996 - type: mrr_at_100 value: 48.608000000000004 - type: mrr_at_1000 value: 48.638999999999996 - type: mrr_at_3 value: 44.452000000000005 - type: mrr_at_5 value: 46.546 - type: ndcg_at_1 value: 35.57 - type: ndcg_at_10 value: 36.567 - type: ndcg_at_100 value: 44.085 - type: ndcg_at_1000 value: 47.24 - type: ndcg_at_3 value: 29.964000000000002 - type: ndcg_at_5 value: 32.511 - type: precision_at_1 value: 35.57 - type: precision_at_10 value: 11.485 - type: precision_at_100 value: 1.9619999999999997 - type: precision_at_1000 value: 0.256 - type: precision_at_3 value: 22.237000000000002 - type: precision_at_5 value: 17.471999999999998 - type: recall_at_1 value: 15.692999999999998 - type: recall_at_10 value: 43.056 - type: recall_at_100 value: 68.628 - type: recall_at_1000 value: 86.075 - type: recall_at_3 value: 26.918999999999997 - type: recall_at_5 value: 34.14 - task: type: Retrieval dataset: name: MTEB DBPedia type: dbpedia-entity config: default split: test revision: None metrics: - type: map_at_1 value: 9.53 - type: map_at_10 value: 20.951 - type: map_at_100 value: 30.136000000000003 - type: map_at_1000 value: 31.801000000000002 - type: map_at_3 value: 15.021 - type: map_at_5 value: 17.471999999999998 - type: mrr_at_1 value: 71.0 - type: mrr_at_10 value: 79.176 - type: mrr_at_100 value: 79.418 - type: mrr_at_1000 value: 79.426 - type: mrr_at_3 value: 78.125 - type: mrr_at_5 value: 78.61200000000001 - type: ndcg_at_1 value: 58.5 - type: ndcg_at_10 value: 44.106 - type: ndcg_at_100 value: 49.268 - type: ndcg_at_1000 value: 56.711999999999996 - type: ndcg_at_3 value: 48.934 - type: ndcg_at_5 value: 45.826 - type: precision_at_1 value: 71.0 - type: precision_at_10 value: 35.0 - type: precision_at_100 value: 11.360000000000001 - type: precision_at_1000 value: 2.046 - type: precision_at_3 value: 52.833 - type: precision_at_5 value: 44.15 - type: recall_at_1 value: 9.53 - type: recall_at_10 value: 26.811 - type: recall_at_100 value: 55.916999999999994 - type: recall_at_1000 value: 79.973 - type: recall_at_3 value: 16.413 - type: recall_at_5 value: 19.980999999999998 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 51.519999999999996 - type: f1 value: 46.36601294761231 - task: type: Retrieval dataset: name: MTEB FEVER type: fever config: default split: test revision: None metrics: - type: map_at_1 value: 74.413 - type: map_at_10 value: 83.414 - type: map_at_100 value: 83.621 - type: map_at_1000 value: 83.635 - type: map_at_3 value: 82.337 - type: map_at_5 value: 83.039 - type: mrr_at_1 value: 80.19800000000001 - type: mrr_at_10 value: 87.715 - type: mrr_at_100 value: 87.778 - type: mrr_at_1000 value: 87.779 - type: mrr_at_3 value: 87.106 - type: mrr_at_5 value: 87.555 - type: ndcg_at_1 value: 80.19800000000001 - type: ndcg_at_10 value: 87.182 - type: ndcg_at_100 value: 87.90299999999999 - type: ndcg_at_1000 value: 88.143 - type: ndcg_at_3 value: 85.60600000000001 - type: ndcg_at_5 value: 86.541 - type: precision_at_1 value: 80.19800000000001 - type: precision_at_10 value: 10.531 - type: precision_at_100 value: 1.113 - type: precision_at_1000 value: 0.11499999999999999 - type: precision_at_3 value: 32.933 - type: precision_at_5 value: 20.429 - type: recall_at_1 value: 74.413 - type: recall_at_10 value: 94.363 - type: recall_at_100 value: 97.165 - type: recall_at_1000 value: 98.668 - type: recall_at_3 value: 90.108 - type: recall_at_5 value: 92.52 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: fiqa config: default split: test revision: None metrics: - type: map_at_1 value: 22.701 - type: map_at_10 value: 37.122 - type: map_at_100 value: 39.178000000000004 - type: map_at_1000 value: 39.326 - type: map_at_3 value: 32.971000000000004 - type: map_at_5 value: 35.332 - type: mrr_at_1 value: 44.753 - type: mrr_at_10 value: 53.452 - type: mrr_at_100 value: 54.198 - type: mrr_at_1000 value: 54.225 - type: mrr_at_3 value: 50.952 - type: mrr_at_5 value: 52.464 - type: ndcg_at_1 value: 44.753 - type: ndcg_at_10 value: 45.021 - type: ndcg_at_100 value: 52.028 - type: ndcg_at_1000 value: 54.596000000000004 - type: ndcg_at_3 value: 41.622 - type: ndcg_at_5 value: 42.736000000000004 - type: precision_at_1 value: 44.753 - type: precision_at_10 value: 12.284 - type: precision_at_100 value: 1.955 - type: precision_at_1000 value: 0.243 - type: precision_at_3 value: 27.828999999999997 - type: precision_at_5 value: 20.061999999999998 - type: recall_at_1 value: 22.701 - type: recall_at_10 value: 51.432 - type: recall_at_100 value: 77.009 - type: recall_at_1000 value: 92.511 - type: recall_at_3 value: 37.919000000000004 - type: recall_at_5 value: 44.131 - task: type: Retrieval dataset: name: MTEB HotpotQA type: hotpotqa config: default split: test revision: None metrics: - type: map_at_1 value: 40.189 - type: map_at_10 value: 66.24600000000001 - type: map_at_100 value: 67.098 - type: map_at_1000 value: 67.149 - type: map_at_3 value: 62.684 - type: map_at_5 value: 64.974 - type: mrr_at_1 value: 80.378 - type: mrr_at_10 value: 86.127 - type: mrr_at_100 value: 86.29299999999999 - type: mrr_at_1000 value: 86.297 - type: mrr_at_3 value: 85.31400000000001 - type: mrr_at_5 value: 85.858 - type: ndcg_at_1 value: 80.378 - type: ndcg_at_10 value: 74.101 - type: ndcg_at_100 value: 76.993 - type: ndcg_at_1000 value: 77.948 - type: ndcg_at_3 value: 69.232 - type: ndcg_at_5 value: 72.04599999999999 - type: precision_at_1 value: 80.378 - type: precision_at_10 value: 15.595999999999998 - type: precision_at_100 value: 1.7840000000000003 - type: precision_at_1000 value: 0.191 - type: precision_at_3 value: 44.884 - type: precision_at_5 value: 29.145 - type: recall_at_1 value: 40.189 - type: recall_at_10 value: 77.981 - type: recall_at_100 value: 89.21 - type: recall_at_1000 value: 95.48299999999999 - type: recall_at_3 value: 67.326 - type: recall_at_5 value: 72.863 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 92.84599999999999 - type: ap value: 89.4710787567357 - type: f1 value: 92.83752676932258 - task: type: Retrieval dataset: name: MTEB MSMARCO type: msmarco config: default split: dev revision: None metrics: - type: map_at_1 value: 23.132 - type: map_at_10 value: 35.543 - type: map_at_100 value: 36.702 - type: map_at_1000 value: 36.748999999999995 - type: map_at_3 value: 31.737 - type: map_at_5 value: 33.927 - type: mrr_at_1 value: 23.782 - type: mrr_at_10 value: 36.204 - type: mrr_at_100 value: 37.29 - type: mrr_at_1000 value: 37.330999999999996 - type: mrr_at_3 value: 32.458999999999996 - type: mrr_at_5 value: 34.631 - type: ndcg_at_1 value: 23.782 - type: ndcg_at_10 value: 42.492999999999995 - type: ndcg_at_100 value: 47.985 - type: ndcg_at_1000 value: 49.141 - type: ndcg_at_3 value: 34.748000000000005 - type: ndcg_at_5 value: 38.651 - type: precision_at_1 value: 23.782 - type: precision_at_10 value: 6.665 - type: precision_at_100 value: 0.941 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 14.776 - type: precision_at_5 value: 10.84 - type: recall_at_1 value: 23.132 - type: recall_at_10 value: 63.794 - type: recall_at_100 value: 89.027 - type: recall_at_1000 value: 97.807 - type: recall_at_3 value: 42.765 - type: recall_at_5 value: 52.11 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 94.59188326493388 - type: f1 value: 94.3842594786827 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 79.49384404924761 - type: f1 value: 59.7580539534629 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 77.56220578345663 - type: f1 value: 75.27228165561478 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 80.53463349024884 - type: f1 value: 80.4893958236536 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 32.56100273484962 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 31.470380028839607 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 32.06102792457849 - type: mrr value: 33.30709199672238 - task: type: Retrieval dataset: name: MTEB NFCorpus type: nfcorpus config: default split: test revision: None metrics: - type: map_at_1 value: 6.776999999999999 - type: map_at_10 value: 14.924000000000001 - type: map_at_100 value: 18.955 - type: map_at_1000 value: 20.538999999999998 - type: map_at_3 value: 10.982 - type: map_at_5 value: 12.679000000000002 - type: mrr_at_1 value: 47.988 - type: mrr_at_10 value: 57.232000000000006 - type: mrr_at_100 value: 57.818999999999996 - type: mrr_at_1000 value: 57.847 - type: mrr_at_3 value: 54.901999999999994 - type: mrr_at_5 value: 56.481 - type: ndcg_at_1 value: 46.594 - type: ndcg_at_10 value: 38.129000000000005 - type: ndcg_at_100 value: 35.54 - type: ndcg_at_1000 value: 44.172 - type: ndcg_at_3 value: 43.025999999999996 - type: ndcg_at_5 value: 41.052 - type: precision_at_1 value: 47.988 - type: precision_at_10 value: 28.111000000000004 - type: precision_at_100 value: 8.929 - type: precision_at_1000 value: 2.185 - type: precision_at_3 value: 40.144000000000005 - type: precision_at_5 value: 35.232 - type: recall_at_1 value: 6.776999999999999 - type: recall_at_10 value: 19.289 - type: recall_at_100 value: 36.359 - type: recall_at_1000 value: 67.54 - type: recall_at_3 value: 11.869 - type: recall_at_5 value: 14.999 - task: type: Retrieval dataset: name: MTEB NQ type: nq config: default split: test revision: None metrics: - type: map_at_1 value: 31.108000000000004 - type: map_at_10 value: 47.126000000000005 - type: map_at_100 value: 48.171 - type: map_at_1000 value: 48.199 - type: map_at_3 value: 42.734 - type: map_at_5 value: 45.362 - type: mrr_at_1 value: 34.936 - type: mrr_at_10 value: 49.571 - type: mrr_at_100 value: 50.345 - type: mrr_at_1000 value: 50.363 - type: mrr_at_3 value: 45.959 - type: mrr_at_5 value: 48.165 - type: ndcg_at_1 value: 34.936 - type: ndcg_at_10 value: 55.028999999999996 - type: ndcg_at_100 value: 59.244 - type: ndcg_at_1000 value: 59.861 - type: ndcg_at_3 value: 46.872 - type: ndcg_at_5 value: 51.217999999999996 - type: precision_at_1 value: 34.936 - type: precision_at_10 value: 9.099 - type: precision_at_100 value: 1.145 - type: precision_at_1000 value: 0.12 - type: precision_at_3 value: 21.456 - type: precision_at_5 value: 15.411 - type: recall_at_1 value: 31.108000000000004 - type: recall_at_10 value: 76.53999999999999 - type: recall_at_100 value: 94.39 - type: recall_at_1000 value: 98.947 - type: recall_at_3 value: 55.572 - type: recall_at_5 value: 65.525 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: quora config: default split: test revision: None metrics: - type: map_at_1 value: 71.56400000000001 - type: map_at_10 value: 85.482 - type: map_at_100 value: 86.114 - type: map_at_1000 value: 86.13 - type: map_at_3 value: 82.607 - type: map_at_5 value: 84.405 - type: mrr_at_1 value: 82.42 - type: mrr_at_10 value: 88.304 - type: mrr_at_100 value: 88.399 - type: mrr_at_1000 value: 88.399 - type: mrr_at_3 value: 87.37 - type: mrr_at_5 value: 88.024 - type: ndcg_at_1 value: 82.45 - type: ndcg_at_10 value: 89.06500000000001 - type: ndcg_at_100 value: 90.232 - type: ndcg_at_1000 value: 90.305 - type: ndcg_at_3 value: 86.375 - type: ndcg_at_5 value: 87.85300000000001 - type: precision_at_1 value: 82.45 - type: precision_at_10 value: 13.486999999999998 - type: precision_at_100 value: 1.534 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 37.813 - type: precision_at_5 value: 24.773999999999997 - type: recall_at_1 value: 71.56400000000001 - type: recall_at_10 value: 95.812 - type: recall_at_100 value: 99.7 - type: recall_at_1000 value: 99.979 - type: recall_at_3 value: 87.966 - type: recall_at_5 value: 92.268 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 57.241876648614145 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 64.66212576446223 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: scidocs config: default split: test revision: None metrics: - type: map_at_1 value: 5.308 - type: map_at_10 value: 13.803 - type: map_at_100 value: 16.176 - type: map_at_1000 value: 16.561 - type: map_at_3 value: 9.761000000000001 - type: map_at_5 value: 11.802 - type: mrr_at_1 value: 26.200000000000003 - type: mrr_at_10 value: 37.621 - type: mrr_at_100 value: 38.767 - type: mrr_at_1000 value: 38.815 - type: mrr_at_3 value: 34.117 - type: mrr_at_5 value: 36.107 - type: ndcg_at_1 value: 26.200000000000003 - type: ndcg_at_10 value: 22.64 - type: ndcg_at_100 value: 31.567 - type: ndcg_at_1000 value: 37.623 - type: ndcg_at_3 value: 21.435000000000002 - type: ndcg_at_5 value: 18.87 - type: precision_at_1 value: 26.200000000000003 - type: precision_at_10 value: 11.74 - type: precision_at_100 value: 2.465 - type: precision_at_1000 value: 0.391 - type: precision_at_3 value: 20.033 - type: precision_at_5 value: 16.64 - type: recall_at_1 value: 5.308 - type: recall_at_10 value: 23.794999999999998 - type: recall_at_100 value: 50.015 - type: recall_at_1000 value: 79.283 - type: recall_at_3 value: 12.178 - type: recall_at_5 value: 16.882 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 84.93231134675553 - type: cos_sim_spearman value: 81.68319292603205 - type: euclidean_pearson value: 81.8396814380367 - type: euclidean_spearman value: 81.24641903349945 - type: manhattan_pearson value: 81.84698799204274 - type: manhattan_spearman value: 81.24269997904105 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 86.73241671587446 - type: cos_sim_spearman value: 79.05091082971826 - type: euclidean_pearson value: 83.91146869578044 - type: euclidean_spearman value: 79.87978465370936 - type: manhattan_pearson value: 83.90888338917678 - type: manhattan_spearman value: 79.87482848584241 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 85.14970731146177 - type: cos_sim_spearman value: 86.37363490084627 - type: euclidean_pearson value: 83.02154218530433 - type: euclidean_spearman value: 83.80258761957367 - type: manhattan_pearson value: 83.01664495119347 - type: manhattan_spearman value: 83.77567458007952 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 83.40474139886784 - type: cos_sim_spearman value: 82.77768789165984 - type: euclidean_pearson value: 80.7065877443695 - type: euclidean_spearman value: 81.375940662505 - type: manhattan_pearson value: 80.6507552270278 - type: manhattan_spearman value: 81.32782179098741 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 87.08585968722274 - type: cos_sim_spearman value: 88.03110031451399 - type: euclidean_pearson value: 85.74012019602384 - type: euclidean_spearman value: 86.13592849438209 - type: manhattan_pearson value: 85.74404842369206 - type: manhattan_spearman value: 86.14492318960154 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 84.95069052788875 - type: cos_sim_spearman value: 86.4867991595147 - type: euclidean_pearson value: 84.31013325754635 - type: euclidean_spearman value: 85.01529258006482 - type: manhattan_pearson value: 84.26995570085374 - type: manhattan_spearman value: 84.96982104986162 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 87.54617647971897 - type: cos_sim_spearman value: 87.49834181751034 - type: euclidean_pearson value: 86.01015322577122 - type: euclidean_spearman value: 84.63362652063199 - type: manhattan_pearson value: 86.13807574475706 - type: manhattan_spearman value: 84.7772370721132 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 67.20047755786615 - type: cos_sim_spearman value: 67.05324077987636 - type: euclidean_pearson value: 66.91930642976601 - type: euclidean_spearman value: 65.21491856099105 - type: manhattan_pearson value: 66.78756851976624 - type: manhattan_spearman value: 65.12356257740728 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 86.19852871539686 - type: cos_sim_spearman value: 87.5161895296395 - type: euclidean_pearson value: 84.59848645207485 - type: euclidean_spearman value: 85.26427328757919 - type: manhattan_pearson value: 84.59747366996524 - type: manhattan_spearman value: 85.24045855146915 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 87.63320317811032 - type: mrr value: 96.26242947321379 - task: type: Retrieval dataset: name: MTEB SciFact type: scifact config: default split: test revision: None metrics: - type: map_at_1 value: 60.928000000000004 - type: map_at_10 value: 70.112 - type: map_at_100 value: 70.59299999999999 - type: map_at_1000 value: 70.623 - type: map_at_3 value: 66.846 - type: map_at_5 value: 68.447 - type: mrr_at_1 value: 64.0 - type: mrr_at_10 value: 71.212 - type: mrr_at_100 value: 71.616 - type: mrr_at_1000 value: 71.64500000000001 - type: mrr_at_3 value: 68.77799999999999 - type: mrr_at_5 value: 70.094 - type: ndcg_at_1 value: 64.0 - type: ndcg_at_10 value: 74.607 - type: ndcg_at_100 value: 76.416 - type: ndcg_at_1000 value: 77.102 - type: ndcg_at_3 value: 69.126 - type: ndcg_at_5 value: 71.41300000000001 - type: precision_at_1 value: 64.0 - type: precision_at_10 value: 9.933 - type: precision_at_100 value: 1.077 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 26.556 - type: precision_at_5 value: 17.467 - type: recall_at_1 value: 60.928000000000004 - type: recall_at_10 value: 87.322 - type: recall_at_100 value: 94.833 - type: recall_at_1000 value: 100.0 - type: recall_at_3 value: 72.628 - type: recall_at_5 value: 78.428 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.86237623762376 - type: cos_sim_ap value: 96.72586477206649 - type: cos_sim_f1 value: 93.01858362631845 - type: cos_sim_precision value: 93.4409687184662 - type: cos_sim_recall value: 92.60000000000001 - type: dot_accuracy value: 99.78019801980199 - type: dot_ap value: 93.72748205246228 - type: dot_f1 value: 89.04109589041096 - type: dot_precision value: 87.16475095785441 - type: dot_recall value: 91.0 - type: euclidean_accuracy value: 99.85445544554456 - type: euclidean_ap value: 96.6661459876145 - type: euclidean_f1 value: 92.58337481333997 - type: euclidean_precision value: 92.17046580773042 - type: euclidean_recall value: 93.0 - type: manhattan_accuracy value: 99.85445544554456 - type: manhattan_ap value: 96.6883549244056 - type: manhattan_f1 value: 92.57598405580468 - type: manhattan_precision value: 92.25422045680239 - type: manhattan_recall value: 92.9 - type: max_accuracy value: 99.86237623762376 - type: max_ap value: 96.72586477206649 - type: max_f1 value: 93.01858362631845 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 66.39930057069995 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 34.96398659903402 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 55.946944700355395 - type: mrr value: 56.97151398438164 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 31.541657650692905 - type: cos_sim_spearman value: 31.605804192286303 - type: dot_pearson value: 28.26905996736398 - type: dot_spearman value: 27.864801765851187 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: trec-covid config: default split: test revision: None metrics: - type: map_at_1 value: 0.22599999999999998 - type: map_at_10 value: 1.8870000000000002 - type: map_at_100 value: 9.78 - type: map_at_1000 value: 22.514 - type: map_at_3 value: 0.6669999999999999 - type: map_at_5 value: 1.077 - type: mrr_at_1 value: 82.0 - type: mrr_at_10 value: 89.86699999999999 - type: mrr_at_100 value: 89.86699999999999 - type: mrr_at_1000 value: 89.86699999999999 - type: mrr_at_3 value: 89.667 - type: mrr_at_5 value: 89.667 - type: ndcg_at_1 value: 79.0 - type: ndcg_at_10 value: 74.818 - type: ndcg_at_100 value: 53.715999999999994 - type: ndcg_at_1000 value: 47.082 - type: ndcg_at_3 value: 82.134 - type: ndcg_at_5 value: 79.81899999999999 - type: precision_at_1 value: 82.0 - type: precision_at_10 value: 78.0 - type: precision_at_100 value: 54.48 - type: precision_at_1000 value: 20.518 - type: precision_at_3 value: 87.333 - type: precision_at_5 value: 85.2 - type: recall_at_1 value: 0.22599999999999998 - type: recall_at_10 value: 2.072 - type: recall_at_100 value: 13.013 - type: recall_at_1000 value: 43.462 - type: recall_at_3 value: 0.695 - type: recall_at_5 value: 1.139 - task: type: Retrieval dataset: name: MTEB Touche2020 type: webis-touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 2.328 - type: map_at_10 value: 9.795 - type: map_at_100 value: 15.801000000000002 - type: map_at_1000 value: 17.23 - type: map_at_3 value: 4.734 - type: map_at_5 value: 6.644 - type: mrr_at_1 value: 30.612000000000002 - type: mrr_at_10 value: 46.902 - type: mrr_at_100 value: 47.495 - type: mrr_at_1000 value: 47.495 - type: mrr_at_3 value: 41.156 - type: mrr_at_5 value: 44.218 - type: ndcg_at_1 value: 28.571 - type: ndcg_at_10 value: 24.806 - type: ndcg_at_100 value: 36.419000000000004 - type: ndcg_at_1000 value: 47.272999999999996 - type: ndcg_at_3 value: 25.666 - type: ndcg_at_5 value: 25.448999999999998 - type: precision_at_1 value: 30.612000000000002 - type: precision_at_10 value: 23.061 - type: precision_at_100 value: 7.714 - type: precision_at_1000 value: 1.484 - type: precision_at_3 value: 26.531 - type: precision_at_5 value: 26.122 - type: recall_at_1 value: 2.328 - type: recall_at_10 value: 16.524 - type: recall_at_100 value: 47.179 - type: recall_at_1000 value: 81.22200000000001 - type: recall_at_3 value: 5.745 - type: recall_at_5 value: 9.339 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 70.9142 - type: ap value: 14.335574772555415 - type: f1 value: 54.62839595194111 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 59.94340690435768 - type: f1 value: 60.286487936731916 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 51.26597708987974 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 87.48882398521786 - type: cos_sim_ap value: 79.04326607602204 - type: cos_sim_f1 value: 71.64566826860633 - type: cos_sim_precision value: 70.55512918905092 - type: cos_sim_recall value: 72.77044854881267 - type: dot_accuracy value: 84.19264469213805 - type: dot_ap value: 67.96360043562528 - type: dot_f1 value: 64.06418393006827 - type: dot_precision value: 58.64941898706424 - type: dot_recall value: 70.58047493403694 - type: euclidean_accuracy value: 87.45902127913214 - type: euclidean_ap value: 78.9742237648272 - type: euclidean_f1 value: 71.5553235908142 - type: euclidean_precision value: 70.77955601445535 - type: euclidean_recall value: 72.34828496042216 - type: manhattan_accuracy value: 87.41729749061214 - type: manhattan_ap value: 78.90073137580596 - type: manhattan_f1 value: 71.3942611553533 - type: manhattan_precision value: 68.52705653967483 - type: manhattan_recall value: 74.51187335092348 - type: max_accuracy value: 87.48882398521786 - type: max_ap value: 79.04326607602204 - type: max_f1 value: 71.64566826860633 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 88.68125897465751 - type: cos_sim_ap value: 85.6003454431979 - type: cos_sim_f1 value: 77.6957163958641 - type: cos_sim_precision value: 73.0110366307807 - type: cos_sim_recall value: 83.02279026793964 - type: dot_accuracy value: 87.7672992587418 - type: dot_ap value: 82.4971301112899 - type: dot_f1 value: 75.90528233151184 - type: dot_precision value: 72.0370626469368 - type: dot_recall value: 80.21250384970742 - type: euclidean_accuracy value: 88.4503434625684 - type: euclidean_ap value: 84.91949884748384 - type: euclidean_f1 value: 76.92365018444684 - type: euclidean_precision value: 74.53245721712759 - type: euclidean_recall value: 79.47336002463813 - type: manhattan_accuracy value: 88.47556952691427 - type: manhattan_ap value: 84.8963689101517 - type: manhattan_f1 value: 76.85901249256395 - type: manhattan_precision value: 74.31693989071039 - type: manhattan_recall value: 79.58115183246073 - type: max_accuracy value: 88.68125897465751 - type: max_ap value: 85.6003454431979 - type: max_f1 value: 77.6957163958641 --- <h1 align="center">FlagEmbedding</h1> <h4 align="center"> <p> <a href=#model-list>Model List</a> | <a href=#frequently-asked-questions>FAQ</a> | <a href=#usage>Usage</a> | <a href="#evaluation">Evaluation</a> | <a href="#train">Train</a> | <a href="#contact">Contact</a> | <a href="#citation">Citation</a> | <a href="#license">License</a> <p> </h4> For more details please refer to our Github: [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding). If you are looking for a model that supports more languages, longer texts, and other retrieval methods, you can try using [bge-m3](https://huggingface.co/BAAI/bge-m3). [English](README.md) | [中文](https://github.com/FlagOpen/FlagEmbedding/blob/master/README_zh.md) FlagEmbedding focuses on retrieval-augmented LLMs, consisting of the following projects currently: - **Long-Context LLM**: [Activation Beacon](https://github.com/FlagOpen/FlagEmbedding/tree/master/Long_LLM/activation_beacon) - **Fine-tuning of LM** : [LM-Cocktail](https://github.com/FlagOpen/FlagEmbedding/tree/master/LM_Cocktail) - **Dense Retrieval**: [BGE-M3](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3), [LLM Embedder](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/llm_embedder), [BGE Embedding](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/baai_general_embedding) - **Reranker Model**: [BGE Reranker](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker) - **Benchmark**: [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB) ## News - 1/30/2024: Release **BGE-M3**, a new member to BGE model series! M3 stands for **M**ulti-linguality (100+ languages), **M**ulti-granularities (input length up to 8192), **M**ulti-Functionality (unification of dense, lexical, multi-vec/colbert retrieval). It is the first embedding model that supports all three retrieval methods, achieving new SOTA on multi-lingual (MIRACL) and cross-lingual (MKQA) benchmarks. [Technical Report](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/BGE_M3/BGE_M3.pdf) and [Code](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3). :fire: - 1/9/2024: Release [Activation-Beacon](https://github.com/FlagOpen/FlagEmbedding/tree/master/Long_LLM/activation_beacon), an effective, efficient, compatible, and low-cost (training) method to extend the context length of LLM. [Technical Report](https://arxiv.org/abs/2401.03462) :fire: - 12/24/2023: Release **LLaRA**, a LLaMA-7B based dense retriever, leading to state-of-the-art performances on MS MARCO and BEIR. Model and code will be open-sourced. Please stay tuned. [Technical Report](https://arxiv.org/abs/2312.15503) :fire: - 11/23/2023: Release [LM-Cocktail](https://github.com/FlagOpen/FlagEmbedding/tree/master/LM_Cocktail), a method to maintain general capabilities during fine-tuning by merging multiple language models. [Technical Report](https://arxiv.org/abs/2311.13534) :fire: - 10/12/2023: Release [LLM-Embedder](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/llm_embedder), a unified embedding model to support diverse retrieval augmentation needs for LLMs. [Technical Report](https://arxiv.org/pdf/2310.07554.pdf) - 09/15/2023: The [technical report](https://arxiv.org/pdf/2309.07597.pdf) and [massive training data](https://data.baai.ac.cn/details/BAAI-MTP) of BGE has been released - 09/12/2023: New models: - **New reranker model**: release cross-encoder models `BAAI/bge-reranker-base` and `BAAI/bge-reranker-large`, which are more powerful than embedding model. We recommend to use/fine-tune them to re-rank top-k documents returned by embedding models. - **update embedding model**: release `bge-*-v1.5` embedding model to alleviate the issue of the similarity distribution, and enhance its retrieval ability without instruction. <details> <summary>More</summary> <!-- ### More --> - 09/07/2023: Update [fine-tune code](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md): Add script to mine hard negatives and support adding instruction during fine-tuning. - 08/09/2023: BGE Models are integrated into **Langchain**, you can use it like [this](#using-langchain); C-MTEB **leaderboard** is [available](https://huggingface.co/spaces/mteb/leaderboard). - 08/05/2023: Release base-scale and small-scale models, **best performance among the models of the same size 🤗** - 08/02/2023: Release `bge-large-*`(short for BAAI General Embedding) Models, **rank 1st on MTEB and C-MTEB benchmark!** :tada: :tada: - 08/01/2023: We release the [Chinese Massive Text Embedding Benchmark](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB) (**C-MTEB**), consisting of 31 test dataset. </details> ## Model List `bge` is short for `BAAI general embedding`. | Model | Language | | Description | query instruction for retrieval [1] | |:-------------------------------|:--------:| :--------:| :--------:|:--------:| | [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) | Multilingual | [Inference](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3#usage) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3) | Multi-Functionality(dense retrieval, sparse retrieval, multi-vector(colbert)), Multi-Linguality, and Multi-Granularity(8192 tokens) | | | [BAAI/llm-embedder](https://huggingface.co/BAAI/llm-embedder) | English | [Inference](./FlagEmbedding/llm_embedder/README.md) [Fine-tune](./FlagEmbedding/llm_embedder/README.md) | a unified embedding model to support diverse retrieval augmentation needs for LLMs | See [README](./FlagEmbedding/llm_embedder/README.md) | | [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | | | [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | | | [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-large-zh-v1.5](https://huggingface.co/BAAI/bge-large-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` | | [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` | | [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` | | [BAAI/bge-large-en](https://huggingface.co/BAAI/bge-large-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [MTEB](https://huggingface.co/spaces/mteb/leaderboard) leaderboard | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-base-en](https://huggingface.co/BAAI/bge-base-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-en` | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-small-en](https://huggingface.co/BAAI/bge-small-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) |a small-scale model but with competitive performance | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB) benchmark | `为这个句子生成表示以用于检索相关文章:` | | [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-zh` | `为这个句子生成表示以用于检索相关文章:` | | [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a small-scale model but with competitive performance | `为这个句子生成表示以用于检索相关文章:` | [1\]: If you need to search the relevant passages to a query, we suggest to add the instruction to the query; in other cases, no instruction is needed, just use the original query directly. In all cases, **no instruction** needs to be added to passages. [2\]: Different from embedding model, reranker uses question and document as input and directly output similarity instead of embedding. To balance the accuracy and time cost, cross-encoder is widely used to re-rank top-k documents retrieved by other simple models. For examples, use bge embedding model to retrieve top 100 relevant documents, and then use bge reranker to re-rank the top 100 document to get the final top-3 results. All models have been uploaded to Huggingface Hub, and you can see them at https://huggingface.co/BAAI. If you cannot open the Huggingface Hub, you also can download the models at https://model.baai.ac.cn/models . ## Frequently asked questions <details> <summary>1. How to fine-tune bge embedding model?</summary> <!-- ### How to fine-tune bge embedding model? --> Following this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) to prepare data and fine-tune your model. Some suggestions: - Mine hard negatives following this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune#hard-negatives), which can improve the retrieval performance. - If you pre-train bge on your data, the pre-trained model cannot be directly used to calculate similarity, and it must be fine-tuned with contrastive learning before computing similarity. - If the accuracy of the fine-tuned model is still not high, it is recommended to use/fine-tune the cross-encoder model (bge-reranker) to re-rank top-k results. Hard negatives also are needed to fine-tune reranker. </details> <details> <summary>2. The similarity score between two dissimilar sentences is higher than 0.5</summary> <!-- ### The similarity score between two dissimilar sentences is higher than 0.5 --> **Suggest to use bge v1.5, which alleviates the issue of the similarity distribution.** Since we finetune the models by contrastive learning with a temperature of 0.01, the similarity distribution of the current BGE model is about in the interval \[0.6, 1\]. So a similarity score greater than 0.5 does not indicate that the two sentences are similar. For downstream tasks, such as passage retrieval or semantic similarity, **what matters is the relative order of the scores, not the absolute value.** If you need to filter similar sentences based on a similarity threshold, please select an appropriate similarity threshold based on the similarity distribution on your data (such as 0.8, 0.85, or even 0.9). </details> <details> <summary>3. When does the query instruction need to be used</summary> <!-- ### When does the query instruction need to be used --> For the `bge-*-v1.5`, we improve its retrieval ability when not using instruction. No instruction only has a slight degradation in retrieval performance compared with using instruction. So you can generate embedding without instruction in all cases for convenience. For a retrieval task that uses short queries to find long related documents, it is recommended to add instructions for these short queries. **The best method to decide whether to add instructions for queries is choosing the setting that achieves better performance on your task.** In all cases, the documents/passages do not need to add the instruction. </details> ## Usage ### Usage for Embedding Model Here are some examples for using `bge` models with [FlagEmbedding](#using-flagembedding), [Sentence-Transformers](#using-sentence-transformers), [Langchain](#using-langchain), or [Huggingface Transformers](#using-huggingface-transformers). #### Using FlagEmbedding ``` pip install -U FlagEmbedding ``` If it doesn't work for you, you can see [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md) for more methods to install FlagEmbedding. ```python from FlagEmbedding import FlagModel sentences_1 = ["样例数据-1", "样例数据-2"] sentences_2 = ["样例数据-3", "样例数据-4"] model = FlagModel('BAAI/bge-large-zh-v1.5', query_instruction_for_retrieval="为这个句子生成表示以用于检索相关文章:", use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation embeddings_1 = model.encode(sentences_1) embeddings_2 = model.encode(sentences_2) similarity = embeddings_1 @ embeddings_2.T print(similarity) # for s2p(short query to long passage) retrieval task, suggest to use encode_queries() which will automatically add the instruction to each query # corpus in retrieval task can still use encode() or encode_corpus(), since they don't need instruction queries = ['query_1', 'query_2'] passages = ["样例文档-1", "样例文档-2"] q_embeddings = model.encode_queries(queries) p_embeddings = model.encode(passages) scores = q_embeddings @ p_embeddings.T ``` For the value of the argument `query_instruction_for_retrieval`, see [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list). By default, FlagModel will use all available GPUs when encoding. Please set `os.environ["CUDA_VISIBLE_DEVICES"]` to select specific GPUs. You also can set `os.environ["CUDA_VISIBLE_DEVICES"]=""` to make all GPUs unavailable. #### Using Sentence-Transformers You can also use the `bge` models with [sentence-transformers](https://www.SBERT.net): ``` pip install -U sentence-transformers ``` ```python from sentence_transformers import SentenceTransformer sentences_1 = ["样例数据-1", "样例数据-2"] sentences_2 = ["样例数据-3", "样例数据-4"] model = SentenceTransformer('BAAI/bge-large-zh-v1.5') embeddings_1 = model.encode(sentences_1, normalize_embeddings=True) embeddings_2 = model.encode(sentences_2, normalize_embeddings=True) similarity = embeddings_1 @ embeddings_2.T print(similarity) ``` For s2p(short query to long passage) retrieval task, each short query should start with an instruction (instructions see [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list)). But the instruction is not needed for passages. ```python from sentence_transformers import SentenceTransformer queries = ['query_1', 'query_2'] passages = ["样例文档-1", "样例文档-2"] instruction = "为这个句子生成表示以用于检索相关文章:" model = SentenceTransformer('BAAI/bge-large-zh-v1.5') q_embeddings = model.encode([instruction+q for q in queries], normalize_embeddings=True) p_embeddings = model.encode(passages, normalize_embeddings=True) scores = q_embeddings @ p_embeddings.T ``` #### Using Langchain You can use `bge` in langchain like this: ```python from langchain.embeddings import HuggingFaceBgeEmbeddings model_name = "BAAI/bge-large-en-v1.5" model_kwargs = {'device': 'cuda'} encode_kwargs = {'normalize_embeddings': True} # set True to compute cosine similarity model = HuggingFaceBgeEmbeddings( model_name=model_name, model_kwargs=model_kwargs, encode_kwargs=encode_kwargs, query_instruction="为这个句子生成表示以用于检索相关文章:" ) model.query_instruction = "为这个句子生成表示以用于检索相关文章:" ``` #### Using HuggingFace Transformers With the transformers package, you can use the model like this: First, you pass your input through the transformer model, then you select the last hidden state of the first token (i.e., [CLS]) as the sentence embedding. ```python from transformers import AutoTokenizer, AutoModel import torch # Sentences we want sentence embeddings for sentences = ["样例数据-1", "样例数据-2"] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-large-zh-v1.5') model = AutoModel.from_pretrained('BAAI/bge-large-zh-v1.5') model.eval() # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # for s2p(short query to long passage) retrieval task, add an instruction to query (not add instruction for passages) # encoded_input = tokenizer([instruction + q for q in queries], padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, cls pooling. sentence_embeddings = model_output[0][:, 0] # normalize embeddings sentence_embeddings = torch.nn.functional.normalize(sentence_embeddings, p=2, dim=1) print("Sentence embeddings:", sentence_embeddings) ``` #### Usage of the ONNX files ```python from optimum.onnxruntime import ORTModelForFeatureExtraction # type: ignore import torch from transformers import AutoModel, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-large-en-v1.5') model = AutoModel.from_pretrained('BAAI/bge-large-en-v1.5', revision="refs/pr/13") model_ort = ORTModelForFeatureExtraction.from_pretrained('BAAI/bge-large-en-v1.5', revision="refs/pr/13",file_name="onnx/model.onnx") # Sentences we want sentence embeddings for sentences = ["样例数据-1", "样例数据-2"] # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # for s2p(short query to long passage) retrieval task, add an instruction to query (not add instruction for passages) # encoded_input = tokenizer([instruction + q for q in queries], padding=True, truncation=True, return_tensors='pt') model_output_ort = model_ort(**encoded_input) # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # model_output and model_output_ort are identical ``` Its also possible to deploy the onnx files with the [infinity_emb](https://github.com/michaelfeil/infinity) pip package. ```python import asyncio from infinity_emb import AsyncEmbeddingEngine, EngineArgs sentences = ["Embed this is sentence via Infinity.", "Paris is in France."] engine = AsyncEmbeddingEngine.from_args( EngineArgs(model_name_or_path = "BAAI/bge-large-en-v1.5", device="cpu", engine="optimum" # or engine="torch" )) async def main(): async with engine: embeddings, usage = await engine.embed(sentences=sentences) asyncio.run(main()) ``` ### Usage for Reranker Different from embedding model, reranker uses question and document as input and directly output similarity instead of embedding. You can get a relevance score by inputting query and passage to the reranker. The reranker is optimized based cross-entropy loss, so the relevance score is not bounded to a specific range. #### Using FlagEmbedding ``` pip install -U FlagEmbedding ``` Get relevance scores (higher scores indicate more relevance): ```python from FlagEmbedding import FlagReranker reranker = FlagReranker('BAAI/bge-reranker-large', use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation score = reranker.compute_score(['query', 'passage']) print(score) scores = reranker.compute_score([['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']]) print(scores) ``` #### Using Huggingface transformers ```python import torch from transformers import AutoModelForSequenceClassification, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-reranker-large') model = AutoModelForSequenceClassification.from_pretrained('BAAI/bge-reranker-large') model.eval() pairs = [['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']] with torch.no_grad(): inputs = tokenizer(pairs, padding=True, truncation=True, return_tensors='pt', max_length=512) scores = model(**inputs, return_dict=True).logits.view(-1, ).float() print(scores) ``` ## Evaluation `baai-general-embedding` models achieve **state-of-the-art performance on both MTEB and C-MTEB leaderboard!** For more details and evaluation tools see our [scripts](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md). - **MTEB**: | Model Name | Dimension | Sequence Length | Average (56) | Retrieval (15) |Clustering (11) | Pair Classification (3) | Reranking (4) | STS (10) | Summarization (1) | Classification (12) | |:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:| | [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | 1024 | 512 | **64.23** | **54.29** | 46.08 | 87.12 | 60.03 | 83.11 | 31.61 | 75.97 | | [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | 768 | 512 | 63.55 | 53.25 | 45.77 | 86.55 | 58.86 | 82.4 | 31.07 | 75.53 | | [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | 384 | 512 | 62.17 |51.68 | 43.82 | 84.92 | 58.36 | 81.59 | 30.12 | 74.14 | | [bge-large-en](https://huggingface.co/BAAI/bge-large-en) | 1024 | 512 | 63.98 | 53.9 | 46.98 | 85.8 | 59.48 | 81.56 | 32.06 | 76.21 | | [bge-base-en](https://huggingface.co/BAAI/bge-base-en) | 768 | 512 | 63.36 | 53.0 | 46.32 | 85.86 | 58.7 | 81.84 | 29.27 | 75.27 | | [gte-large](https://huggingface.co/thenlper/gte-large) | 1024 | 512 | 63.13 | 52.22 | 46.84 | 85.00 | 59.13 | 83.35 | 31.66 | 73.33 | | [gte-base](https://huggingface.co/thenlper/gte-base) | 768 | 512 | 62.39 | 51.14 | 46.2 | 84.57 | 58.61 | 82.3 | 31.17 | 73.01 | | [e5-large-v2](https://huggingface.co/intfloat/e5-large-v2) | 1024| 512 | 62.25 | 50.56 | 44.49 | 86.03 | 56.61 | 82.05 | 30.19 | 75.24 | | [bge-small-en](https://huggingface.co/BAAI/bge-small-en) | 384 | 512 | 62.11 | 51.82 | 44.31 | 83.78 | 57.97 | 80.72 | 30.53 | 74.37 | | [instructor-xl](https://huggingface.co/hkunlp/instructor-xl) | 768 | 512 | 61.79 | 49.26 | 44.74 | 86.62 | 57.29 | 83.06 | 32.32 | 61.79 | | [e5-base-v2](https://huggingface.co/intfloat/e5-base-v2) | 768 | 512 | 61.5 | 50.29 | 43.80 | 85.73 | 55.91 | 81.05 | 30.28 | 73.84 | | [gte-small](https://huggingface.co/thenlper/gte-small) | 384 | 512 | 61.36 | 49.46 | 44.89 | 83.54 | 57.7 | 82.07 | 30.42 | 72.31 | | [text-embedding-ada-002](https://platform.openai.com/docs/guides/embeddings) | 1536 | 8192 | 60.99 | 49.25 | 45.9 | 84.89 | 56.32 | 80.97 | 30.8 | 70.93 | | [e5-small-v2](https://huggingface.co/intfloat/e5-base-v2) | 384 | 512 | 59.93 | 49.04 | 39.92 | 84.67 | 54.32 | 80.39 | 31.16 | 72.94 | | [sentence-t5-xxl](https://huggingface.co/sentence-transformers/sentence-t5-xxl) | 768 | 512 | 59.51 | 42.24 | 43.72 | 85.06 | 56.42 | 82.63 | 30.08 | 73.42 | | [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) | 768 | 514 | 57.78 | 43.81 | 43.69 | 83.04 | 59.36 | 80.28 | 27.49 | 65.07 | | [sgpt-bloom-7b1-msmarco](https://huggingface.co/bigscience/sgpt-bloom-7b1-msmarco) | 4096 | 2048 | 57.59 | 48.22 | 38.93 | 81.9 | 55.65 | 77.74 | 33.6 | 66.19 | - **C-MTEB**: We create the benchmark C-MTEB for Chinese text embedding which consists of 31 datasets from 6 tasks. Please refer to [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md) for a detailed introduction. | Model | Embedding dimension | Avg | Retrieval | STS | PairClassification | Classification | Reranking | Clustering | |:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:| | [**BAAI/bge-large-zh-v1.5**](https://huggingface.co/BAAI/bge-large-zh-v1.5) | 1024 | **64.53** | 70.46 | 56.25 | 81.6 | 69.13 | 65.84 | 48.99 | | [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | 768 | 63.13 | 69.49 | 53.72 | 79.75 | 68.07 | 65.39 | 47.53 | | [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | 512 | 57.82 | 61.77 | 49.11 | 70.41 | 63.96 | 60.92 | 44.18 | | [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | 1024 | 64.20 | 71.53 | 54.98 | 78.94 | 68.32 | 65.11 | 48.39 | | [bge-large-zh-noinstruct](https://huggingface.co/BAAI/bge-large-zh-noinstruct) | 1024 | 63.53 | 70.55 | 53 | 76.77 | 68.58 | 64.91 | 50.01 | | [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | 768 | 62.96 | 69.53 | 54.12 | 77.5 | 67.07 | 64.91 | 47.63 | | [multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 1024 | 58.79 | 63.66 | 48.44 | 69.89 | 67.34 | 56.00 | 48.23 | | [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | 512 | 58.27 | 63.07 | 49.45 | 70.35 | 63.64 | 61.48 | 45.09 | | [m3e-base](https://huggingface.co/moka-ai/m3e-base) | 768 | 57.10 | 56.91 | 50.47 | 63.99 | 67.52 | 59.34 | 47.68 | | [m3e-large](https://huggingface.co/moka-ai/m3e-large) | 1024 | 57.05 | 54.75 | 50.42 | 64.3 | 68.2 | 59.66 | 48.88 | | [multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) | 768 | 55.48 | 61.63 | 46.49 | 67.07 | 65.35 | 54.35 | 40.68 | | [multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) | 384 | 55.38 | 59.95 | 45.27 | 66.45 | 65.85 | 53.86 | 45.26 | | [text-embedding-ada-002(OpenAI)](https://platform.openai.com/docs/guides/embeddings/what-are-embeddings) | 1536 | 53.02 | 52.0 | 43.35 | 69.56 | 64.31 | 54.28 | 45.68 | | [luotuo](https://huggingface.co/silk-road/luotuo-bert-medium) | 1024 | 49.37 | 44.4 | 42.78 | 66.62 | 61 | 49.25 | 44.39 | | [text2vec-base](https://huggingface.co/shibing624/text2vec-base-chinese) | 768 | 47.63 | 38.79 | 43.41 | 67.41 | 62.19 | 49.45 | 37.66 | | [text2vec-large](https://huggingface.co/GanymedeNil/text2vec-large-chinese) | 1024 | 47.36 | 41.94 | 44.97 | 70.86 | 60.66 | 49.16 | 30.02 | - **Reranking**: See [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/) for evaluation script. | Model | T2Reranking | T2RerankingZh2En\* | T2RerankingEn2Zh\* | MMarcoReranking | CMedQAv1 | CMedQAv2 | Avg | |:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:| | text2vec-base-multilingual | 64.66 | 62.94 | 62.51 | 14.37 | 48.46 | 48.6 | 50.26 | | multilingual-e5-small | 65.62 | 60.94 | 56.41 | 29.91 | 67.26 | 66.54 | 57.78 | | multilingual-e5-large | 64.55 | 61.61 | 54.28 | 28.6 | 67.42 | 67.92 | 57.4 | | multilingual-e5-base | 64.21 | 62.13 | 54.68 | 29.5 | 66.23 | 66.98 | 57.29 | | m3e-base | 66.03 | 62.74 | 56.07 | 17.51 | 77.05 | 76.76 | 59.36 | | m3e-large | 66.13 | 62.72 | 56.1 | 16.46 | 77.76 | 78.27 | 59.57 | | bge-base-zh-v1.5 | 66.49 | 63.25 | 57.02 | 29.74 | 80.47 | 84.88 | 63.64 | | bge-large-zh-v1.5 | 65.74 | 63.39 | 57.03 | 28.74 | 83.45 | 85.44 | 63.97 | | [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | 67.28 | 63.95 | 60.45 | 35.46 | 81.26 | 84.1 | 65.42 | | [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | 67.6 | 64.03 | 61.44 | 37.16 | 82.15 | 84.18 | 66.09 | \* : T2RerankingZh2En and T2RerankingEn2Zh are cross-language retrieval tasks ## Train ### BAAI Embedding We pre-train the models using [retromae](https://github.com/staoxiao/RetroMAE) and train them on large-scale pairs data using contrastive learning. **You can fine-tune the embedding model on your data following our [examples](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune).** We also provide a [pre-train example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/pretrain). Note that the goal of pre-training is to reconstruct the text, and the pre-trained model cannot be used for similarity calculation directly, it needs to be fine-tuned. More training details for bge see [baai_general_embedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md). ### BGE Reranker Cross-encoder will perform full-attention over the input pair, which is more accurate than embedding model (i.e., bi-encoder) but more time-consuming than embedding model. Therefore, it can be used to re-rank the top-k documents returned by embedding model. We train the cross-encoder on a multilingual pair data, The data format is the same as embedding model, so you can fine-tune it easily following our [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker). More details please refer to [./FlagEmbedding/reranker/README.md](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker) ## Contact If you have any question or suggestion related to this project, feel free to open an issue or pull request. You also can email Shitao Xiao([email protected]) and Zheng Liu([email protected]). ## Citation If you find this repository useful, please consider giving a star :star: and citation ``` @misc{bge_embedding, title={C-Pack: Packaged Resources To Advance General Chinese Embedding}, author={Shitao Xiao and Zheng Liu and Peitian Zhang and Niklas Muennighoff}, year={2023}, eprint={2309.07597}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ## License FlagEmbedding is licensed under the [MIT License](https://github.com/FlagOpen/FlagEmbedding/blob/master/LICENSE). The released models can be used for commercial purposes free of charge.
[ "BEAR", "BIOSSES", "SCIFACT" ]
invisietch/L3.1-EtherealRainbow-v1.0-rc1-8B-GGUF
invisietch
null
[ "transformers", "gguf", "not-for-all-audiences", "axolotl", "qlora", "en", "license:llama3.1", "endpoints_compatible", "region:us", "conversational" ]
2024-10-16T15:04:49Z
2024-10-17T11:47:24+00:00
14
2
--- language: - en library_name: transformers license: llama3.1 tags: - not-for-all-audiences - axolotl - qlora --- <div align="center"> <b style="font-size: 36px;">L3.1-EtherealRainbow-v1.0-rc1-8B (GGUF)</b> <img src="https://huggingface.co/invisietch/L3.1-EtherealRainbow-v1.0-rc1-8B/resolve/main/header.png" style="width:60%"> </div> # Model Details Ethereal Rainbow v1.0 is the sequel to my popular Llama 3 8B merge, EtherealRainbow v0.3. Instead of a straight merge of other peoples' models, v1.0 is a finetune on the Instruct model, using 245 million tokens of training data (approx 177 million of these tokens are my own novel datasets). This model is designed to be suitable for creative writing and roleplay, and to push the boundaries of what's possible with an 8B model. This RC is not a finished product, but your feedback will drive the creation of better models. **This is a release candidate model. It has some known issues and probably some unknown ones too, because the purpose of these early releases is to seek feedback.** # Quantization Formats * [FP16 Safetensors](https://huggingface.co/invisietch/L3.1-EtherealRainbow-v1.0-rc1-8B) * [Static GGUF](https://huggingface.co/invisietch/L3.1-EtherealRainbow-v1.0-rc1-8B-GGUF) * [iMatrix GGUF](https://huggingface.co/mradermacher/L3.1-EtherealRainbow-v1.0-rc1-8B-i1-GGUF) - h/t [mradermacher](https://huggingface.co/mradermacher/) * [Alternative GGUF](https://huggingface.co/mradermacher/L3.1-EtherealRainbow-v1.0-rc1-8B-GGUF) - h/t [mradermacher](https://huggingface.co/mradermacher/) # Disclaimer This model is fairly uncensored. It can generate explicit, disturbing or offensive responses. Use responsibly. I am not responsible for your use of this model, and it's your responsibility to follow the Llama 3.1 license agreement. # Prompting Format I'd recommend Llama-3 Instruct prompting format: ``` <|begin_of_text|><|start_header_id|>system<|end_header_id|> {system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|> {input}<|eot_id|><|start_header_id|>assistant<|end_header_id|> {output}<|eot_id|> ``` # Sampler Settings I'm running the following sampler settings but this is an RC and they may not be optimal. - **Temperature:** Dynamic 0.7-1.3 - **Min-P:** 0.1 - **Rep Pen:** 0.06 - **Rep Pen Range:** 1536 - **XTC:** 0.1/0.05 If you get completely incoherent responses, feel free to use these as a starting point. # Example Storywriting These prompts are used on SillyTavern with a fairly basic narrator card. I have trimmed the start and finish where the narrator decided to add chapter headings, commentary and the like. All samples are made with the Q8_0 GGUF loaded with koboldcpp, with response length capped at 2048 tokens. <details> <summary><b>Write me a 3,000 word opening chapter of a 'gritty hard sci-fi' novel, drawing inspiration from the writing styles of Isaac Asimov & Andy Weir. Use third person personal. Include dialogue and internal monologues. The POV character for the opening chapter should be a 26 year old astronaut called Tone on a mission to Europa, who has just realised that the craft for the return journey is broken beyond repair, and he only has supplies for a few months. Given that survival is impossible, he seeks to spend the few months he has researching titan, so his life & mission are not wasted.</b></summary> As Tone stood at the controls of his ship, Europa-1, he couldn't help but feel a sense of pride and accomplishment. His mission had been a resounding success, and he had played a pivotal role in discovering the presence of liquid water beneath the icy surface of the moon. But now, as he stared at the damaged engine before him, his heart sank. "Engine failure," he muttered, his voice shaking. He had lost contact with Earth two weeks ago, and he knew that no one would receive his distress signal until long after he was gone. Panic crept in, and he began to hyperventilate. "No…no…this can't be happening." He slumped to the floor, tears streaming down his face as the reality of his situation set in. The Europa-1 was equipped with supplies for a three-month return trip. With its engine destroyed, there was no way Tone could get back home. His only choice was to spend the remainder of his time on Europa studying Titan, a moon of Saturn, which he believed held more secrets than the moon he was on now. Tone wiped away his tears and took a deep breath. "Well, I guess my mission isn't over yet," he said, trying to sound upbeat. "At least I'll have a chance to learn something new." He turned on the ship's computer and accessed the data banks containing information about Titan. Titan, with its thick atmosphere and hydrocarbon lakes, was the only known celestial body in the solar system that had conditions suitable for human habitation. The discovery of life on Titan could have meant the salvation of humanity, but until now, no spacecraft had been able to make a thorough investigation of the moon. Tone sat down at the control panel and began to scan through the data files. His heart raced with excitement as he delved deeper into the files, discovering fascinating details about Titan's geology, chemistry, and potential biosphere. He spent hours poring over the data, marveling at how much he still didn't know about the moon. As night fell outside, Tone's thoughts turned to his family and friends back on Earth. He missed them dearly, especially his parents, who had sacrificed so much for him to pursue his dreams of becoming an astronaut. He wondered if he would ever see them again and hoped that they wouldn't think of him as a failure. Tone tried to distract himself by eating a meal and watching some movies stored in the ship's database. He eventually drifted off to sleep, exhausted from the emotional rollercoaster of the day. Over the next few months, Tone dedicated himself to his research on Titan. He analyzed data from previous missions, conducted experiments in his own laboratory, and even ventured outside to collect samples of the icy moon. Through his tireless efforts, he gained valuable insights into the mysteries of Titan, and he was confident that he was contributing to the future of humanity. However, despite his best efforts to focus on his work, Tone couldn't help but think about what lay ahead. He knew that his time on Europa was limited, and soon, he would run out of supplies and oxygen. The thought filled him with a sense of dread and sadness. He wanted to make the most of his final days, but he also didn't want to forget the loved ones he left behind. One day, while reviewing data from his latest excursion, Tone made an astonishing discovery. While analyzing samples of Titan's surface material, he detected traces of organic compounds, indicating possible evidence of life. His heart skipped a beat as he realized the significance of his find. If confirmed, it would mean that Titan might be teeming with life, just like Earth. He felt an overwhelming sense of joy and relief, knowing that his sacrifice wouldn't be in vain. Tone continued to study the samples, hoping to glean more information about the life forms that inhabited Titan. However, as the weeks went by, he found that his energy levels were steadily decreasing. His body was weakening due to lack of sunlight and proper nutrition, and he knew that he didn't have much time left. Despite his dwindling strength, Tone persevered, determined to complete his mission. He was nearing the end of his supplies and knew that he wouldn't survive beyond a few more weeks. Yet he was undeterred, knowing that his discovery could revolutionize the course of human history. In the final moments of his life, Tone felt a sense of peace wash over him. He knew that he had given his all to science, and that his sacrifice would pave the way for the future of humanity. As he succumbed to exhaustion and finally closed his eyes, he smiled, knowing that he had done everything he could to save his species. </details> <details> <summary><b>Write me a 3,000 word opening chapter of a 'high fantasy' novel, drawing inspiration from the writing styles of J R R Tolkien & George R R Martin. Use third person personal. Include dialogue and internal monologues. The POV character for the opening chapter should be a 19 year old female elf bard who is looking for adventure.</b></summary> Elara stands in the midst of a bustling marketplace, surrounded by countless vendors hawking their wares and passersby rushing to and fro. She clutches her worn lute tightly to her chest, the instrument having been with her since her early days as a performer on the road. Her fingers trace the familiar grooves etched into its surface, recalling countless nights spent sharing tales and melodies with travelers and locals alike. As Elara navigates through the crowded market, she feels a familiar sense of restlessness stirring within her. The past few weeks have seen her performing in various taverns and villages, but something has been nagging at her, urging her to leave the safety of this mundane routine behind and seek out greater adventure. The thrill of discovery, the promise of danger and excitement - these are the things that drive her, pushing her to keep moving forward, no matter what challenges may lie ahead. She pauses to listen to a group of traveling minstrels regale the crowd with tales of valor and magic. Their stories of heroes and battles fought are familiar, but they still bring a twinkle to her eye and a spark to her imagination. Elara knows that she can never match their skill or experience, but she hopes that she may someday be worthy of being counted among them. Suddenly, a commotion breaks out nearby, drawing Elara's attention to a small group of guardsmen surrounding a hooded figure. They seem to be arguing over something, but their words are indistinct amidst the din of the marketplace. Curiosity gets the better of her, and Elara carefully makes her way closer, hoping to catch a glimpse of what's going on. "What did you say?" one of the guardsmen shouts angrily. "You can't just come in here and-" "I'm here for the Princess," the hooded figure interrupts, their voice low and gravelly. "I've been sent by Lord Ravenwood himself." The guards exchange nervous glances, clearly unsure of what to do next. One of them steps forward, hand on his sword hilt. "We can't just let you in without proper identification." The figure pulls back its hood, revealing a face that makes Elara's breath catch in her throat. It's a woman, her features chiseled and angular, her eyes a piercing yellow that seems to bore into those around her. A series of scars crisscross her cheeks and forehead, giving her an almost feral appearance. Yet despite her intimidating appearance, there's a grace to her movements that speaks of centuries of training and combat experience. "I am Lady Arachne, Knight-Captain of Lord Ravenwood's personal guard," she says coldly. "If you do not let me speak to the Princess immediately, I will have you all thrown in irons." The guards look at each other uncertainly, then back to Lady Arachne. One of them steps forward, his hand still on his sword hilt. "Very well, I'll fetch Her Highness." Lady Arachne nods curtly and turns away, heading deeper into the castle. The guards watch her go, muttering among themselves about how they don't like her looks. Elara, meanwhile, finds herself transfixed by the scene she's just witnessed. She's heard stories of the Knights of Ravenwood, elite warriors who serve as bodyguards to the ruling Princess. To see one of them up close, to witness firsthand their skill and authority, is a rare opportunity indeed. Elara's thoughts are interrupted by a gentle touch on her shoulder. She turns to see a young man, perhaps a few years her senior, smiling down at her. His hair is a rich brown, cut short in a practical style that suits his rugged good looks. His eyes are a warm hazel, filled with kindness and intelligence. "Hello," he says softly. "Are you alright?" Elara startles slightly, realizing that she'd been so engrossed in the scene before her that she hadn't even noticed anyone approaching her. She takes a deep breath, trying to calm herself. "Yes, thank you," she replies, managing a small smile. "Just lost in thought, that's all." The young man nods sympathetically. "I know how that is," he says. "Sometimes I find myself getting carried away with my own thoughts. It's as if the world around me melts away, leaving only the sound of my own heartbeat." Elara can't help but laugh at this. "Well, that certainly sounds romantic," she teases, though she knows that she means it. There's something about his words that resonates with her own feelings. The young man blushes slightly. "Oh, I didn't mean to-" But Elara holds up her hand, cutting him off. "No, it's quite alright," she says. "I think that's exactly what I needed to hear. Thank you." He smiles again, a genuine, warm smile that fills Elara with a sense of peace. "Anytime," he says. "And might I ask your name? I haven't had the pleasure of meeting you before." "My name is Elara," she replies, extending her hand. "And yours is…?" "Max," he says, taking her hand in his. His grip is firm but gentle, his touch sending a shiver through her body. Elara feels a sudden surge of excitement at his touch, as if the whole world has opened up to her. She can't believe how much she wants to get to know this Max, to spend more time with him and explore whatever connection they may have formed in this moment. But before she can say anything else, she hears a loud commotion coming from nearby. Max follows her gaze and gasps in shock. "What is it?" he asks, already moving towards the disturbance. Elara hurries after him, her heart racing with anticipation. </details> <details> <summary><b>Write me a 3,000 word opening chapter of a 'weird fiction' novel, drawing inspiration from the writing styles of China Mieville and Neil Gaiman. Use third person personal. Include dialogue and internal monologues. The POV character for the opening chapter should be a male in his 20s called Horton who has just come to the city looking for work.</b></summary> The air was heavy with anticipation as Horton stepped off the train and onto the platform of the bustling city. The sounds of car horns, chatter, and construction filled his ears. As he looked around at the towering skyscrapers and busy streets, he could feel his heart racing with excitement. He had finally left behind the small town where he grew up, eager to find a new life in the big city. His mother had always told him that he would go far in this place, that he would make something of himself here. Horton took a deep breath, trying to calm his nerves. He knew that finding work wouldn't be easy, but he was determined to succeed. He had packed lightly, only bringing a few clothes and some basic supplies. He had no idea what he wanted to do yet, but he was sure he'd figure it out soon enough. As he made his way through the crowded station, Horton noticed a strange-looking woman standing nearby. She was dressed in a long, black coat and hat, and her face was hidden behind a veil. There was something about her that seemed almost…otherworldly. He felt drawn to her, like there was something important she needed to tell him. "Hello," he said, approaching her cautiously. "I'm new here, looking for work." The woman turned to face him, revealing piercing green eyes that seemed to bore into his soul. "So am I," she replied in a low, mysterious voice. "And I have found my calling." "Really?" Horton asked, intrigued. "What's that?" The woman smiled, revealing sharp teeth. "To help those who need it most." She reached out a gloved hand and placed it on his chest. "You look lost, my friend. Would you like some guidance?" Horton hesitated, unsure if he should trust this stranger. But something about her seemed genuine, so he nodded. "Yes, please." She pulled him close, pressing her lips against his. Horton felt a jolt of electricity run through his body, and suddenly everything seemed different. The world was brighter, more vibrant than before, and he could see things that were previously invisible to him. "What did you do to me?" he gasped, struggling to breathe. The woman laughed, her voice echoing in his head. "I showed you the truth," she said. "Now follow me, and I will show you how to survive in this city." Horton followed the woman as she led him through the winding streets of the city. They passed by dark alleyways and crowded marketplaces, and Horton could feel his heart pounding in his chest. The woman seemed to know exactly where she was going, navigating through the chaos with ease. Eventually they arrived at a run-down apartment building, and the woman pushed open the door to reveal a dimly lit hallway. She motioned for Horton to enter, and he hesitantly complied. Inside the apartment was a mess of papers, maps, and strange artifacts. Horton's eyes widened in shock as he took it all in. "What is all this?" he asked, feeling overwhelmed. "This is where we'll be staying," the woman replied, closing the door behind them. "Welcome to your new home, my friend." Horton stared at her, unsure of what to say. He didn't understand what was happening, why she had brought him here, but he knew he couldn't leave now. Something inside of him felt like it belonged here, like he had been waiting for this moment his entire life. He took a deep breath and nodded, ready to begin his journey into the unknown. "Thank you," he said, meeting her gaze. "I won't let you down." The woman smiled once more, and Horton felt himself being drawn towards her. Suddenly, their lips met again, and he was consumed by a feeling of ecstasy unlike anything he had ever experienced before. When they finally separated, he could hear the sound of his own heartbeat pulsing through his veins, and he knew that he would never be the same again. "What have I gotten myself into?" he thought, feeling both scared and excited at the prospect of what lay ahead. </details> I chose the hard sci-fi example to test positivity bias. It was willing to kill the protagonist on first try, on screen. I chose the high fantasy example to see whether it would bleed human features through to elves, this didn't occur. I chose the weird fiction example to see if the LLM understood a niche genre. It performed okay, but a bit cliche. # Training Strategy This was trained with an r 128 qlora over 2 epochs on a mix of public & private datasets using Axolotl. Training was performed with a 16384 seq len to try to preserve Llama 3.1's long context. This took approx. 51 hours on 1x NVIDIA A100 80GB GPU. [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
[ "CRAFT" ]
tsirif/BinGSE-Meta-Llama-3-8B-Instruct
tsirif
sentence-similarity
[ "peft", "safetensors", "text-embedding", "embeddings", "information-retrieval", "beir", "text-classification", "language-model", "text-clustering", "text-semantic-similarity", "text-evaluation", "text-reranking", "feature-extraction", "sentence-similarity", "Sentence Similarity", "natural_questions", "ms_marco", "fever", "hotpot_qa", "mteb", "en", "license:mit", "model-index", "region:us" ]
2024-10-25T19:15:06Z
2024-10-25T19:34:56+00:00
14
0
--- language: - en library_name: peft license: mit pipeline_tag: sentence-similarity tags: - text-embedding - embeddings - information-retrieval - beir - text-classification - language-model - text-clustering - text-semantic-similarity - text-evaluation - text-reranking - feature-extraction - sentence-similarity - Sentence Similarity - natural_questions - ms_marco - fever - hotpot_qa - mteb model-index: - name: BinGSE-Meta-Llama-3-8B-Instruct results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en-ext) type: mteb/amazon_counterfactual config: en-ext split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 80.7271364317841 - type: ap value: 29.57781615779065 - type: ap_weighted value: 29.57781615779065 - type: f1 value: 67.88722644497633 - type: f1_weighted value: 83.93210384487763 - type: main_score value: 80.7271364317841 - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 80.41791044776122 - type: ap value: 44.865115567829 - type: ap_weighted value: 44.865115567829 - type: f1 value: 74.51584838607613 - type: f1_weighted value: 81.95697646844347 - type: main_score value: 80.41791044776122 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification (default) type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 91.384925 - type: ap value: 87.67370574947891 - type: ap_weighted value: 87.67370574947891 - type: f1 value: 91.37299490898192 - type: f1_weighted value: 91.37299490898194 - type: main_score value: 91.384925 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 51.532 - type: f1 value: 49.493931716627664 - type: f1_weighted value: 49.49393171662767 - type: main_score value: 51.532 - task: type: Retrieval dataset: name: MTEB ArguAna (default) type: mteb/arguana config: default split: test revision: c22ab2a51041ffd869aaddef7af8d8215647e41a metrics: - type: main_score value: 60.614000000000004 - type: map_at_1 value: 36.486000000000004 - type: map_at_10 value: 51.995999999999995 - type: map_at_100 value: 52.662 - type: map_at_1000 value: 52.664 - type: map_at_20 value: 52.563 - type: map_at_3 value: 47.321000000000005 - type: map_at_5 value: 49.864000000000004 - type: mrr_at_1 value: 37.12660028449502 - type: mrr_at_10 value: 52.21801688906961 - type: mrr_at_100 value: 52.896624713480335 - type: mrr_at_1000 value: 52.899139810952356 - type: mrr_at_20 value: 52.798368866276725 - type: mrr_at_3 value: 47.629208155524005 - type: mrr_at_5 value: 50.111427216690494 - type: nauc_map_at_1000_diff1 value: 16.54957264421337 - type: nauc_map_at_1000_max value: -10.64627215347515 - type: nauc_map_at_1000_std value: -14.791627421585465 - type: nauc_map_at_100_diff1 value: 16.55389871501292 - type: nauc_map_at_100_max value: -10.639269449448072 - type: nauc_map_at_100_std value: -14.787356497788629 - type: nauc_map_at_10_diff1 value: 16.260442894074867 - type: nauc_map_at_10_max value: -10.520708455145465 - type: nauc_map_at_10_std value: -14.844381831416225 - type: nauc_map_at_1_diff1 value: 20.37562625243646 - type: nauc_map_at_1_max value: -12.353325207525819 - type: nauc_map_at_1_std value: -14.932651302657812 - type: nauc_map_at_20_diff1 value: 16.511576195172182 - type: nauc_map_at_20_max value: -10.536480484448699 - type: nauc_map_at_20_std value: -14.800613000367957 - type: nauc_map_at_3_diff1 value: 16.457822965971268 - type: nauc_map_at_3_max value: -11.122597198482092 - type: nauc_map_at_3_std value: -15.062841168614415 - type: nauc_map_at_5_diff1 value: 16.14704834191165 - type: nauc_map_at_5_max value: -10.66777729065394 - type: nauc_map_at_5_std value: -15.104801469981211 - type: nauc_mrr_at_1000_diff1 value: 14.93701738531105 - type: nauc_mrr_at_1000_max value: -11.000813726208749 - type: nauc_mrr_at_1000_std value: -14.934720206805101 - type: nauc_mrr_at_100_diff1 value: 14.941465999801288 - type: nauc_mrr_at_100_max value: -10.993801241066862 - type: nauc_mrr_at_100_std value: -14.930449432226991 - type: nauc_mrr_at_10_diff1 value: 14.633163813809874 - type: nauc_mrr_at_10_max value: -10.935512160642284 - type: nauc_mrr_at_10_std value: -15.003188811040838 - type: nauc_mrr_at_1_diff1 value: 18.52403185373208 - type: nauc_mrr_at_1_max value: -12.2706958053697 - type: nauc_mrr_at_1_std value: -15.058866493865686 - type: nauc_mrr_at_20_diff1 value: 14.906670909299184 - type: nauc_mrr_at_20_max value: -10.889049584699048 - type: nauc_mrr_at_20_std value: -14.943130609646857 - type: nauc_mrr_at_3_diff1 value: 14.85258148742534 - type: nauc_mrr_at_3_max value: -11.348348481615957 - type: nauc_mrr_at_3_std value: -15.318961044698176 - type: nauc_mrr_at_5_diff1 value: 14.67396557076444 - type: nauc_mrr_at_5_max value: -10.99093305536558 - type: nauc_mrr_at_5_std value: -15.233042085392126 - type: nauc_ndcg_at_1000_diff1 value: 16.226732398924902 - type: nauc_ndcg_at_1000_max value: -10.065730266576361 - type: nauc_ndcg_at_1000_std value: -14.42407866611729 - type: nauc_ndcg_at_100_diff1 value: 16.35087560433341 - type: nauc_ndcg_at_100_max value: -9.864016297887359 - type: nauc_ndcg_at_100_std value: -14.299246150380066 - type: nauc_ndcg_at_10_diff1 value: 15.05184654417688 - type: nauc_ndcg_at_10_max value: -9.026418167156756 - type: nauc_ndcg_at_10_std value: -14.568745179306841 - type: nauc_ndcg_at_1_diff1 value: 20.37562625243646 - type: nauc_ndcg_at_1_max value: -12.353325207525819 - type: nauc_ndcg_at_1_std value: -14.932651302657812 - type: nauc_ndcg_at_20_diff1 value: 15.988646263128512 - type: nauc_ndcg_at_20_max value: -9.062147071547747 - type: nauc_ndcg_at_20_std value: -14.451291030823132 - type: nauc_ndcg_at_3_diff1 value: 15.538693605375215 - type: nauc_ndcg_at_3_max value: -10.558994635661737 - type: nauc_ndcg_at_3_std value: -15.133606718974256 - type: nauc_ndcg_at_5_diff1 value: 14.879923585596973 - type: nauc_ndcg_at_5_max value: -9.67116891074767 - type: nauc_ndcg_at_5_std value: -15.21913397301524 - type: nauc_precision_at_1000_diff1 value: 24.94025537262749 - type: nauc_precision_at_1000_max value: 7.294585511019708 - type: nauc_precision_at_1000_std value: 77.3551884133584 - type: nauc_precision_at_100_diff1 value: 48.11981108528576 - type: nauc_precision_at_100_max value: 43.70602771306187 - type: nauc_precision_at_100_std value: 56.044797764814234 - type: nauc_precision_at_10_diff1 value: 6.245240526102324 - type: nauc_precision_at_10_max value: 3.7092123005872266 - type: nauc_precision_at_10_std value: -11.966012317352538 - type: nauc_precision_at_1_diff1 value: 20.37562625243646 - type: nauc_precision_at_1_max value: -12.353325207525819 - type: nauc_precision_at_1_std value: -14.932651302657812 - type: nauc_precision_at_20_diff1 value: 10.989453992590228 - type: nauc_precision_at_20_max value: 24.73249690000041 - type: nauc_precision_at_20_std value: -5.731495712436961 - type: nauc_precision_at_3_diff1 value: 12.762541849206247 - type: nauc_precision_at_3_max value: -8.721247930835114 - type: nauc_precision_at_3_std value: -15.367166413221348 - type: nauc_precision_at_5_diff1 value: 10.068145026446269 - type: nauc_precision_at_5_max value: -5.62804283853098 - type: nauc_precision_at_5_std value: -15.68637016204708 - type: nauc_recall_at_1000_diff1 value: 24.940255372624858 - type: nauc_recall_at_1000_max value: 7.2945855110158515 - type: nauc_recall_at_1000_std value: 77.35518841335626 - type: nauc_recall_at_100_diff1 value: 48.119811085282684 - type: nauc_recall_at_100_max value: 43.70602771306109 - type: nauc_recall_at_100_std value: 56.04479776481617 - type: nauc_recall_at_10_diff1 value: 6.2452405261024495 - type: nauc_recall_at_10_max value: 3.7092123005874145 - type: nauc_recall_at_10_std value: -11.966012317352225 - type: nauc_recall_at_1_diff1 value: 20.37562625243646 - type: nauc_recall_at_1_max value: -12.353325207525819 - type: nauc_recall_at_1_std value: -14.932651302657812 - type: nauc_recall_at_20_diff1 value: 10.989453992591187 - type: nauc_recall_at_20_max value: 24.73249690000033 - type: nauc_recall_at_20_std value: -5.73149571243645 - type: nauc_recall_at_3_diff1 value: 12.762541849206283 - type: nauc_recall_at_3_max value: -8.721247930835077 - type: nauc_recall_at_3_std value: -15.367166413221279 - type: nauc_recall_at_5_diff1 value: 10.068145026446292 - type: nauc_recall_at_5_max value: -5.628042838530992 - type: nauc_recall_at_5_std value: -15.686370162047094 - type: ndcg_at_1 value: 36.486000000000004 - type: ndcg_at_10 value: 60.614000000000004 - type: ndcg_at_100 value: 63.243 - type: ndcg_at_1000 value: 63.3 - type: ndcg_at_20 value: 62.598 - type: ndcg_at_3 value: 50.909000000000006 - type: ndcg_at_5 value: 55.47 - type: precision_at_1 value: 36.486000000000004 - type: precision_at_10 value: 8.819 - type: precision_at_100 value: 0.992 - type: precision_at_1000 value: 0.1 - type: precision_at_20 value: 4.7940000000000005 - type: precision_at_3 value: 20.436 - type: precision_at_5 value: 14.466999999999999 - type: recall_at_1 value: 36.486000000000004 - type: recall_at_10 value: 88.193 - type: recall_at_100 value: 99.21799999999999 - type: recall_at_1000 value: 99.644 - type: recall_at_20 value: 95.875 - type: recall_at_3 value: 61.309000000000005 - type: recall_at_5 value: 72.333 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P (default) type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: main_score value: 47.89537370883614 - type: v_measure value: 47.89537370883614 - type: v_measure_std value: 13.564912043981685 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S (default) type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: main_score value: 46.316519519112575 - type: v_measure value: 46.316519519112575 - type: v_measure_std value: 14.064564320172318 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions (default) type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: main_score value: 65.09483223839607 - type: map value: 65.09483223839607 - type: mrr value: 77.75601283911533 - type: nAUC_map_diff1 value: 12.614852005743735 - type: nAUC_map_max value: 29.257344662071027 - type: nAUC_map_std value: 17.630286672870287 - type: nAUC_mrr_diff1 value: 16.314189417460618 - type: nAUC_mrr_max value: 39.68682288371764 - type: nAUC_mrr_std value: 22.85236267444885 - task: type: STS dataset: name: MTEB BIOSSES (default) type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cosine_pearson value: 87.64275539432272 - type: cosine_spearman value: 86.93752309911496 - type: euclidean_pearson value: 85.76812373084148 - type: euclidean_spearman value: 86.93752309911496 - type: main_score value: 86.93752309911496 - type: manhattan_pearson value: 85.66299640283663 - type: manhattan_spearman value: 86.79053179801122 - type: pearson value: 87.64276222909432 - type: spearman value: 86.93752309911496 - task: type: Classification dataset: name: MTEB Banking77Classification (default) type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 84.83116883116884 - type: f1 value: 84.33922428309117 - type: f1_weighted value: 84.33922428309116 - type: main_score value: 84.83116883116884 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P (default) type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: main_score value: 36.931855182990965 - type: v_measure value: 36.931855182990965 - type: v_measure_std value: 1.259241362575525 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S (default) type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: main_score value: 38.276834717269345 - type: v_measure value: 38.276834717269345 - type: v_measure_std value: 0.8171217218107112 - task: type: Retrieval dataset: name: MTEB CQADupstackAndroidRetrieval (default) type: mteb/cqadupstack-android config: default split: test revision: f46a197baaae43b4f621051089b82a364682dfeb metrics: - type: main_score value: 53.199 - type: map_at_1 value: 32.222 - type: map_at_10 value: 45.985 - type: map_at_100 value: 47.781 - type: map_at_1000 value: 47.886 - type: map_at_20 value: 47.14 - type: map_at_3 value: 41.934 - type: map_at_5 value: 44.204 - type: mrr_at_1 value: 40.486409155937054 - type: mrr_at_10 value: 51.97288643640573 - type: mrr_at_100 value: 52.82594639688508 - type: mrr_at_1000 value: 52.84957608989007 - type: mrr_at_20 value: 52.56262908663282 - type: mrr_at_3 value: 49.23700524558894 - type: mrr_at_5 value: 50.939437291368606 - type: nauc_map_at_1000_diff1 value: 48.71274550798233 - type: nauc_map_at_1000_max value: 40.02571684078115 - type: nauc_map_at_1000_std value: -10.240607880266495 - type: nauc_map_at_100_diff1 value: 48.719420702404356 - type: nauc_map_at_100_max value: 40.04375889384823 - type: nauc_map_at_100_std value: -10.189770818022746 - type: nauc_map_at_10_diff1 value: 49.002632608732064 - type: nauc_map_at_10_max value: 39.66152953772287 - type: nauc_map_at_10_std value: -10.928551613658987 - type: nauc_map_at_1_diff1 value: 56.081055621434594 - type: nauc_map_at_1_max value: 37.39452905849337 - type: nauc_map_at_1_std value: -12.23297277896674 - type: nauc_map_at_20_diff1 value: 48.75827704614257 - type: nauc_map_at_20_max value: 40.07875211936587 - type: nauc_map_at_20_std value: -10.4773380783476 - type: nauc_map_at_3_diff1 value: 50.266618647494155 - type: nauc_map_at_3_max value: 38.476151589975906 - type: nauc_map_at_3_std value: -12.63236531657738 - type: nauc_map_at_5_diff1 value: 49.704868442016455 - type: nauc_map_at_5_max value: 39.36443082562045 - type: nauc_map_at_5_std value: -11.503767300692095 - type: nauc_mrr_at_1000_diff1 value: 45.364986177189174 - type: nauc_mrr_at_1000_max value: 39.46602627253264 - type: nauc_mrr_at_1000_std value: -10.02957389947249 - type: nauc_mrr_at_100_diff1 value: 45.357268243765816 - type: nauc_mrr_at_100_max value: 39.47219135072728 - type: nauc_mrr_at_100_std value: -10.026695043309909 - type: nauc_mrr_at_10_diff1 value: 45.244273933517626 - type: nauc_mrr_at_10_max value: 39.20945693162985 - type: nauc_mrr_at_10_std value: -9.90767794899639 - type: nauc_mrr_at_1_diff1 value: 49.20992505931076 - type: nauc_mrr_at_1_max value: 39.46297723636049 - type: nauc_mrr_at_1_std value: -11.981411959321381 - type: nauc_mrr_at_20_diff1 value: 45.227086207075025 - type: nauc_mrr_at_20_max value: 39.41746276244367 - type: nauc_mrr_at_20_std value: -10.052067549951015 - type: nauc_mrr_at_3_diff1 value: 45.911580785062874 - type: nauc_mrr_at_3_max value: 39.424633318900945 - type: nauc_mrr_at_3_std value: -11.410883935710286 - type: nauc_mrr_at_5_diff1 value: 45.55756954242934 - type: nauc_mrr_at_5_max value: 39.40058471175011 - type: nauc_mrr_at_5_std value: -10.214441983669223 - type: nauc_ndcg_at_1000_diff1 value: 46.057039050302976 - type: nauc_ndcg_at_1000_max value: 39.957590491002165 - type: nauc_ndcg_at_1000_std value: -8.201442473200322 - type: nauc_ndcg_at_100_diff1 value: 45.69693794270335 - type: nauc_ndcg_at_100_max value: 40.36215332476892 - type: nauc_ndcg_at_100_std value: -7.820701191661731 - type: nauc_ndcg_at_10_diff1 value: 46.01752520382891 - type: nauc_ndcg_at_10_max value: 39.40935337515678 - type: nauc_ndcg_at_10_std value: -9.196768256986553 - type: nauc_ndcg_at_1_diff1 value: 49.20992505931076 - type: nauc_ndcg_at_1_max value: 39.46297723636049 - type: nauc_ndcg_at_1_std value: -11.981411959321381 - type: nauc_ndcg_at_20_diff1 value: 45.24349036182319 - type: nauc_ndcg_at_20_max value: 40.15373766029114 - type: nauc_ndcg_at_20_std value: -9.033362151661638 - type: nauc_ndcg_at_3_diff1 value: 47.56680584050923 - type: nauc_ndcg_at_3_max value: 39.202195898427604 - type: nauc_ndcg_at_3_std value: -11.785092866176829 - type: nauc_ndcg_at_5_diff1 value: 46.96982794214046 - type: nauc_ndcg_at_5_max value: 39.5663039638241 - type: nauc_ndcg_at_5_std value: -10.04391381207476 - type: nauc_precision_at_1000_diff1 value: -23.484781013384755 - type: nauc_precision_at_1000_max value: -10.775814504019596 - type: nauc_precision_at_1000_std value: -1.3598420879011879 - type: nauc_precision_at_100_diff1 value: -18.04401254090738 - type: nauc_precision_at_100_max value: -1.6777370966059024 - type: nauc_precision_at_100_std value: 9.607002317088027 - type: nauc_precision_at_10_diff1 value: 0.1751093583673089 - type: nauc_precision_at_10_max value: 16.8907295462174 - type: nauc_precision_at_10_std value: 4.728800492032401 - type: nauc_precision_at_1_diff1 value: 49.20992505931076 - type: nauc_precision_at_1_max value: 39.46297723636049 - type: nauc_precision_at_1_std value: -11.981411959321381 - type: nauc_precision_at_20_diff1 value: -9.418679223786228 - type: nauc_precision_at_20_max value: 9.594105141089766 - type: nauc_precision_at_20_std value: 8.983281264416412 - type: nauc_precision_at_3_diff1 value: 22.109157571511133 - type: nauc_precision_at_3_max value: 30.005578686840945 - type: nauc_precision_at_3_std value: -7.6600405648564625 - type: nauc_precision_at_5_diff1 value: 13.044712488888747 - type: nauc_precision_at_5_max value: 26.320322603149442 - type: nauc_precision_at_5_std value: -0.7111330736852223 - type: nauc_recall_at_1000_diff1 value: 24.747399774138678 - type: nauc_recall_at_1000_max value: 49.933416319505206 - type: nauc_recall_at_1000_std value: 65.92307305058418 - type: nauc_recall_at_100_diff1 value: 25.510369747526664 - type: nauc_recall_at_100_max value: 45.23205448057407 - type: nauc_recall_at_100_std value: 16.57376782068154 - type: nauc_recall_at_10_diff1 value: 37.08971306710978 - type: nauc_recall_at_10_max value: 35.425313950091024 - type: nauc_recall_at_10_std value: -4.935344752233474 - type: nauc_recall_at_1_diff1 value: 56.081055621434594 - type: nauc_recall_at_1_max value: 37.39452905849337 - type: nauc_recall_at_1_std value: -12.23297277896674 - type: nauc_recall_at_20_diff1 value: 30.938967468918406 - type: nauc_recall_at_20_max value: 38.252828702839174 - type: nauc_recall_at_20_std value: -3.938967991713323 - type: nauc_recall_at_3_diff1 value: 43.990804769126264 - type: nauc_recall_at_3_max value: 35.2863590405335 - type: nauc_recall_at_3_std value: -12.095798082391006 - type: nauc_recall_at_5_diff1 value: 41.106768176487826 - type: nauc_recall_at_5_max value: 35.66847913821838 - type: nauc_recall_at_5_std value: -7.937702109801549 - type: ndcg_at_1 value: 40.486 - type: ndcg_at_10 value: 53.199 - type: ndcg_at_100 value: 58.901 - type: ndcg_at_1000 value: 60.155 - type: ndcg_at_20 value: 56.068 - type: ndcg_at_3 value: 47.781 - type: ndcg_at_5 value: 50.275000000000006 - type: precision_at_1 value: 40.486 - type: precision_at_10 value: 10.501000000000001 - type: precision_at_100 value: 1.657 - type: precision_at_1000 value: 0.208 - type: precision_at_20 value: 6.438000000000001 - type: precision_at_3 value: 23.51 - type: precision_at_5 value: 16.881 - type: recall_at_1 value: 32.222 - type: recall_at_10 value: 67.36 - type: recall_at_100 value: 90.171 - type: recall_at_1000 value: 97.556 - type: recall_at_20 value: 77.486 - type: recall_at_3 value: 51.298 - type: recall_at_5 value: 58.594 - task: type: Retrieval dataset: name: MTEB CQADupstackEnglishRetrieval (default) type: mteb/cqadupstack-english config: default split: test revision: ad9991cb51e31e31e430383c75ffb2885547b5f0 metrics: - type: main_score value: 52.510999999999996 - type: map_at_1 value: 33.87 - type: map_at_10 value: 46.26 - type: map_at_100 value: 47.587 - type: map_at_1000 value: 47.703 - type: map_at_20 value: 46.997 - type: map_at_3 value: 42.609 - type: map_at_5 value: 44.734 - type: mrr_at_1 value: 42.611464968152866 - type: mrr_at_10 value: 52.04279142654943 - type: mrr_at_100 value: 52.65530619466451 - type: mrr_at_1000 value: 52.68759711579288 - type: mrr_at_20 value: 52.43781678281101 - type: mrr_at_3 value: 49.596602972399204 - type: mrr_at_5 value: 51.08067940552024 - type: nauc_map_at_1000_diff1 value: 51.29568684244711 - type: nauc_map_at_1000_max value: 39.152103977041804 - type: nauc_map_at_1000_std value: -5.181397622387878 - type: nauc_map_at_100_diff1 value: 51.32176208688315 - type: nauc_map_at_100_max value: 39.143917385553216 - type: nauc_map_at_100_std value: -5.307065518594916 - type: nauc_map_at_10_diff1 value: 51.85811251362487 - type: nauc_map_at_10_max value: 39.15129087396741 - type: nauc_map_at_10_std value: -6.955630370961674 - type: nauc_map_at_1_diff1 value: 58.70495869436133 - type: nauc_map_at_1_max value: 32.656882457708946 - type: nauc_map_at_1_std value: -12.67248330614144 - type: nauc_map_at_20_diff1 value: 51.643097152908055 - type: nauc_map_at_20_max value: 39.22360058918707 - type: nauc_map_at_20_std value: -6.214645092428745 - type: nauc_map_at_3_diff1 value: 52.83744216264604 - type: nauc_map_at_3_max value: 38.58401409973185 - type: nauc_map_at_3_std value: -9.412909859337919 - type: nauc_map_at_5_diff1 value: 52.14367884906982 - type: nauc_map_at_5_max value: 38.54068123115184 - type: nauc_map_at_5_std value: -8.113232464503577 - type: nauc_mrr_at_1000_diff1 value: 49.32487078343284 - type: nauc_mrr_at_1000_max value: 39.92763641079578 - type: nauc_mrr_at_1000_std value: -0.1863128505082372 - type: nauc_mrr_at_100_diff1 value: 49.31353319032606 - type: nauc_mrr_at_100_max value: 39.93613554648638 - type: nauc_mrr_at_100_std value: -0.17243538819115126 - type: nauc_mrr_at_10_diff1 value: 49.32045332332676 - type: nauc_mrr_at_10_max value: 39.87692178853214 - type: nauc_mrr_at_10_std value: -0.44124064115092854 - type: nauc_mrr_at_1_diff1 value: 54.263136864624485 - type: nauc_mrr_at_1_max value: 37.608424271153126 - type: nauc_mrr_at_1_std value: -4.123286617311634 - type: nauc_mrr_at_20_diff1 value: 49.33965494151155 - type: nauc_mrr_at_20_max value: 39.990788764214905 - type: nauc_mrr_at_20_std value: -0.22811808153329785 - type: nauc_mrr_at_3_diff1 value: 49.883512068525214 - type: nauc_mrr_at_3_max value: 40.17194283092971 - type: nauc_mrr_at_3_std value: -0.9636794529443634 - type: nauc_mrr_at_5_diff1 value: 49.26161551090732 - type: nauc_mrr_at_5_max value: 39.901214925003316 - type: nauc_mrr_at_5_std value: -0.4175685997548143 - type: nauc_ndcg_at_1000_diff1 value: 48.455643184927396 - type: nauc_ndcg_at_1000_max value: 40.1870726936175 - type: nauc_ndcg_at_1000_std value: 0.3499672464058312 - type: nauc_ndcg_at_100_diff1 value: 48.33565088062914 - type: nauc_ndcg_at_100_max value: 40.380628301131686 - type: nauc_ndcg_at_100_std value: 0.18190729344385695 - type: nauc_ndcg_at_10_diff1 value: 49.097983080020896 - type: nauc_ndcg_at_10_max value: 40.24547596077635 - type: nauc_ndcg_at_10_std value: -2.5843384122543545 - type: nauc_ndcg_at_1_diff1 value: 54.263136864624485 - type: nauc_ndcg_at_1_max value: 37.608424271153126 - type: nauc_ndcg_at_1_std value: -4.123286617311634 - type: nauc_ndcg_at_20_diff1 value: 49.050366938364355 - type: nauc_ndcg_at_20_max value: 40.5591755544406 - type: nauc_ndcg_at_20_std value: -1.7443509016518082 - type: nauc_ndcg_at_3_diff1 value: 49.023964347001325 - type: nauc_ndcg_at_3_max value: 39.69832699379779 - type: nauc_ndcg_at_3_std value: -3.6078096702931903 - type: nauc_ndcg_at_5_diff1 value: 48.83333170864256 - type: nauc_ndcg_at_5_max value: 39.356451836952445 - type: nauc_ndcg_at_5_std value: -3.1730431348280503 - type: nauc_precision_at_1000_diff1 value: -21.24752441716892 - type: nauc_precision_at_1000_max value: -3.2970564624677414 - type: nauc_precision_at_1000_std value: 32.83587840469316 - type: nauc_precision_at_100_diff1 value: -16.87411625785483 - type: nauc_precision_at_100_max value: 4.3740393973447595 - type: nauc_precision_at_100_std value: 38.9914966450695 - type: nauc_precision_at_10_diff1 value: 2.6127578839670504 - type: nauc_precision_at_10_max value: 22.35700751539876 - type: nauc_precision_at_10_std value: 24.336639203743573 - type: nauc_precision_at_1_diff1 value: 54.263136864624485 - type: nauc_precision_at_1_max value: 37.608424271153126 - type: nauc_precision_at_1_std value: -4.123286617311634 - type: nauc_precision_at_20_diff1 value: -5.064146107600653 - type: nauc_precision_at_20_max value: 15.50506761431695 - type: nauc_precision_at_20_std value: 29.30528264329397 - type: nauc_precision_at_3_diff1 value: 22.399165352040196 - type: nauc_precision_at_3_max value: 34.296534980252616 - type: nauc_precision_at_3_std value: 10.236576824533735 - type: nauc_precision_at_5_diff1 value: 12.32160353715643 - type: nauc_precision_at_5_max value: 27.798336788100535 - type: nauc_precision_at_5_std value: 16.873821399205333 - type: nauc_recall_at_1000_diff1 value: 31.4259707156231 - type: nauc_recall_at_1000_max value: 44.08463819190203 - type: nauc_recall_at_1000_std value: 37.27474567851736 - type: nauc_recall_at_100_diff1 value: 33.99638840796772 - type: nauc_recall_at_100_max value: 42.40191106044494 - type: nauc_recall_at_100_std value: 20.546273566477595 - type: nauc_recall_at_10_diff1 value: 42.025657605327964 - type: nauc_recall_at_10_max value: 40.00894880694032 - type: nauc_recall_at_10_std value: -1.4821878062279914 - type: nauc_recall_at_1_diff1 value: 58.70495869436133 - type: nauc_recall_at_1_max value: 32.656882457708946 - type: nauc_recall_at_1_std value: -12.67248330614144 - type: nauc_recall_at_20_diff1 value: 40.16583898880467 - type: nauc_recall_at_20_max value: 41.213580182135864 - type: nauc_recall_at_20_std value: 3.894820891877962 - type: nauc_recall_at_3_diff1 value: 46.31225272715354 - type: nauc_recall_at_3_max value: 39.8572063880317 - type: nauc_recall_at_3_std value: -7.1351511325506145 - type: nauc_recall_at_5_diff1 value: 43.482514898780295 - type: nauc_recall_at_5_max value: 38.82684876892274 - type: nauc_recall_at_5_std value: -4.105895420539952 - type: ndcg_at_1 value: 42.611 - type: ndcg_at_10 value: 52.510999999999996 - type: ndcg_at_100 value: 56.682 - type: ndcg_at_1000 value: 58.370999999999995 - type: ndcg_at_20 value: 54.227000000000004 - type: ndcg_at_3 value: 47.673 - type: ndcg_at_5 value: 50.027 - type: precision_at_1 value: 42.611 - type: precision_at_10 value: 10.102 - type: precision_at_100 value: 1.55 - type: precision_at_1000 value: 0.2 - type: precision_at_20 value: 5.863 - type: precision_at_3 value: 23.311999999999998 - type: precision_at_5 value: 16.713 - type: recall_at_1 value: 33.87 - type: recall_at_10 value: 63.845 - type: recall_at_100 value: 81.40899999999999 - type: recall_at_1000 value: 91.594 - type: recall_at_20 value: 70.26599999999999 - type: recall_at_3 value: 49.225 - type: recall_at_5 value: 55.923 - task: type: Retrieval dataset: name: MTEB CQADupstackGamingRetrieval (default) type: mteb/cqadupstack-gaming config: default split: test revision: 4885aa143210c98657558c04aaf3dc47cfb54340 metrics: - type: main_score value: 62.053999999999995 - type: map_at_1 value: 41.581 - type: map_at_10 value: 55.761 - type: map_at_100 value: 56.769999999999996 - type: map_at_1000 value: 56.815000000000005 - type: map_at_20 value: 56.352000000000004 - type: map_at_3 value: 52.317 - type: map_at_5 value: 54.201 - type: mrr_at_1 value: 47.711598746081506 - type: mrr_at_10 value: 58.90774742498884 - type: mrr_at_100 value: 59.52215701290865 - type: mrr_at_1000 value: 59.54155448334906 - type: mrr_at_20 value: 59.26775351231135 - type: mrr_at_3 value: 56.38453500522475 - type: mrr_at_5 value: 57.792058516196555 - type: nauc_map_at_1000_diff1 value: 49.66704402571169 - type: nauc_map_at_1000_max value: 34.07784555889137 - type: nauc_map_at_1000_std value: -13.406279974115126 - type: nauc_map_at_100_diff1 value: 49.6483910540802 - type: nauc_map_at_100_max value: 34.07530844769798 - type: nauc_map_at_100_std value: -13.392381836896217 - type: nauc_map_at_10_diff1 value: 49.84590031698145 - type: nauc_map_at_10_max value: 33.99868315480276 - type: nauc_map_at_10_std value: -14.27778613279837 - type: nauc_map_at_1_diff1 value: 52.01193128355289 - type: nauc_map_at_1_max value: 27.062473514407742 - type: nauc_map_at_1_std value: -15.421160015795982 - type: nauc_map_at_20_diff1 value: 49.6364415002291 - type: nauc_map_at_20_max value: 34.01621363566466 - type: nauc_map_at_20_std value: -13.65438720684474 - type: nauc_map_at_3_diff1 value: 49.554941849859475 - type: nauc_map_at_3_max value: 32.275840154761816 - type: nauc_map_at_3_std value: -15.719396978373707 - type: nauc_map_at_5_diff1 value: 49.818686804013566 - type: nauc_map_at_5_max value: 33.69544243895378 - type: nauc_map_at_5_std value: -15.180559875074701 - type: nauc_mrr_at_1000_diff1 value: 48.73638344709328 - type: nauc_mrr_at_1000_max value: 34.645421834984106 - type: nauc_mrr_at_1000_std value: -13.196777598631263 - type: nauc_mrr_at_100_diff1 value: 48.73305412075345 - type: nauc_mrr_at_100_max value: 34.6491840682274 - type: nauc_mrr_at_100_std value: -13.17337318790356 - type: nauc_mrr_at_10_diff1 value: 48.69043075114964 - type: nauc_mrr_at_10_max value: 34.65856578634812 - type: nauc_mrr_at_10_std value: -13.389064970520973 - type: nauc_mrr_at_1_diff1 value: 51.41758372767858 - type: nauc_mrr_at_1_max value: 32.6284240446433 - type: nauc_mrr_at_1_std value: -13.709567431810976 - type: nauc_mrr_at_20_diff1 value: 48.64452165935141 - type: nauc_mrr_at_20_max value: 34.62668966253434 - type: nauc_mrr_at_20_std value: -13.188024605275306 - type: nauc_mrr_at_3_diff1 value: 48.6499568195659 - type: nauc_mrr_at_3_max value: 34.65641304748175 - type: nauc_mrr_at_3_std value: -14.687769051774529 - type: nauc_mrr_at_5_diff1 value: 48.52683029728482 - type: nauc_mrr_at_5_max value: 34.8254158226808 - type: nauc_mrr_at_5_std value: -13.90451984500762 - type: nauc_ndcg_at_1000_diff1 value: 48.870406690732 - type: nauc_ndcg_at_1000_max value: 35.2675705133695 - type: nauc_ndcg_at_1000_std value: -11.267586713307322 - type: nauc_ndcg_at_100_diff1 value: 48.61247990333261 - type: nauc_ndcg_at_100_max value: 35.41320042882678 - type: nauc_ndcg_at_100_std value: -10.566461682141593 - type: nauc_ndcg_at_10_diff1 value: 48.865637260995584 - type: nauc_ndcg_at_10_max value: 35.72517255893919 - type: nauc_ndcg_at_10_std value: -12.588636382543378 - type: nauc_ndcg_at_1_diff1 value: 51.41758372767858 - type: nauc_ndcg_at_1_max value: 32.6284240446433 - type: nauc_ndcg_at_1_std value: -13.709567431810976 - type: nauc_ndcg_at_20_diff1 value: 48.40318441649108 - type: nauc_ndcg_at_20_max value: 35.45407262577331 - type: nauc_ndcg_at_20_std value: -11.202317076762835 - type: nauc_ndcg_at_3_diff1 value: 48.32682231800704 - type: nauc_ndcg_at_3_max value: 34.07411362254488 - type: nauc_ndcg_at_3_std value: -15.175358391945245 - type: nauc_ndcg_at_5_diff1 value: 48.67053909730509 - type: nauc_ndcg_at_5_max value: 35.63879009797286 - type: nauc_ndcg_at_5_std value: -14.1986091226612 - type: nauc_precision_at_1000_diff1 value: -12.584898766729482 - type: nauc_precision_at_1000_max value: 6.536079770084931 - type: nauc_precision_at_1000_std value: 21.190453831782 - type: nauc_precision_at_100_diff1 value: -10.561462705003443 - type: nauc_precision_at_100_max value: 11.642503498822627 - type: nauc_precision_at_100_std value: 23.364240847068725 - type: nauc_precision_at_10_diff1 value: 10.428388294290725 - type: nauc_precision_at_10_max value: 26.64778522550707 - type: nauc_precision_at_10_std value: 8.7971175822477 - type: nauc_precision_at_1_diff1 value: 51.41758372767858 - type: nauc_precision_at_1_max value: 32.6284240446433 - type: nauc_precision_at_1_std value: -13.709567431810976 - type: nauc_precision_at_20_diff1 value: 1.4153146296498802 - type: nauc_precision_at_20_max value: 21.863221595402713 - type: nauc_precision_at_20_std value: 17.674002200344756 - type: nauc_precision_at_3_diff1 value: 28.16052495795576 - type: nauc_precision_at_3_max value: 33.70008398113751 - type: nauc_precision_at_3_std value: -5.5970612296749644 - type: nauc_precision_at_5_diff1 value: 21.040002038966264 - type: nauc_precision_at_5_max value: 32.38215846452138 - type: nauc_precision_at_5_std value: -0.10361675057694844 - type: nauc_recall_at_1000_diff1 value: 38.213914895059546 - type: nauc_recall_at_1000_max value: 53.09481512653444 - type: nauc_recall_at_1000_std value: 58.24000755818375 - type: nauc_recall_at_100_diff1 value: 38.923229187547136 - type: nauc_recall_at_100_max value: 43.21492770348412 - type: nauc_recall_at_100_std value: 25.990956494703887 - type: nauc_recall_at_10_diff1 value: 44.22168583553731 - type: nauc_recall_at_10_max value: 38.73910397467968 - type: nauc_recall_at_10_std value: -8.031921888246933 - type: nauc_recall_at_1_diff1 value: 52.01193128355289 - type: nauc_recall_at_1_max value: 27.062473514407742 - type: nauc_recall_at_1_std value: -15.421160015795982 - type: nauc_recall_at_20_diff1 value: 40.93219002597565 - type: nauc_recall_at_20_max value: 38.885811403988754 - type: nauc_recall_at_20_std value: 2.162197854553678 - type: nauc_recall_at_3_diff1 value: 44.78642856115218 - type: nauc_recall_at_3_max value: 33.75567422950951 - type: nauc_recall_at_3_std value: -16.234655637869064 - type: nauc_recall_at_5_diff1 value: 44.58830235836066 - type: nauc_recall_at_5_max value: 37.64654838955664 - type: nauc_recall_at_5_std value: -13.950938477657312 - type: ndcg_at_1 value: 47.711999999999996 - type: ndcg_at_10 value: 62.053999999999995 - type: ndcg_at_100 value: 65.83200000000001 - type: ndcg_at_1000 value: 66.599 - type: ndcg_at_20 value: 63.674 - type: ndcg_at_3 value: 56.318999999999996 - type: ndcg_at_5 value: 58.987 - type: precision_at_1 value: 47.711999999999996 - type: precision_at_10 value: 10.125 - type: precision_at_100 value: 1.29 - type: precision_at_1000 value: 0.13899999999999998 - type: precision_at_20 value: 5.574 - type: precision_at_3 value: 25.392 - type: precision_at_5 value: 17.329 - type: recall_at_1 value: 41.581 - type: recall_at_10 value: 77.269 - type: recall_at_100 value: 93.379 - type: recall_at_1000 value: 98.584 - type: recall_at_20 value: 83.313 - type: recall_at_3 value: 62.078 - type: recall_at_5 value: 68.529 - task: type: Retrieval dataset: name: MTEB CQADupstackGisRetrieval (default) type: mteb/cqadupstack-gis config: default split: test revision: 5003b3064772da1887988e05400cf3806fe491f2 metrics: - type: main_score value: 39.748 - type: map_at_1 value: 25.285000000000004 - type: map_at_10 value: 34.371 - type: map_at_100 value: 35.485 - type: map_at_1000 value: 35.569 - type: map_at_20 value: 34.969 - type: map_at_3 value: 31.374000000000002 - type: map_at_5 value: 33.013999999999996 - type: mrr_at_1 value: 27.231638418079097 - type: mrr_at_10 value: 36.34171823154871 - type: mrr_at_100 value: 37.28803039784218 - type: mrr_at_1000 value: 37.35200816522978 - type: mrr_at_20 value: 36.87875687036097 - type: mrr_at_3 value: 33.4463276836158 - type: mrr_at_5 value: 35.11864406779658 - type: nauc_map_at_1000_diff1 value: 40.43809736439773 - type: nauc_map_at_1000_max value: 27.387952141662304 - type: nauc_map_at_1000_std value: -5.018693002046338 - type: nauc_map_at_100_diff1 value: 40.423409205985465 - type: nauc_map_at_100_max value: 27.374411655772974 - type: nauc_map_at_100_std value: -5.017320012911967 - type: nauc_map_at_10_diff1 value: 40.79762636533108 - type: nauc_map_at_10_max value: 27.556132024797297 - type: nauc_map_at_10_std value: -5.236320099160553 - type: nauc_map_at_1_diff1 value: 46.32621190623603 - type: nauc_map_at_1_max value: 27.981895930162796 - type: nauc_map_at_1_std value: -9.286669055239507 - type: nauc_map_at_20_diff1 value: 40.482647454700675 - type: nauc_map_at_20_max value: 27.380748864511183 - type: nauc_map_at_20_std value: -5.157566207786265 - type: nauc_map_at_3_diff1 value: 41.35030270597686 - type: nauc_map_at_3_max value: 26.401420529038973 - type: nauc_map_at_3_std value: -6.825550510798991 - type: nauc_map_at_5_diff1 value: 40.52643403126109 - type: nauc_map_at_5_max value: 26.93344961565937 - type: nauc_map_at_5_std value: -5.671091539711291 - type: nauc_mrr_at_1000_diff1 value: 38.7554936147589 - type: nauc_mrr_at_1000_max value: 26.386180348217618 - type: nauc_mrr_at_1000_std value: -4.033494049459254 - type: nauc_mrr_at_100_diff1 value: 38.742235245990756 - type: nauc_mrr_at_100_max value: 26.38418505089963 - type: nauc_mrr_at_100_std value: -4.005903349074146 - type: nauc_mrr_at_10_diff1 value: 38.967468013959234 - type: nauc_mrr_at_10_max value: 26.48492137182557 - type: nauc_mrr_at_10_std value: -4.037721014953263 - type: nauc_mrr_at_1_diff1 value: 44.016377125107745 - type: nauc_mrr_at_1_max value: 27.47170497073298 - type: nauc_mrr_at_1_std value: -8.306440293433809 - type: nauc_mrr_at_20_diff1 value: 38.68609604223464 - type: nauc_mrr_at_20_max value: 26.374572361531012 - type: nauc_mrr_at_20_std value: -4.086775263524302 - type: nauc_mrr_at_3_diff1 value: 39.53070405624054 - type: nauc_mrr_at_3_max value: 25.480788199729943 - type: nauc_mrr_at_3_std value: -5.541804979531871 - type: nauc_mrr_at_5_diff1 value: 38.83669655976299 - type: nauc_mrr_at_5_max value: 26.078917244246803 - type: nauc_mrr_at_5_std value: -4.422360384815965 - type: nauc_ndcg_at_1000_diff1 value: 38.20327968355583 - type: nauc_ndcg_at_1000_max value: 27.77087169770278 - type: nauc_ndcg_at_1000_std value: -1.8590069773956301 - type: nauc_ndcg_at_100_diff1 value: 37.63890958457494 - type: nauc_ndcg_at_100_max value: 27.575979785801763 - type: nauc_ndcg_at_100_std value: -1.5647502256699413 - type: nauc_ndcg_at_10_diff1 value: 38.799402965520834 - type: nauc_ndcg_at_10_max value: 28.053616957262488 - type: nauc_ndcg_at_10_std value: -2.462939350230248 - type: nauc_ndcg_at_1_diff1 value: 44.016377125107745 - type: nauc_ndcg_at_1_max value: 27.47170497073298 - type: nauc_ndcg_at_1_std value: -8.306440293433809 - type: nauc_ndcg_at_20_diff1 value: 37.680127012971724 - type: nauc_ndcg_at_20_max value: 27.459359553937663 - type: nauc_ndcg_at_20_std value: -2.4173192913196315 - type: nauc_ndcg_at_3_diff1 value: 39.61534122530668 - type: nauc_ndcg_at_3_max value: 25.959631957021195 - type: nauc_ndcg_at_3_std value: -5.802892483091432 - type: nauc_ndcg_at_5_diff1 value: 38.30836310533911 - type: nauc_ndcg_at_5_max value: 26.685143198949117 - type: nauc_ndcg_at_5_std value: -3.5679681994101937 - type: nauc_precision_at_1000_diff1 value: -9.443320801858167 - type: nauc_precision_at_1000_max value: 7.690361600365471 - type: nauc_precision_at_1000_std value: 13.589977869393783 - type: nauc_precision_at_100_diff1 value: 2.1947422010292104 - type: nauc_precision_at_100_max value: 16.350359954039558 - type: nauc_precision_at_100_std value: 13.644626517618743 - type: nauc_precision_at_10_diff1 value: 24.477554964465178 - type: nauc_precision_at_10_max value: 27.26464347335663 - type: nauc_precision_at_10_std value: 7.193678024742925 - type: nauc_precision_at_1_diff1 value: 44.016377125107745 - type: nauc_precision_at_1_max value: 27.47170497073298 - type: nauc_precision_at_1_std value: -8.306440293433809 - type: nauc_precision_at_20_diff1 value: 17.09558791693914 - type: nauc_precision_at_20_max value: 23.205921916753045 - type: nauc_precision_at_20_std value: 8.391846895459514 - type: nauc_precision_at_3_diff1 value: 31.491657526795937 - type: nauc_precision_at_3_max value: 24.710273068738818 - type: nauc_precision_at_3_std value: -1.693177986898931 - type: nauc_precision_at_5_diff1 value: 25.161361863151804 - type: nauc_precision_at_5_max value: 24.936103838639553 - type: nauc_precision_at_5_std value: 2.511934162124435 - type: nauc_recall_at_1000_diff1 value: 25.712209121566048 - type: nauc_recall_at_1000_max value: 39.5293074298725 - type: nauc_recall_at_1000_std value: 38.05519778739929 - type: nauc_recall_at_100_diff1 value: 25.057155432360677 - type: nauc_recall_at_100_max value: 28.588891322745607 - type: nauc_recall_at_100_std value: 15.8128582042832 - type: nauc_recall_at_10_diff1 value: 33.47997633215991 - type: nauc_recall_at_10_max value: 30.028194042399264 - type: nauc_recall_at_10_std value: 4.90810499546238 - type: nauc_recall_at_1_diff1 value: 46.32621190623603 - type: nauc_recall_at_1_max value: 27.981895930162796 - type: nauc_recall_at_1_std value: -9.286669055239507 - type: nauc_recall_at_20_diff1 value: 28.21634195917718 - type: nauc_recall_at_20_max value: 27.48943367590963 - type: nauc_recall_at_20_std value: 5.458479523399421 - type: nauc_recall_at_3_diff1 value: 36.12326469567289 - type: nauc_recall_at_3_max value: 24.6932937032956 - type: nauc_recall_at_3_std value: -3.8068753598947076 - type: nauc_recall_at_5_diff1 value: 33.24253180673521 - type: nauc_recall_at_5_max value: 26.392055872830365 - type: nauc_recall_at_5_std value: 1.6197798374296963 - type: ndcg_at_1 value: 27.232 - type: ndcg_at_10 value: 39.748 - type: ndcg_at_100 value: 45.187 - type: ndcg_at_1000 value: 47.099000000000004 - type: ndcg_at_20 value: 41.811 - type: ndcg_at_3 value: 33.854 - type: ndcg_at_5 value: 36.665 - type: precision_at_1 value: 27.232 - type: precision_at_10 value: 6.271 - type: precision_at_100 value: 0.955 - type: precision_at_1000 value: 0.11399999999999999 - type: precision_at_20 value: 3.627 - type: precision_at_3 value: 14.35 - type: precision_at_5 value: 10.305 - type: recall_at_1 value: 25.285000000000004 - type: recall_at_10 value: 54.466 - type: recall_at_100 value: 79.335 - type: recall_at_1000 value: 93.503 - type: recall_at_20 value: 62.248999999999995 - type: recall_at_3 value: 38.558 - type: recall_at_5 value: 45.249 - task: type: Retrieval dataset: name: MTEB CQADupstackMathematicaRetrieval (default) type: mteb/cqadupstack-mathematica config: default split: test revision: 90fceea13679c63fe563ded68f3b6f06e50061de metrics: - type: main_score value: 34.977000000000004 - type: map_at_1 value: 18.829 - type: map_at_10 value: 28.837000000000003 - type: map_at_100 value: 30.325999999999997 - type: map_at_1000 value: 30.425 - type: map_at_20 value: 29.654999999999998 - type: map_at_3 value: 25.576 - type: map_at_5 value: 27.314 - type: mrr_at_1 value: 23.756218905472636 - type: mrr_at_10 value: 33.77798507462685 - type: mrr_at_100 value: 34.80299385576722 - type: mrr_at_1000 value: 34.85573965593251 - type: mrr_at_20 value: 34.37419603705533 - type: mrr_at_3 value: 30.825041459369828 - type: mrr_at_5 value: 32.317578772802655 - type: nauc_map_at_1000_diff1 value: 33.656933860084195 - type: nauc_map_at_1000_max value: 14.881459199563402 - type: nauc_map_at_1000_std value: -2.0348305200086783 - type: nauc_map_at_100_diff1 value: 33.66651078503352 - type: nauc_map_at_100_max value: 14.87213429316907 - type: nauc_map_at_100_std value: -2.028596921387645 - type: nauc_map_at_10_diff1 value: 33.94706898755689 - type: nauc_map_at_10_max value: 14.335396314155227 - type: nauc_map_at_10_std value: -2.6787096254787643 - type: nauc_map_at_1_diff1 value: 37.51403858840109 - type: nauc_map_at_1_max value: 11.61504414020275 - type: nauc_map_at_1_std value: -5.525240654285529 - type: nauc_map_at_20_diff1 value: 33.80608063225281 - type: nauc_map_at_20_max value: 14.72683703113937 - type: nauc_map_at_20_std value: -2.231105659027114 - type: nauc_map_at_3_diff1 value: 32.96013232027734 - type: nauc_map_at_3_max value: 14.034798307623674 - type: nauc_map_at_3_std value: -3.6600209654787457 - type: nauc_map_at_5_diff1 value: 33.736274045673056 - type: nauc_map_at_5_max value: 14.945515366610651 - type: nauc_map_at_5_std value: -3.4321884525525497 - type: nauc_mrr_at_1000_diff1 value: 30.885842250649926 - type: nauc_mrr_at_1000_max value: 16.199596875238367 - type: nauc_mrr_at_1000_std value: -2.5546416285540445 - type: nauc_mrr_at_100_diff1 value: 30.878602027701735 - type: nauc_mrr_at_100_max value: 16.187973739576613 - type: nauc_mrr_at_100_std value: -2.5665377844879447 - type: nauc_mrr_at_10_diff1 value: 30.978901404792953 - type: nauc_mrr_at_10_max value: 16.05102544616689 - type: nauc_mrr_at_10_std value: -2.7986105722922194 - type: nauc_mrr_at_1_diff1 value: 34.475194079761046 - type: nauc_mrr_at_1_max value: 14.478568759719856 - type: nauc_mrr_at_1_std value: -5.966667936903906 - type: nauc_mrr_at_20_diff1 value: 30.902359160235775 - type: nauc_mrr_at_20_max value: 16.092370727624722 - type: nauc_mrr_at_20_std value: -2.5750922611706475 - type: nauc_mrr_at_3_diff1 value: 30.187778758856542 - type: nauc_mrr_at_3_max value: 16.20340255462948 - type: nauc_mrr_at_3_std value: -3.350161462382174 - type: nauc_mrr_at_5_diff1 value: 30.809795610717135 - type: nauc_mrr_at_5_max value: 16.50725846620521 - type: nauc_mrr_at_5_std value: -3.2069455353142073 - type: nauc_ndcg_at_1000_diff1 value: 32.17770608068487 - type: nauc_ndcg_at_1000_max value: 16.397446092461408 - type: nauc_ndcg_at_1000_std value: 0.8389837771762243 - type: nauc_ndcg_at_100_diff1 value: 31.854138951308812 - type: nauc_ndcg_at_100_max value: 15.912981808544819 - type: nauc_ndcg_at_100_std value: 1.0700284773280755 - type: nauc_ndcg_at_10_diff1 value: 32.78528103960234 - type: nauc_ndcg_at_10_max value: 14.571451424136237 - type: nauc_ndcg_at_10_std value: -1.1491077559945655 - type: nauc_ndcg_at_1_diff1 value: 34.475194079761046 - type: nauc_ndcg_at_1_max value: 14.478568759719856 - type: nauc_ndcg_at_1_std value: -5.966667936903906 - type: nauc_ndcg_at_20_diff1 value: 32.506431715886144 - type: nauc_ndcg_at_20_max value: 15.27762186541485 - type: nauc_ndcg_at_20_std value: 0.04042340992370987 - type: nauc_ndcg_at_3_diff1 value: 30.459324411439447 - type: nauc_ndcg_at_3_max value: 15.203613562825236 - type: nauc_ndcg_at_3_std value: -3.1029884286745637 - type: nauc_ndcg_at_5_diff1 value: 32.163485899012194 - type: nauc_ndcg_at_5_max value: 15.991393277231944 - type: nauc_ndcg_at_5_std value: -2.703797170364151 - type: nauc_precision_at_1000_diff1 value: -2.117125700655664 - type: nauc_precision_at_1000_max value: 6.828373938777722 - type: nauc_precision_at_1000_std value: 2.045215933888246 - type: nauc_precision_at_100_diff1 value: 4.099056472939519 - type: nauc_precision_at_100_max value: 10.875767592976803 - type: nauc_precision_at_100_std value: 8.037114075345642 - type: nauc_precision_at_10_diff1 value: 21.29748789641324 - type: nauc_precision_at_10_max value: 14.672125367272775 - type: nauc_precision_at_10_std value: 3.257317203722233 - type: nauc_precision_at_1_diff1 value: 34.475194079761046 - type: nauc_precision_at_1_max value: 14.478568759719856 - type: nauc_precision_at_1_std value: -5.966667936903906 - type: nauc_precision_at_20_diff1 value: 16.54291297888785 - type: nauc_precision_at_20_max value: 15.697453848166546 - type: nauc_precision_at_20_std value: 6.351013088635058 - type: nauc_precision_at_3_diff1 value: 22.641030059570497 - type: nauc_precision_at_3_max value: 17.94566325867016 - type: nauc_precision_at_3_std value: -0.9418165192444523 - type: nauc_precision_at_5_diff1 value: 24.48631540013291 - type: nauc_precision_at_5_max value: 19.842549668284164 - type: nauc_precision_at_5_std value: 0.31588910564250605 - type: nauc_recall_at_1000_diff1 value: 26.43600927381693 - type: nauc_recall_at_1000_max value: 34.4720828624886 - type: nauc_recall_at_1000_std value: 43.19733828046197 - type: nauc_recall_at_100_diff1 value: 24.9155648703279 - type: nauc_recall_at_100_max value: 16.493541112065078 - type: nauc_recall_at_100_std value: 16.890004919920262 - type: nauc_recall_at_10_diff1 value: 30.406733074151916 - type: nauc_recall_at_10_max value: 12.099008104801277 - type: nauc_recall_at_10_std value: 2.961933769227566 - type: nauc_recall_at_1_diff1 value: 37.51403858840109 - type: nauc_recall_at_1_max value: 11.61504414020275 - type: nauc_recall_at_1_std value: -5.525240654285529 - type: nauc_recall_at_20_diff1 value: 29.02862744695312 - type: nauc_recall_at_20_max value: 13.187402837996709 - type: nauc_recall_at_20_std value: 7.074583238493351 - type: nauc_recall_at_3_diff1 value: 26.87292224049686 - type: nauc_recall_at_3_max value: 14.350499088093164 - type: nauc_recall_at_3_std value: -1.5820898683181281 - type: nauc_recall_at_5_diff1 value: 29.52812156986102 - type: nauc_recall_at_5_max value: 16.36238447178588 - type: nauc_recall_at_5_std value: -1.1333086160805699 - type: ndcg_at_1 value: 23.756 - type: ndcg_at_10 value: 34.977000000000004 - type: ndcg_at_100 value: 41.404999999999994 - type: ndcg_at_1000 value: 43.797000000000004 - type: ndcg_at_20 value: 37.552 - type: ndcg_at_3 value: 29.153000000000002 - type: ndcg_at_5 value: 31.628 - type: precision_at_1 value: 23.756 - type: precision_at_10 value: 6.704000000000001 - type: precision_at_100 value: 1.142 - type: precision_at_1000 value: 0.147 - type: precision_at_20 value: 4.098 - type: precision_at_3 value: 14.469000000000001 - type: precision_at_5 value: 10.423 - type: recall_at_1 value: 18.829 - type: recall_at_10 value: 48.898 - type: recall_at_100 value: 76.12299999999999 - type: recall_at_1000 value: 93.125 - type: recall_at_20 value: 58.013000000000005 - type: recall_at_3 value: 32.628 - type: recall_at_5 value: 39.226 - task: type: Retrieval dataset: name: MTEB CQADupstackPhysicsRetrieval (default) type: mteb/cqadupstack-physics config: default split: test revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4 metrics: - type: main_score value: 51.841 - type: map_at_1 value: 31.546000000000003 - type: map_at_10 value: 45.062000000000005 - type: map_at_100 value: 46.404 - type: map_at_1000 value: 46.5 - type: map_at_20 value: 45.827 - type: map_at_3 value: 41.285 - type: map_at_5 value: 43.195 - type: mrr_at_1 value: 38.40230991337825 - type: mrr_at_10 value: 49.8141528026032 - type: mrr_at_100 value: 50.57447408080726 - type: mrr_at_1000 value: 50.5993052428773 - type: mrr_at_20 value: 50.25050235209662 - type: mrr_at_3 value: 46.98427975617576 - type: mrr_at_5 value: 48.54347128649335 - type: nauc_map_at_1000_diff1 value: 45.408085068920215 - type: nauc_map_at_1000_max value: 34.81448683377689 - type: nauc_map_at_1000_std value: -9.031735114424992 - type: nauc_map_at_100_diff1 value: 45.39924061967858 - type: nauc_map_at_100_max value: 34.80839440059379 - type: nauc_map_at_100_std value: -9.085641846751734 - type: nauc_map_at_10_diff1 value: 45.33627511201888 - type: nauc_map_at_10_max value: 34.4388392225938 - type: nauc_map_at_10_std value: -9.70629057556224 - type: nauc_map_at_1_diff1 value: 52.682893277244716 - type: nauc_map_at_1_max value: 32.044557907599085 - type: nauc_map_at_1_std value: -12.239875326347203 - type: nauc_map_at_20_diff1 value: 45.39627205891398 - type: nauc_map_at_20_max value: 34.61700403661764 - type: nauc_map_at_20_std value: -9.427569608672833 - type: nauc_map_at_3_diff1 value: 46.487014365288324 - type: nauc_map_at_3_max value: 33.71243831775303 - type: nauc_map_at_3_std value: -11.065305302387815 - type: nauc_map_at_5_diff1 value: 45.98978894568299 - type: nauc_map_at_5_max value: 33.808112350208674 - type: nauc_map_at_5_std value: -10.88139872151709 - type: nauc_mrr_at_1000_diff1 value: 45.26198173932022 - type: nauc_mrr_at_1000_max value: 35.932478080635484 - type: nauc_mrr_at_1000_std value: -7.902048906790103 - type: nauc_mrr_at_100_diff1 value: 45.254393472843375 - type: nauc_mrr_at_100_max value: 35.9431209230091 - type: nauc_mrr_at_100_std value: -7.8981938920645005 - type: nauc_mrr_at_10_diff1 value: 45.08123828809612 - type: nauc_mrr_at_10_max value: 35.931229790288434 - type: nauc_mrr_at_10_std value: -7.936309463023789 - type: nauc_mrr_at_1_diff1 value: 49.22977022573317 - type: nauc_mrr_at_1_max value: 34.23781082286927 - type: nauc_mrr_at_1_std value: -9.287665167939494 - type: nauc_mrr_at_20_diff1 value: 45.252641516190934 - type: nauc_mrr_at_20_max value: 35.89539484155243 - type: nauc_mrr_at_20_std value: -7.972825938602268 - type: nauc_mrr_at_3_diff1 value: 45.171645262995845 - type: nauc_mrr_at_3_max value: 35.92717246816748 - type: nauc_mrr_at_3_std value: -8.902287553992233 - type: nauc_mrr_at_5_diff1 value: 45.19792559490479 - type: nauc_mrr_at_5_max value: 35.82925903242625 - type: nauc_mrr_at_5_std value: -8.849826039196444 - type: nauc_ndcg_at_1000_diff1 value: 44.10471154096236 - type: nauc_ndcg_at_1000_max value: 36.054463074424085 - type: nauc_ndcg_at_1000_std value: -6.413244683136655 - type: nauc_ndcg_at_100_diff1 value: 43.777951617371045 - type: nauc_ndcg_at_100_max value: 36.32691909636164 - type: nauc_ndcg_at_100_std value: -6.5182900903128 - type: nauc_ndcg_at_10_diff1 value: 43.40267963145658 - type: nauc_ndcg_at_10_max value: 35.2051607477023 - type: nauc_ndcg_at_10_std value: -8.007589526999686 - type: nauc_ndcg_at_1_diff1 value: 49.22977022573317 - type: nauc_ndcg_at_1_max value: 34.23781082286927 - type: nauc_ndcg_at_1_std value: -9.287665167939494 - type: nauc_ndcg_at_20_diff1 value: 43.84773553741647 - type: nauc_ndcg_at_20_max value: 35.43036436550042 - type: nauc_ndcg_at_20_std value: -7.804185207595764 - type: nauc_ndcg_at_3_diff1 value: 44.52230934112428 - type: nauc_ndcg_at_3_max value: 34.7781704927549 - type: nauc_ndcg_at_3_std value: -10.213716472980002 - type: nauc_ndcg_at_5_diff1 value: 44.42885848694502 - type: nauc_ndcg_at_5_max value: 34.5484713761122 - type: nauc_ndcg_at_5_std value: -10.379761727670532 - type: nauc_precision_at_1000_diff1 value: -17.115906857147447 - type: nauc_precision_at_1000_max value: -2.1987651081477533 - type: nauc_precision_at_1000_std value: 17.97179450110356 - type: nauc_precision_at_100_diff1 value: -10.925154095184467 - type: nauc_precision_at_100_max value: 9.022276663553845 - type: nauc_precision_at_100_std value: 19.049550737373757 - type: nauc_precision_at_10_diff1 value: 5.155657934683893 - type: nauc_precision_at_10_max value: 22.278949321494142 - type: nauc_precision_at_10_std value: 8.912133685935467 - type: nauc_precision_at_1_diff1 value: 49.22977022573317 - type: nauc_precision_at_1_max value: 34.23781082286927 - type: nauc_precision_at_1_std value: -9.287665167939494 - type: nauc_precision_at_20_diff1 value: 0.08671028194998398 - type: nauc_precision_at_20_max value: 17.69661489592792 - type: nauc_precision_at_20_std value: 11.784850126329411 - type: nauc_precision_at_3_diff1 value: 23.427889174415377 - type: nauc_precision_at_3_max value: 30.15686792703775 - type: nauc_precision_at_3_std value: -1.655045161470365 - type: nauc_precision_at_5_diff1 value: 16.81335324026242 - type: nauc_precision_at_5_max value: 26.762178572537245 - type: nauc_precision_at_5_std value: 0.9528620079402708 - type: nauc_recall_at_1000_diff1 value: 31.76437533883028 - type: nauc_recall_at_1000_max value: 60.981540164283665 - type: nauc_recall_at_1000_std value: 47.60668419126673 - type: nauc_recall_at_100_diff1 value: 30.481511839602387 - type: nauc_recall_at_100_max value: 42.8478963024075 - type: nauc_recall_at_100_std value: 8.840736718195856 - type: nauc_recall_at_10_diff1 value: 35.334955867865595 - type: nauc_recall_at_10_max value: 33.28096014996419 - type: nauc_recall_at_10_std value: -3.552696847109997 - type: nauc_recall_at_1_diff1 value: 52.682893277244716 - type: nauc_recall_at_1_max value: 32.044557907599085 - type: nauc_recall_at_1_std value: -12.239875326347203 - type: nauc_recall_at_20_diff1 value: 37.0961275784984 - type: nauc_recall_at_20_max value: 33.872582440669305 - type: nauc_recall_at_20_std value: -3.5901847360735726 - type: nauc_recall_at_3_diff1 value: 40.64045772324257 - type: nauc_recall_at_3_max value: 31.714945130932794 - type: nauc_recall_at_3_std value: -12.421740996971153 - type: nauc_recall_at_5_diff1 value: 38.912432084860036 - type: nauc_recall_at_5_max value: 31.220715412191886 - type: nauc_recall_at_5_std value: -12.330003159503198 - type: ndcg_at_1 value: 38.401999999999994 - type: ndcg_at_10 value: 51.841 - type: ndcg_at_100 value: 57.121 - type: ndcg_at_1000 value: 58.582 - type: ndcg_at_20 value: 53.947 - type: ndcg_at_3 value: 45.78 - type: ndcg_at_5 value: 48.352000000000004 - type: precision_at_1 value: 38.401999999999994 - type: precision_at_10 value: 9.692 - type: precision_at_100 value: 1.435 - type: precision_at_1000 value: 0.173 - type: precision_at_20 value: 5.611 - type: precision_at_3 value: 22.264999999999997 - type: precision_at_5 value: 15.669 - type: recall_at_1 value: 31.546000000000003 - type: recall_at_10 value: 66.961 - type: recall_at_100 value: 88.71 - type: recall_at_1000 value: 97.821 - type: recall_at_20 value: 74.033 - type: recall_at_3 value: 50.307 - type: recall_at_5 value: 56.825 - task: type: Retrieval dataset: name: MTEB CQADupstackProgrammersRetrieval (default) type: mteb/cqadupstack-programmers config: default split: test revision: 6184bc1440d2dbc7612be22b50686b8826d22b32 metrics: - type: main_score value: 45.814 - type: map_at_1 value: 26.784000000000002 - type: map_at_10 value: 38.958999999999996 - type: map_at_100 value: 40.441 - type: map_at_1000 value: 40.535 - type: map_at_20 value: 39.843 - type: map_at_3 value: 34.735 - type: map_at_5 value: 37.287 - type: mrr_at_1 value: 33.44748858447489 - type: mrr_at_10 value: 44.28281510473287 - type: mrr_at_100 value: 45.25904373827841 - type: mrr_at_1000 value: 45.29554751347883 - type: mrr_at_20 value: 44.92953754847277 - type: mrr_at_3 value: 41.305175038051736 - type: mrr_at_5 value: 43.20585996955856 - type: nauc_map_at_1000_diff1 value: 38.11736750763297 - type: nauc_map_at_1000_max value: 24.000294382878423 - type: nauc_map_at_1000_std value: -7.861373369299095 - type: nauc_map_at_100_diff1 value: 38.11654284813446 - type: nauc_map_at_100_max value: 23.98087270285323 - type: nauc_map_at_100_std value: -7.877385888095451 - type: nauc_map_at_10_diff1 value: 38.08070344774997 - type: nauc_map_at_10_max value: 23.543520971497905 - type: nauc_map_at_10_std value: -8.383371378491509 - type: nauc_map_at_1_diff1 value: 42.18319791470325 - type: nauc_map_at_1_max value: 23.34898555091723 - type: nauc_map_at_1_std value: -12.806880008265043 - type: nauc_map_at_20_diff1 value: 38.114999842598564 - type: nauc_map_at_20_max value: 23.90994956189344 - type: nauc_map_at_20_std value: -8.106492347533809 - type: nauc_map_at_3_diff1 value: 38.575286871887805 - type: nauc_map_at_3_max value: 23.058717018246153 - type: nauc_map_at_3_std value: -11.898614097081257 - type: nauc_map_at_5_diff1 value: 38.34033174186913 - type: nauc_map_at_5_max value: 23.477702791252955 - type: nauc_map_at_5_std value: -9.287267607162825 - type: nauc_mrr_at_1000_diff1 value: 37.32748841102093 - type: nauc_mrr_at_1000_max value: 26.193818618155245 - type: nauc_mrr_at_1000_std value: -5.306318224163809 - type: nauc_mrr_at_100_diff1 value: 37.33545309261833 - type: nauc_mrr_at_100_max value: 26.208771456341218 - type: nauc_mrr_at_100_std value: -5.289635468739874 - type: nauc_mrr_at_10_diff1 value: 36.99565346932208 - type: nauc_mrr_at_10_max value: 25.82424995530185 - type: nauc_mrr_at_10_std value: -5.4973436862637985 - type: nauc_mrr_at_1_diff1 value: 41.86501079427528 - type: nauc_mrr_at_1_max value: 27.346120407617946 - type: nauc_mrr_at_1_std value: -8.384870760349795 - type: nauc_mrr_at_20_diff1 value: 37.21763039260485 - type: nauc_mrr_at_20_max value: 26.140668660687904 - type: nauc_mrr_at_20_std value: -5.309579071500031 - type: nauc_mrr_at_3_diff1 value: 37.29447740662917 - type: nauc_mrr_at_3_max value: 25.72389331364019 - type: nauc_mrr_at_3_std value: -7.528056434106499 - type: nauc_mrr_at_5_diff1 value: 37.03096493380401 - type: nauc_mrr_at_5_max value: 26.20679567894994 - type: nauc_mrr_at_5_std value: -5.422404696106759 - type: nauc_ndcg_at_1000_diff1 value: 37.55262612632063 - type: nauc_ndcg_at_1000_max value: 25.03492533529183 - type: nauc_ndcg_at_1000_std value: -4.620158835806051 - type: nauc_ndcg_at_100_diff1 value: 37.727723031031324 - type: nauc_ndcg_at_100_max value: 25.29454657173904 - type: nauc_ndcg_at_100_std value: -3.947304020047461 - type: nauc_ndcg_at_10_diff1 value: 36.29215161134701 - type: nauc_ndcg_at_10_max value: 23.71291239328531 - type: nauc_ndcg_at_10_std value: -5.168801521084202 - type: nauc_ndcg_at_1_diff1 value: 41.86501079427528 - type: nauc_ndcg_at_1_max value: 27.346120407617946 - type: nauc_ndcg_at_1_std value: -8.384870760349795 - type: nauc_ndcg_at_20_diff1 value: 36.741456788442925 - type: nauc_ndcg_at_20_max value: 24.749531082944497 - type: nauc_ndcg_at_20_std value: -4.513522914617928 - type: nauc_ndcg_at_3_diff1 value: 36.337800581556415 - type: nauc_ndcg_at_3_max value: 23.79612744646427 - type: nauc_ndcg_at_3_std value: -9.814488529010495 - type: nauc_ndcg_at_5_diff1 value: 36.373783732275996 - type: nauc_ndcg_at_5_max value: 24.337705152303517 - type: nauc_ndcg_at_5_std value: -5.9433338033709076 - type: nauc_precision_at_1000_diff1 value: -11.166516654998434 - type: nauc_precision_at_1000_max value: 11.702984079447216 - type: nauc_precision_at_1000_std value: 18.158097934252027 - type: nauc_precision_at_100_diff1 value: -2.4183353698467203 - type: nauc_precision_at_100_max value: 12.354030267351504 - type: nauc_precision_at_100_std value: 17.66939702398267 - type: nauc_precision_at_10_diff1 value: 12.40698391491755 - type: nauc_precision_at_10_max value: 18.40154119372082 - type: nauc_precision_at_10_std value: 11.579162385831191 - type: nauc_precision_at_1_diff1 value: 41.86501079427528 - type: nauc_precision_at_1_max value: 27.346120407617946 - type: nauc_precision_at_1_std value: -8.384870760349795 - type: nauc_precision_at_20_diff1 value: 7.559066186043976 - type: nauc_precision_at_20_max value: 17.961972984402735 - type: nauc_precision_at_20_std value: 14.166666190732109 - type: nauc_precision_at_3_diff1 value: 24.26336438498472 - type: nauc_precision_at_3_max value: 23.115335702834876 - type: nauc_precision_at_3_std value: -2.8638426974295745 - type: nauc_precision_at_5_diff1 value: 18.086439894 - type: nauc_precision_at_5_max value: 21.902852913780674 - type: nauc_precision_at_5_std value: 7.913364070883889 - type: nauc_recall_at_1000_diff1 value: 37.997735697889524 - type: nauc_recall_at_1000_max value: 28.756920000718416 - type: nauc_recall_at_1000_std value: 45.88659450166037 - type: nauc_recall_at_100_diff1 value: 39.654529273336955 - type: nauc_recall_at_100_max value: 30.60687045709423 - type: nauc_recall_at_100_std value: 17.953087240972074 - type: nauc_recall_at_10_diff1 value: 30.14293305093271 - type: nauc_recall_at_10_max value: 19.316085797569126 - type: nauc_recall_at_10_std value: 1.775418885436544 - type: nauc_recall_at_1_diff1 value: 42.18319791470325 - type: nauc_recall_at_1_max value: 23.34898555091723 - type: nauc_recall_at_1_std value: -12.806880008265043 - type: nauc_recall_at_20_diff1 value: 31.113782770327525 - type: nauc_recall_at_20_max value: 23.48656801606143 - type: nauc_recall_at_20_std value: 6.214679015830062 - type: nauc_recall_at_3_diff1 value: 32.6018886813766 - type: nauc_recall_at_3_max value: 20.102015905459307 - type: nauc_recall_at_3_std value: -10.308174390589436 - type: nauc_recall_at_5_diff1 value: 31.44511441051162 - type: nauc_recall_at_5_max value: 20.931266508274724 - type: nauc_recall_at_5_std value: -1.6540639373360793 - type: ndcg_at_1 value: 33.446999999999996 - type: ndcg_at_10 value: 45.814 - type: ndcg_at_100 value: 51.834 - type: ndcg_at_1000 value: 53.553 - type: ndcg_at_20 value: 48.518 - type: ndcg_at_3 value: 39.588 - type: ndcg_at_5 value: 42.849 - type: precision_at_1 value: 33.446999999999996 - type: precision_at_10 value: 8.801 - type: precision_at_100 value: 1.381 - type: precision_at_1000 value: 0.17099999999999999 - type: precision_at_20 value: 5.228 - type: precision_at_3 value: 19.444 - type: precision_at_5 value: 14.521 - type: recall_at_1 value: 26.784000000000002 - type: recall_at_10 value: 60.63499999999999 - type: recall_at_100 value: 86.035 - type: recall_at_1000 value: 97.404 - type: recall_at_20 value: 70.473 - type: recall_at_3 value: 43.205 - type: recall_at_5 value: 51.800999999999995 - task: type: Retrieval dataset: name: MTEB CQADupstackRetrieval (default) type: CQADupstackRetrieval_is_a_combined_dataset config: default split: test revision: CQADupstackRetrieval_is_a_combined_dataset metrics: - type: main_score value: 43.98325 - type: ndcg_at_10 value: 43.98325 - task: type: Retrieval dataset: name: MTEB CQADupstackStatsRetrieval (default) type: mteb/cqadupstack-stats config: default split: test revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a metrics: - type: main_score value: 36.291000000000004 - type: map_at_1 value: 23.199 - type: map_at_10 value: 31.516 - type: map_at_100 value: 32.649 - type: map_at_1000 value: 32.742 - type: map_at_20 value: 32.172 - type: map_at_3 value: 29.101 - type: map_at_5 value: 30.487 - type: mrr_at_1 value: 26.226993865030675 - type: mrr_at_10 value: 33.94816924724899 - type: mrr_at_100 value: 34.87358677823681 - type: mrr_at_1000 value: 34.94261020890034 - type: mrr_at_20 value: 34.506501776185196 - type: mrr_at_3 value: 31.518404907975466 - type: mrr_at_5 value: 32.94478527607363 - type: nauc_map_at_1000_diff1 value: 51.18542911225432 - type: nauc_map_at_1000_max value: 40.43058085111423 - type: nauc_map_at_1000_std value: 5.3974448907725066 - type: nauc_map_at_100_diff1 value: 51.15865749726925 - type: nauc_map_at_100_max value: 40.39622362575108 - type: nauc_map_at_100_std value: 5.387804213546535 - type: nauc_map_at_10_diff1 value: 51.26667063819357 - type: nauc_map_at_10_max value: 40.28890459349535 - type: nauc_map_at_10_std value: 4.769853969588081 - type: nauc_map_at_1_diff1 value: 57.84622181579997 - type: nauc_map_at_1_max value: 42.99833198892393 - type: nauc_map_at_1_std value: 2.623995688031952 - type: nauc_map_at_20_diff1 value: 51.20424873230057 - type: nauc_map_at_20_max value: 40.14426501047391 - type: nauc_map_at_20_std value: 5.172256348095352 - type: nauc_map_at_3_diff1 value: 52.62709469297852 - type: nauc_map_at_3_max value: 41.15448127921986 - type: nauc_map_at_3_std value: 3.4581704117373566 - type: nauc_map_at_5_diff1 value: 52.11442057512152 - type: nauc_map_at_5_max value: 40.771412481175986 - type: nauc_map_at_5_std value: 3.9467241008870335 - type: nauc_mrr_at_1000_diff1 value: 50.62386253616975 - type: nauc_mrr_at_1000_max value: 39.326081970356334 - type: nauc_mrr_at_1000_std value: 6.783860502612024 - type: nauc_mrr_at_100_diff1 value: 50.57526957456895 - type: nauc_mrr_at_100_max value: 39.30980897829472 - type: nauc_mrr_at_100_std value: 6.80161963789857 - type: nauc_mrr_at_10_diff1 value: 50.71426300210595 - type: nauc_mrr_at_10_max value: 39.39423685918339 - type: nauc_mrr_at_10_std value: 6.4784274139524545 - type: nauc_mrr_at_1_diff1 value: 56.83151355893988 - type: nauc_mrr_at_1_max value: 41.332173947300824 - type: nauc_mrr_at_1_std value: 5.584550639495193 - type: nauc_mrr_at_20_diff1 value: 50.51574800561875 - type: nauc_mrr_at_20_max value: 39.2435306936296 - type: nauc_mrr_at_20_std value: 6.636135681381268 - type: nauc_mrr_at_3_diff1 value: 51.968318561093206 - type: nauc_mrr_at_3_max value: 39.536950393663254 - type: nauc_mrr_at_3_std value: 5.336239991399375 - type: nauc_mrr_at_5_diff1 value: 51.23639004175655 - type: nauc_mrr_at_5_max value: 39.526442012927895 - type: nauc_mrr_at_5_std value: 5.85626988225536 - type: nauc_ndcg_at_1000_diff1 value: 47.94002347290064 - type: nauc_ndcg_at_1000_max value: 39.32366180439056 - type: nauc_ndcg_at_1000_std value: 8.84579590488393 - type: nauc_ndcg_at_100_diff1 value: 47.14572826825714 - type: nauc_ndcg_at_100_max value: 39.00544081440317 - type: nauc_ndcg_at_100_std value: 9.1174179575023 - type: nauc_ndcg_at_10_diff1 value: 47.846426124821676 - type: nauc_ndcg_at_10_max value: 38.82254197821222 - type: nauc_ndcg_at_10_std value: 6.173511994822973 - type: nauc_ndcg_at_1_diff1 value: 56.83151355893988 - type: nauc_ndcg_at_1_max value: 41.332173947300824 - type: nauc_ndcg_at_1_std value: 5.584550639495193 - type: nauc_ndcg_at_20_diff1 value: 47.499476839800174 - type: nauc_ndcg_at_20_max value: 38.176949417621366 - type: nauc_ndcg_at_20_std value: 7.211539332197563 - type: nauc_ndcg_at_3_diff1 value: 50.41865913024451 - type: nauc_ndcg_at_3_max value: 40.42211341834284 - type: nauc_ndcg_at_3_std value: 4.0996783989115855 - type: nauc_ndcg_at_5_diff1 value: 49.54432423009622 - type: nauc_ndcg_at_5_max value: 39.90824982557047 - type: nauc_ndcg_at_5_std value: 4.659386746150992 - type: nauc_precision_at_1000_diff1 value: -6.669236529596843 - type: nauc_precision_at_1000_max value: 9.948992721182313 - type: nauc_precision_at_1000_std value: 18.850247446285344 - type: nauc_precision_at_100_diff1 value: 5.0040204495885945 - type: nauc_precision_at_100_max value: 20.00879367393483 - type: nauc_precision_at_100_std value: 22.966181182852935 - type: nauc_precision_at_10_diff1 value: 27.68101776997308 - type: nauc_precision_at_10_max value: 27.712070876848816 - type: nauc_precision_at_10_std value: 14.766963486302046 - type: nauc_precision_at_1_diff1 value: 56.83151355893988 - type: nauc_precision_at_1_max value: 41.332173947300824 - type: nauc_precision_at_1_std value: 5.584550639495193 - type: nauc_precision_at_20_diff1 value: 20.370480241261426 - type: nauc_precision_at_20_max value: 23.236345054048897 - type: nauc_precision_at_20_std value: 18.705167446849206 - type: nauc_precision_at_3_diff1 value: 41.064327248640566 - type: nauc_precision_at_3_max value: 35.18704515627873 - type: nauc_precision_at_3_std value: 6.416516457891254 - type: nauc_precision_at_5_diff1 value: 36.89222110213938 - type: nauc_precision_at_5_max value: 32.76854314518032 - type: nauc_precision_at_5_std value: 9.578741823255536 - type: nauc_recall_at_1000_diff1 value: 21.174293136272045 - type: nauc_recall_at_1000_max value: 28.53784351553076 - type: nauc_recall_at_1000_std value: 41.3776314807875 - type: nauc_recall_at_100_diff1 value: 26.596788926351543 - type: nauc_recall_at_100_max value: 30.86754454927091 - type: nauc_recall_at_100_std value: 26.94531386198568 - type: nauc_recall_at_10_diff1 value: 37.24888866240404 - type: nauc_recall_at_10_max value: 33.69371948433766 - type: nauc_recall_at_10_std value: 8.502079126424375 - type: nauc_recall_at_1_diff1 value: 57.84622181579997 - type: nauc_recall_at_1_max value: 42.99833198892393 - type: nauc_recall_at_1_std value: 2.623995688031952 - type: nauc_recall_at_20_diff1 value: 34.84302275962853 - type: nauc_recall_at_20_max value: 30.698743812804143 - type: nauc_recall_at_20_std value: 11.559269171945918 - type: nauc_recall_at_3_diff1 value: 45.955643862037974 - type: nauc_recall_at_3_max value: 37.8434472742434 - type: nauc_recall_at_3_std value: 2.7907114019730006 - type: nauc_recall_at_5_diff1 value: 42.86077909818521 - type: nauc_recall_at_5_max value: 36.34586700724802 - type: nauc_recall_at_5_std value: 4.685965692823914 - type: ndcg_at_1 value: 26.227 - type: ndcg_at_10 value: 36.291000000000004 - type: ndcg_at_100 value: 41.684 - type: ndcg_at_1000 value: 43.949 - type: ndcg_at_20 value: 38.405 - type: ndcg_at_3 value: 31.568 - type: ndcg_at_5 value: 33.891 - type: precision_at_1 value: 26.227 - type: precision_at_10 value: 5.89 - type: precision_at_100 value: 0.9390000000000001 - type: precision_at_1000 value: 0.12 - type: precision_at_20 value: 3.528 - type: precision_at_3 value: 13.700999999999999 - type: precision_at_5 value: 9.724 - type: recall_at_1 value: 23.199 - type: recall_at_10 value: 48.59 - type: recall_at_100 value: 73.332 - type: recall_at_1000 value: 89.825 - type: recall_at_20 value: 56.264 - type: recall_at_3 value: 35.441 - type: recall_at_5 value: 41.284 - task: type: Retrieval dataset: name: MTEB CQADupstackTexRetrieval (default) type: mteb/cqadupstack-tex config: default split: test revision: 46989137a86843e03a6195de44b09deda022eec7 metrics: - type: main_score value: 30.003999999999998 - type: map_at_1 value: 16.203 - type: map_at_10 value: 24.504 - type: map_at_100 value: 25.77 - type: map_at_1000 value: 25.898 - type: map_at_20 value: 25.158 - type: map_at_3 value: 21.538 - type: map_at_5 value: 23.085 - type: mrr_at_1 value: 19.752236751548523 - type: mrr_at_10 value: 28.208304657031487 - type: mrr_at_100 value: 29.194168639099477 - type: mrr_at_1000 value: 29.264587079393394 - type: mrr_at_20 value: 28.733378414151307 - type: mrr_at_3 value: 25.424409268180785 - type: mrr_at_5 value: 26.907547602661225 - type: nauc_map_at_1000_diff1 value: 34.77946144979848 - type: nauc_map_at_1000_max value: 26.51333935423754 - type: nauc_map_at_1000_std value: -2.3836765711100454 - type: nauc_map_at_100_diff1 value: 34.726425095836575 - type: nauc_map_at_100_max value: 26.49453750185533 - type: nauc_map_at_100_std value: -2.3946501662202304 - type: nauc_map_at_10_diff1 value: 35.07483858053342 - type: nauc_map_at_10_max value: 26.38891175643252 - type: nauc_map_at_10_std value: -3.0820847495518584 - type: nauc_map_at_1_diff1 value: 42.277954296027964 - type: nauc_map_at_1_max value: 26.885584027024063 - type: nauc_map_at_1_std value: -5.352914133949852 - type: nauc_map_at_20_diff1 value: 34.8403655164 - type: nauc_map_at_20_max value: 26.458158875988115 - type: nauc_map_at_20_std value: -2.7335504835344664 - type: nauc_map_at_3_diff1 value: 36.881394727999556 - type: nauc_map_at_3_max value: 27.160969666974815 - type: nauc_map_at_3_std value: -4.396911947394683 - type: nauc_map_at_5_diff1 value: 35.639562247169096 - type: nauc_map_at_5_max value: 26.639954780470486 - type: nauc_map_at_5_std value: -3.7965397318705496 - type: nauc_mrr_at_1000_diff1 value: 33.20383220051444 - type: nauc_mrr_at_1000_max value: 25.4316499858392 - type: nauc_mrr_at_1000_std value: -2.045842535875331 - type: nauc_mrr_at_100_diff1 value: 33.16439008042712 - type: nauc_mrr_at_100_max value: 25.416257721385787 - type: nauc_mrr_at_100_std value: -2.0317917154425387 - type: nauc_mrr_at_10_diff1 value: 33.252126790566514 - type: nauc_mrr_at_10_max value: 25.377758108751387 - type: nauc_mrr_at_10_std value: -2.4264189386999044 - type: nauc_mrr_at_1_diff1 value: 39.834719885760556 - type: nauc_mrr_at_1_max value: 25.990117931311673 - type: nauc_mrr_at_1_std value: -4.765973133436674 - type: nauc_mrr_at_20_diff1 value: 33.16383536033632 - type: nauc_mrr_at_20_max value: 25.401527212984014 - type: nauc_mrr_at_20_std value: -2.2220606670426224 - type: nauc_mrr_at_3_diff1 value: 34.74059019329093 - type: nauc_mrr_at_3_max value: 26.307028811743553 - type: nauc_mrr_at_3_std value: -3.5724671708574434 - type: nauc_mrr_at_5_diff1 value: 33.59586580993283 - type: nauc_mrr_at_5_max value: 25.627333570990807 - type: nauc_mrr_at_5_std value: -3.1680352161214715 - type: nauc_ndcg_at_1000_diff1 value: 31.63614044978374 - type: nauc_ndcg_at_1000_max value: 26.07412869941323 - type: nauc_ndcg_at_1000_std value: 1.408963414287443 - type: nauc_ndcg_at_100_diff1 value: 30.611526137380334 - type: nauc_ndcg_at_100_max value: 25.8378308813979 - type: nauc_ndcg_at_100_std value: 1.8510734940750204 - type: nauc_ndcg_at_10_diff1 value: 31.77931205418245 - type: nauc_ndcg_at_10_max value: 25.647247948016282 - type: nauc_ndcg_at_10_std value: -1.1995099974005068 - type: nauc_ndcg_at_1_diff1 value: 39.834719885760556 - type: nauc_ndcg_at_1_max value: 25.990117931311673 - type: nauc_ndcg_at_1_std value: -4.765973133436674 - type: nauc_ndcg_at_20_diff1 value: 31.271749257906933 - type: nauc_ndcg_at_20_max value: 25.77643934349027 - type: nauc_ndcg_at_20_std value: -0.13617627006329658 - type: nauc_ndcg_at_3_diff1 value: 34.78789636847941 - type: nauc_ndcg_at_3_max value: 27.06853334540071 - type: nauc_ndcg_at_3_std value: -3.814111143031556 - type: nauc_ndcg_at_5_diff1 value: 32.80385670003178 - type: nauc_ndcg_at_5_max value: 26.142597326795915 - type: nauc_ndcg_at_5_std value: -2.826446486369442 - type: nauc_precision_at_1000_diff1 value: 5.337214061164788 - type: nauc_precision_at_1000_max value: 5.6432279110132395 - type: nauc_precision_at_1000_std value: 9.71828147452406 - type: nauc_precision_at_100_diff1 value: 6.098940283868962 - type: nauc_precision_at_100_max value: 11.523600551252407 - type: nauc_precision_at_100_std value: 13.53009320116428 - type: nauc_precision_at_10_diff1 value: 17.468734946980348 - type: nauc_precision_at_10_max value: 19.178439669583845 - type: nauc_precision_at_10_std value: 4.204201525745574 - type: nauc_precision_at_1_diff1 value: 39.834719885760556 - type: nauc_precision_at_1_max value: 25.990117931311673 - type: nauc_precision_at_1_std value: -4.765973133436674 - type: nauc_precision_at_20_diff1 value: 13.589754876845628 - type: nauc_precision_at_20_max value: 17.604665523039866 - type: nauc_precision_at_20_std value: 7.374810190128375 - type: nauc_precision_at_3_diff1 value: 28.36601705288169 - type: nauc_precision_at_3_max value: 25.80624066634199 - type: nauc_precision_at_3_std value: -2.8326342170744914 - type: nauc_precision_at_5_diff1 value: 22.742427315942095 - type: nauc_precision_at_5_max value: 22.267969242946513 - type: nauc_precision_at_5_std value: -0.04906212482543379 - type: nauc_recall_at_1000_diff1 value: 16.458371781103924 - type: nauc_recall_at_1000_max value: 25.194190498183794 - type: nauc_recall_at_1000_std value: 30.522986810850743 - type: nauc_recall_at_100_diff1 value: 14.359765512865144 - type: nauc_recall_at_100_max value: 21.553553609929946 - type: nauc_recall_at_100_std value: 19.114947206238213 - type: nauc_recall_at_10_diff1 value: 22.805221872665296 - type: nauc_recall_at_10_max value: 22.62916966651195 - type: nauc_recall_at_10_std value: 2.850986270075809 - type: nauc_recall_at_1_diff1 value: 42.277954296027964 - type: nauc_recall_at_1_max value: 26.885584027024063 - type: nauc_recall_at_1_std value: -5.352914133949852 - type: nauc_recall_at_20_diff1 value: 20.64279001295643 - type: nauc_recall_at_20_max value: 22.388150824277083 - type: nauc_recall_at_20_std value: 6.511559521468454 - type: nauc_recall_at_3_diff1 value: 30.501288503697793 - type: nauc_recall_at_3_max value: 26.87612437428851 - type: nauc_recall_at_3_std value: -2.6083207155232486 - type: nauc_recall_at_5_diff1 value: 25.722785433400364 - type: nauc_recall_at_5_max value: 24.179717355049384 - type: nauc_recall_at_5_std value: -0.9851754722969593 - type: ndcg_at_1 value: 19.752 - type: ndcg_at_10 value: 30.003999999999998 - type: ndcg_at_100 value: 35.992000000000004 - type: ndcg_at_1000 value: 38.706 - type: ndcg_at_20 value: 32.104 - type: ndcg_at_3 value: 24.549000000000003 - type: ndcg_at_5 value: 26.915 - type: precision_at_1 value: 19.752 - type: precision_at_10 value: 5.826 - type: precision_at_100 value: 1.039 - type: precision_at_1000 value: 0.145 - type: precision_at_20 value: 3.543 - type: precision_at_3 value: 11.906 - type: precision_at_5 value: 8.913 - type: recall_at_1 value: 16.203 - type: recall_at_10 value: 42.96 - type: recall_at_100 value: 69.896 - type: recall_at_1000 value: 88.763 - type: recall_at_20 value: 50.690000000000005 - type: recall_at_3 value: 27.706999999999997 - type: recall_at_5 value: 33.732 - task: type: Retrieval dataset: name: MTEB CQADupstackUnixRetrieval (default) type: mteb/cqadupstack-unix config: default split: test revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53 metrics: - type: main_score value: 44.214 - type: map_at_1 value: 26.362000000000002 - type: map_at_10 value: 37.903 - type: map_at_100 value: 39.225 - type: map_at_1000 value: 39.318 - type: map_at_20 value: 38.675 - type: map_at_3 value: 34.339 - type: map_at_5 value: 36.41 - type: mrr_at_1 value: 30.597014925373134 - type: mrr_at_10 value: 41.56501717602458 - type: mrr_at_100 value: 42.44037104600305 - type: mrr_at_1000 value: 42.49164682359615 - type: mrr_at_20 value: 42.065919605875514 - type: mrr_at_3 value: 38.54166666666664 - type: mrr_at_5 value: 40.28606965174122 - type: nauc_map_at_1000_diff1 value: 47.022193015822545 - type: nauc_map_at_1000_max value: 37.510069283996 - type: nauc_map_at_1000_std value: -2.619223749210864 - type: nauc_map_at_100_diff1 value: 47.0110128474831 - type: nauc_map_at_100_max value: 37.49734217286656 - type: nauc_map_at_100_std value: -2.623168195285326 - type: nauc_map_at_10_diff1 value: 47.333691992315586 - type: nauc_map_at_10_max value: 37.42544432726044 - type: nauc_map_at_10_std value: -3.032361221814562 - type: nauc_map_at_1_diff1 value: 55.56766500340758 - type: nauc_map_at_1_max value: 37.55816037621747 - type: nauc_map_at_1_std value: -3.4756245106235073 - type: nauc_map_at_20_diff1 value: 47.043553806230264 - type: nauc_map_at_20_max value: 37.46764790200979 - type: nauc_map_at_20_std value: -2.902106622032998 - type: nauc_map_at_3_diff1 value: 48.32329624357104 - type: nauc_map_at_3_max value: 37.069989364749 - type: nauc_map_at_3_std value: -3.8963394773157063 - type: nauc_map_at_5_diff1 value: 47.45984724665631 - type: nauc_map_at_5_max value: 37.49419828702461 - type: nauc_map_at_5_std value: -2.7001983351853083 - type: nauc_mrr_at_1000_diff1 value: 44.040549449179785 - type: nauc_mrr_at_1000_max value: 37.635183657218285 - type: nauc_mrr_at_1000_std value: -2.2998001164068165 - type: nauc_mrr_at_100_diff1 value: 44.01767577712088 - type: nauc_mrr_at_100_max value: 37.61732411034475 - type: nauc_mrr_at_100_std value: -2.298113876953238 - type: nauc_mrr_at_10_diff1 value: 43.99377722723508 - type: nauc_mrr_at_10_max value: 37.63920703610793 - type: nauc_mrr_at_10_std value: -2.46107841302828 - type: nauc_mrr_at_1_diff1 value: 51.58851879203434 - type: nauc_mrr_at_1_max value: 39.18276895998245 - type: nauc_mrr_at_1_std value: -3.7547202990719293 - type: nauc_mrr_at_20_diff1 value: 43.933755385151294 - type: nauc_mrr_at_20_max value: 37.63291344921131 - type: nauc_mrr_at_20_std value: -2.4461841577607557 - type: nauc_mrr_at_3_diff1 value: 44.23763203861928 - type: nauc_mrr_at_3_max value: 37.61510167030856 - type: nauc_mrr_at_3_std value: -2.9981794837896873 - type: nauc_mrr_at_5_diff1 value: 43.70678854004207 - type: nauc_mrr_at_5_max value: 37.958066094208924 - type: nauc_mrr_at_5_std value: -2.2069511290655743 - type: nauc_ndcg_at_1000_diff1 value: 43.89102270297004 - type: nauc_ndcg_at_1000_max value: 37.373942962075965 - type: nauc_ndcg_at_1000_std value: -0.79381131699489 - type: nauc_ndcg_at_100_diff1 value: 43.480624887214276 - type: nauc_ndcg_at_100_max value: 36.779323191929564 - type: nauc_ndcg_at_100_std value: -0.5142340495653143 - type: nauc_ndcg_at_10_diff1 value: 44.1480629394505 - type: nauc_ndcg_at_10_max value: 36.79038602100573 - type: nauc_ndcg_at_10_std value: -2.5113316190423336 - type: nauc_ndcg_at_1_diff1 value: 51.58851879203434 - type: nauc_ndcg_at_1_max value: 39.18276895998245 - type: nauc_ndcg_at_1_std value: -3.7547202990719293 - type: nauc_ndcg_at_20_diff1 value: 43.44148588178158 - type: nauc_ndcg_at_20_max value: 36.78079904803215 - type: nauc_ndcg_at_20_std value: -2.2475970493788338 - type: nauc_ndcg_at_3_diff1 value: 44.786947971061814 - type: nauc_ndcg_at_3_max value: 36.95060577653726 - type: nauc_ndcg_at_3_std value: -3.7387486325957204 - type: nauc_ndcg_at_5_diff1 value: 43.96479829748338 - type: nauc_ndcg_at_5_max value: 37.29927097229379 - type: nauc_ndcg_at_5_std value: -1.8612882187654674 - type: nauc_precision_at_1000_diff1 value: -17.776876205133522 - type: nauc_precision_at_1000_max value: -2.541489259766695 - type: nauc_precision_at_1000_std value: 2.131464419490259 - type: nauc_precision_at_100_diff1 value: -8.766821764176708 - type: nauc_precision_at_100_max value: 7.997796011277149 - type: nauc_precision_at_100_std value: 7.755850154638668 - type: nauc_precision_at_10_diff1 value: 13.884358851496742 - type: nauc_precision_at_10_max value: 26.053763030345863 - type: nauc_precision_at_10_std value: -0.7621762947543562 - type: nauc_precision_at_1_diff1 value: 51.58851879203434 - type: nauc_precision_at_1_max value: 39.18276895998245 - type: nauc_precision_at_1_std value: -3.7547202990719293 - type: nauc_precision_at_20_diff1 value: 5.289872641356584 - type: nauc_precision_at_20_max value: 20.607130764764804 - type: nauc_precision_at_20_std value: -0.19263962373893834 - type: nauc_precision_at_3_diff1 value: 28.611068231485216 - type: nauc_precision_at_3_max value: 33.906669095382576 - type: nauc_precision_at_3_std value: -3.528371042478906 - type: nauc_precision_at_5_diff1 value: 21.242353796078778 - type: nauc_precision_at_5_max value: 31.511434658857922 - type: nauc_precision_at_5_std value: 1.657940004239003 - type: nauc_recall_at_1000_diff1 value: 21.80735958513851 - type: nauc_recall_at_1000_max value: 46.49625873610379 - type: nauc_recall_at_1000_std value: 43.46830209225304 - type: nauc_recall_at_100_diff1 value: 28.962521400789143 - type: nauc_recall_at_100_max value: 28.645469512854305 - type: nauc_recall_at_100_std value: 14.568180770525782 - type: nauc_recall_at_10_diff1 value: 35.9753633121628 - type: nauc_recall_at_10_max value: 32.09412739670866 - type: nauc_recall_at_10_std value: -1.4303897530560201 - type: nauc_recall_at_1_diff1 value: 55.56766500340758 - type: nauc_recall_at_1_max value: 37.55816037621747 - type: nauc_recall_at_1_std value: -3.4756245106235073 - type: nauc_recall_at_20_diff1 value: 32.57151610283604 - type: nauc_recall_at_20_max value: 31.185132882427464 - type: nauc_recall_at_20_std value: -0.5972674819999952 - type: nauc_recall_at_3_diff1 value: 39.55078433962911 - type: nauc_recall_at_3_max value: 33.34789015929553 - type: nauc_recall_at_3_std value: -4.062145558641841 - type: nauc_recall_at_5_diff1 value: 36.224639758887754 - type: nauc_recall_at_5_max value: 34.233241128748105 - type: nauc_recall_at_5_std value: 0.10907791862296992 - type: ndcg_at_1 value: 30.597 - type: ndcg_at_10 value: 44.214 - type: ndcg_at_100 value: 49.834 - type: ndcg_at_1000 value: 51.696 - type: ndcg_at_20 value: 46.541 - type: ndcg_at_3 value: 38.086 - type: ndcg_at_5 value: 41.093 - type: precision_at_1 value: 30.597 - type: precision_at_10 value: 7.845000000000001 - type: precision_at_100 value: 1.201 - type: precision_at_1000 value: 0.147 - type: precision_at_20 value: 4.618 - type: precision_at_3 value: 17.785999999999998 - type: precision_at_5 value: 12.799 - type: recall_at_1 value: 26.362000000000002 - type: recall_at_10 value: 59.484 - type: recall_at_100 value: 83.353 - type: recall_at_1000 value: 95.719 - type: recall_at_20 value: 67.74900000000001 - type: recall_at_3 value: 42.83 - type: recall_at_5 value: 50.454 - task: type: Retrieval dataset: name: MTEB CQADupstackWebmastersRetrieval (default) type: mteb/cqadupstack-webmasters config: default split: test revision: 160c094312a0e1facb97e55eeddb698c0abe3571 metrics: - type: main_score value: 41.826 - type: map_at_1 value: 24.39 - type: map_at_10 value: 35.479 - type: map_at_100 value: 37.407000000000004 - type: map_at_1000 value: 37.632 - type: map_at_20 value: 36.399 - type: map_at_3 value: 32.33 - type: map_at_5 value: 34.123 - type: mrr_at_1 value: 29.446640316205535 - type: mrr_at_10 value: 39.91953698475437 - type: mrr_at_100 value: 40.974311857221245 - type: mrr_at_1000 value: 41.02039114169333 - type: mrr_at_20 value: 40.46987332341717 - type: mrr_at_3 value: 37.154150197628475 - type: mrr_at_5 value: 38.863636363636374 - type: nauc_map_at_1000_diff1 value: 42.006980615840476 - type: nauc_map_at_1000_max value: 19.588898254504482 - type: nauc_map_at_1000_std value: -0.5711501793009518 - type: nauc_map_at_100_diff1 value: 41.89311474305503 - type: nauc_map_at_100_max value: 19.66343179157545 - type: nauc_map_at_100_std value: -0.9493523888880031 - type: nauc_map_at_10_diff1 value: 42.218327545966794 - type: nauc_map_at_10_max value: 19.093480513385746 - type: nauc_map_at_10_std value: -2.575230373968492 - type: nauc_map_at_1_diff1 value: 49.55673669630866 - type: nauc_map_at_1_max value: 21.657821076583392 - type: nauc_map_at_1_std value: -2.822086497620212 - type: nauc_map_at_20_diff1 value: 42.06716413657905 - type: nauc_map_at_20_max value: 19.443018878171156 - type: nauc_map_at_20_std value: -2.0472340866881598 - type: nauc_map_at_3_diff1 value: 43.358256588830976 - type: nauc_map_at_3_max value: 20.29951410862834 - type: nauc_map_at_3_std value: -3.122031268596122 - type: nauc_map_at_5_diff1 value: 42.79238194223064 - type: nauc_map_at_5_max value: 19.15815022850817 - type: nauc_map_at_5_std value: -3.4023876930915935 - type: nauc_mrr_at_1000_diff1 value: 38.89485952557572 - type: nauc_mrr_at_1000_max value: 18.00288526280676 - type: nauc_mrr_at_1000_std value: 0.7637893917916386 - type: nauc_mrr_at_100_diff1 value: 38.86513791743708 - type: nauc_mrr_at_100_max value: 17.984440283930496 - type: nauc_mrr_at_100_std value: 0.7693475120218602 - type: nauc_mrr_at_10_diff1 value: 38.882346535299604 - type: nauc_mrr_at_10_max value: 18.018489876209962 - type: nauc_mrr_at_10_std value: 0.6894393091820157 - type: nauc_mrr_at_1_diff1 value: 43.23691098503311 - type: nauc_mrr_at_1_max value: 19.31011799251083 - type: nauc_mrr_at_1_std value: 0.5817043396319133 - type: nauc_mrr_at_20_diff1 value: 39.01036846117744 - type: nauc_mrr_at_20_max value: 18.083878464549187 - type: nauc_mrr_at_20_std value: 0.6036676825077607 - type: nauc_mrr_at_3_diff1 value: 39.025378279654234 - type: nauc_mrr_at_3_max value: 19.13536212310877 - type: nauc_mrr_at_3_std value: 0.6868166618588564 - type: nauc_mrr_at_5_diff1 value: 38.935577962802384 - type: nauc_mrr_at_5_max value: 17.828195279090846 - type: nauc_mrr_at_5_std value: 0.023977688818205518 - type: nauc_ndcg_at_1000_diff1 value: 39.793531662914624 - type: nauc_ndcg_at_1000_max value: 18.847958004720937 - type: nauc_ndcg_at_1000_std value: 1.804038871743905 - type: nauc_ndcg_at_100_diff1 value: 38.82881796342845 - type: nauc_ndcg_at_100_max value: 18.487091746204694 - type: nauc_ndcg_at_100_std value: 2.151729614942825 - type: nauc_ndcg_at_10_diff1 value: 39.84007249501964 - type: nauc_ndcg_at_10_max value: 17.939005773004503 - type: nauc_ndcg_at_10_std value: -0.08117427983218028 - type: nauc_ndcg_at_1_diff1 value: 43.23691098503311 - type: nauc_ndcg_at_1_max value: 19.31011799251083 - type: nauc_ndcg_at_1_std value: 0.5817043396319133 - type: nauc_ndcg_at_20_diff1 value: 39.8037093598719 - type: nauc_ndcg_at_20_max value: 18.48853530550249 - type: nauc_ndcg_at_20_std value: -0.10328037808023001 - type: nauc_ndcg_at_3_diff1 value: 40.446978150459806 - type: nauc_ndcg_at_3_max value: 19.84200589719808 - type: nauc_ndcg_at_3_std value: -0.42962792649394055 - type: nauc_ndcg_at_5_diff1 value: 40.28504143456421 - type: nauc_ndcg_at_5_max value: 17.817674524132045 - type: nauc_ndcg_at_5_std value: -1.1454777994636318 - type: nauc_precision_at_1000_diff1 value: 3.6440544638350914 - type: nauc_precision_at_1000_max value: -5.322395298494433 - type: nauc_precision_at_1000_std value: 34.045681102264446 - type: nauc_precision_at_100_diff1 value: 0.39376090580783235 - type: nauc_precision_at_100_max value: 1.521619479821471 - type: nauc_precision_at_100_std value: 31.437894340720597 - type: nauc_precision_at_10_diff1 value: 11.917497057371422 - type: nauc_precision_at_10_max value: 9.428271545180175 - type: nauc_precision_at_10_std value: 12.786501469546883 - type: nauc_precision_at_1_diff1 value: 43.23691098503311 - type: nauc_precision_at_1_max value: 19.31011799251083 - type: nauc_precision_at_1_std value: 0.5817043396319133 - type: nauc_precision_at_20_diff1 value: 6.023124970004902 - type: nauc_precision_at_20_max value: 8.805642599987998 - type: nauc_precision_at_20_std value: 18.846678438758165 - type: nauc_precision_at_3_diff1 value: 24.65558274053028 - type: nauc_precision_at_3_max value: 16.61089322421877 - type: nauc_precision_at_3_std value: 5.403032728665573 - type: nauc_precision_at_5_diff1 value: 18.18179370215765 - type: nauc_precision_at_5_max value: 10.59580743049896 - type: nauc_precision_at_5_std value: 6.8603748742003186 - type: nauc_recall_at_1000_diff1 value: 22.552468938352412 - type: nauc_recall_at_1000_max value: 0.5070109561586749 - type: nauc_recall_at_1000_std value: 36.4227590800954 - type: nauc_recall_at_100_diff1 value: 20.081881489107552 - type: nauc_recall_at_100_max value: 9.134216705040693 - type: nauc_recall_at_100_std value: 18.911359987932503 - type: nauc_recall_at_10_diff1 value: 33.5396704389877 - type: nauc_recall_at_10_max value: 13.961578985947318 - type: nauc_recall_at_10_std value: 0.3177885671992674 - type: nauc_recall_at_1_diff1 value: 49.55673669630866 - type: nauc_recall_at_1_max value: 21.657821076583392 - type: nauc_recall_at_1_std value: -2.822086497620212 - type: nauc_recall_at_20_diff1 value: 32.289790950429804 - type: nauc_recall_at_20_max value: 15.71770065051139 - type: nauc_recall_at_20_std value: 0.46912194119269873 - type: nauc_recall_at_3_diff1 value: 38.16094047537937 - type: nauc_recall_at_3_max value: 18.615919857237486 - type: nauc_recall_at_3_std value: -3.1020886129527763 - type: nauc_recall_at_5_diff1 value: 36.476258679863136 - type: nauc_recall_at_5_max value: 14.03780564222494 - type: nauc_recall_at_5_std value: -4.610491792353064 - type: ndcg_at_1 value: 29.447000000000003 - type: ndcg_at_10 value: 41.826 - type: ndcg_at_100 value: 48.559999999999995 - type: ndcg_at_1000 value: 50.678 - type: ndcg_at_20 value: 44.204 - type: ndcg_at_3 value: 36.687 - type: ndcg_at_5 value: 39.345 - type: precision_at_1 value: 29.447000000000003 - type: precision_at_10 value: 8.202 - type: precision_at_100 value: 1.735 - type: precision_at_1000 value: 0.253 - type: precision_at_20 value: 5.257 - type: precision_at_3 value: 17.523 - type: precision_at_5 value: 12.925 - type: recall_at_1 value: 24.39 - type: recall_at_10 value: 54.173 - type: recall_at_100 value: 83.648 - type: recall_at_1000 value: 96.819 - type: recall_at_20 value: 63.09 - type: recall_at_3 value: 40.146 - type: recall_at_5 value: 46.705999999999996 - task: type: Retrieval dataset: name: MTEB CQADupstackWordpressRetrieval (default) type: mteb/cqadupstack-wordpress config: default split: test revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4 metrics: - type: main_score value: 35.32 - type: map_at_1 value: 21.679000000000002 - type: map_at_10 value: 29.976000000000003 - type: map_at_100 value: 31.127 - type: map_at_1000 value: 31.207 - type: map_at_20 value: 30.628 - type: map_at_3 value: 26.884999999999998 - type: map_at_5 value: 28.698 - type: mrr_at_1 value: 23.10536044362292 - type: mrr_at_10 value: 31.68955197605845 - type: mrr_at_100 value: 32.696453094313846 - type: mrr_at_1000 value: 32.74644026497888 - type: mrr_at_20 value: 32.270216614045836 - type: mrr_at_3 value: 28.89710412815773 - type: mrr_at_5 value: 30.58841651263093 - type: nauc_map_at_1000_diff1 value: 36.527663233423816 - type: nauc_map_at_1000_max value: 25.70281352614131 - type: nauc_map_at_1000_std value: -10.47544573010382 - type: nauc_map_at_100_diff1 value: 36.49901520674112 - type: nauc_map_at_100_max value: 25.720411208615296 - type: nauc_map_at_100_std value: -10.4498182981879 - type: nauc_map_at_10_diff1 value: 36.56160743360435 - type: nauc_map_at_10_max value: 25.481122372570773 - type: nauc_map_at_10_std value: -11.386413278364452 - type: nauc_map_at_1_diff1 value: 41.96740973142382 - type: nauc_map_at_1_max value: 26.447948671103067 - type: nauc_map_at_1_std value: -13.055537837295866 - type: nauc_map_at_20_diff1 value: 36.53096807557235 - type: nauc_map_at_20_max value: 25.65603650587719 - type: nauc_map_at_20_std value: -10.662281391136927 - type: nauc_map_at_3_diff1 value: 37.49452415422011 - type: nauc_map_at_3_max value: 25.73144884866817 - type: nauc_map_at_3_std value: -13.338023323977556 - type: nauc_map_at_5_diff1 value: 36.606269278969975 - type: nauc_map_at_5_max value: 25.41542359776377 - type: nauc_map_at_5_std value: -11.855800226714972 - type: nauc_mrr_at_1000_diff1 value: 37.23554980964842 - type: nauc_mrr_at_1000_max value: 26.21821820816298 - type: nauc_mrr_at_1000_std value: -8.828254845751971 - type: nauc_mrr_at_100_diff1 value: 37.212657398099694 - type: nauc_mrr_at_100_max value: 26.22027175755234 - type: nauc_mrr_at_100_std value: -8.796357557047992 - type: nauc_mrr_at_10_diff1 value: 37.2057222353167 - type: nauc_mrr_at_10_max value: 26.066257028633437 - type: nauc_mrr_at_10_std value: -9.40582019511086 - type: nauc_mrr_at_1_diff1 value: 42.88369139125157 - type: nauc_mrr_at_1_max value: 28.11145662254025 - type: nauc_mrr_at_1_std value: -11.874419938670789 - type: nauc_mrr_at_20_diff1 value: 37.15164357320353 - type: nauc_mrr_at_20_max value: 26.179170369717674 - type: nauc_mrr_at_20_std value: -8.850966109061742 - type: nauc_mrr_at_3_diff1 value: 37.65133484078781 - type: nauc_mrr_at_3_max value: 27.013109167125126 - type: nauc_mrr_at_3_std value: -10.62551433790744 - type: nauc_mrr_at_5_diff1 value: 37.01454233799136 - type: nauc_mrr_at_5_max value: 26.487303256496585 - type: nauc_mrr_at_5_std value: -9.414516112557687 - type: nauc_ndcg_at_1000_diff1 value: 34.6398016933486 - type: nauc_ndcg_at_1000_max value: 25.383536282414426 - type: nauc_ndcg_at_1000_std value: -6.736596281730975 - type: nauc_ndcg_at_100_diff1 value: 33.96995602141814 - type: nauc_ndcg_at_100_max value: 25.233190880846283 - type: nauc_ndcg_at_100_std value: -6.232028137803393 - type: nauc_ndcg_at_10_diff1 value: 34.36578627190237 - type: nauc_ndcg_at_10_max value: 24.497955345913873 - type: nauc_ndcg_at_10_std value: -9.586241421890247 - type: nauc_ndcg_at_1_diff1 value: 42.88369139125157 - type: nauc_ndcg_at_1_max value: 28.11145662254025 - type: nauc_ndcg_at_1_std value: -11.874419938670789 - type: nauc_ndcg_at_20_diff1 value: 34.029206058207315 - type: nauc_ndcg_at_20_max value: 24.66376498000024 - type: nauc_ndcg_at_20_std value: -7.237996394617084 - type: nauc_ndcg_at_3_diff1 value: 35.42536049883026 - type: nauc_ndcg_at_3_max value: 25.919093993978805 - type: nauc_ndcg_at_3_std value: -12.333032157833742 - type: nauc_ndcg_at_5_diff1 value: 34.21670718788367 - type: nauc_ndcg_at_5_max value: 24.883264080042085 - type: nauc_ndcg_at_5_std value: -10.228412772316155 - type: nauc_precision_at_1000_diff1 value: -5.762103084787246 - type: nauc_precision_at_1000_max value: -7.5833194267759865 - type: nauc_precision_at_1000_std value: 10.675418300666859 - type: nauc_precision_at_100_diff1 value: 2.548944950287374 - type: nauc_precision_at_100_max value: 16.50696190698422 - type: nauc_precision_at_100_std value: 20.430914163842253 - type: nauc_precision_at_10_diff1 value: 18.385077734207343 - type: nauc_precision_at_10_max value: 22.121729211787244 - type: nauc_precision_at_10_std value: 2.4230944307235625 - type: nauc_precision_at_1_diff1 value: 42.88369139125157 - type: nauc_precision_at_1_max value: 28.11145662254025 - type: nauc_precision_at_1_std value: -11.874419938670789 - type: nauc_precision_at_20_diff1 value: 14.947847678278489 - type: nauc_precision_at_20_max value: 21.704383740839535 - type: nauc_precision_at_20_std value: 10.626943172005621 - type: nauc_precision_at_3_diff1 value: 29.387921830287894 - type: nauc_precision_at_3_max value: 26.314386080831557 - type: nauc_precision_at_3_std value: -8.577508887319222 - type: nauc_precision_at_5_diff1 value: 23.965022746288973 - type: nauc_precision_at_5_max value: 24.929773207772136 - type: nauc_precision_at_5_std value: -2.4211370627395503 - type: nauc_recall_at_1000_diff1 value: 14.67658540132588 - type: nauc_recall_at_1000_max value: 25.448574144193696 - type: nauc_recall_at_1000_std value: 36.56427335045912 - type: nauc_recall_at_100_diff1 value: 20.467829912574835 - type: nauc_recall_at_100_max value: 20.79654725970721 - type: nauc_recall_at_100_std value: 13.227837192809094 - type: nauc_recall_at_10_diff1 value: 27.310476416825875 - type: nauc_recall_at_10_max value: 20.224074911583465 - type: nauc_recall_at_10_std value: -6.358258513190565 - type: nauc_recall_at_1_diff1 value: 41.96740973142382 - type: nauc_recall_at_1_max value: 26.447948671103067 - type: nauc_recall_at_1_std value: -13.055537837295866 - type: nauc_recall_at_20_diff1 value: 24.956767447964573 - type: nauc_recall_at_20_max value: 19.201058474561965 - type: nauc_recall_at_20_std value: 2.8393009259024424 - type: nauc_recall_at_3_diff1 value: 30.029057692380977 - type: nauc_recall_at_3_max value: 24.62072994737222 - type: nauc_recall_at_3_std value: -12.694054132443867 - type: nauc_recall_at_5_diff1 value: 26.697765884037405 - type: nauc_recall_at_5_max value: 21.820140093186758 - type: nauc_recall_at_5_std value: -7.5604509717503845 - type: ndcg_at_1 value: 23.105 - type: ndcg_at_10 value: 35.32 - type: ndcg_at_100 value: 41.136 - type: ndcg_at_1000 value: 43.228 - type: ndcg_at_20 value: 37.57 - type: ndcg_at_3 value: 29.396 - type: ndcg_at_5 value: 32.494 - type: precision_at_1 value: 23.105 - type: precision_at_10 value: 5.878 - type: precision_at_100 value: 0.9520000000000001 - type: precision_at_1000 value: 0.125 - type: precision_at_20 value: 3.466 - type: precision_at_3 value: 12.692999999999998 - type: precision_at_5 value: 9.39 - type: recall_at_1 value: 21.679000000000002 - type: recall_at_10 value: 49.925999999999995 - type: recall_at_100 value: 77.274 - type: recall_at_1000 value: 92.907 - type: recall_at_20 value: 58.650000000000006 - type: recall_at_3 value: 34.143 - type: recall_at_5 value: 41.802 - task: type: Retrieval dataset: name: MTEB ClimateFEVER (default) type: mteb/climate-fever config: default split: test revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380 metrics: - type: main_score value: 36.964000000000006 - type: map_at_1 value: 16.888 - type: map_at_10 value: 27.276 - type: map_at_100 value: 29.357 - type: map_at_1000 value: 29.511 - type: map_at_20 value: 28.492 - type: map_at_3 value: 23.075000000000003 - type: map_at_5 value: 25.312 - type: mrr_at_1 value: 38.957654723127035 - type: mrr_at_10 value: 50.258001137480015 - type: mrr_at_100 value: 50.985884572131546 - type: mrr_at_1000 value: 51.00775273659549 - type: mrr_at_20 value: 50.75537860384226 - type: mrr_at_3 value: 47.285559174810025 - type: mrr_at_5 value: 49.103148751357296 - type: nauc_map_at_1000_diff1 value: 25.747911729046617 - type: nauc_map_at_1000_max value: 31.255398545903596 - type: nauc_map_at_1000_std value: 24.872486591713496 - type: nauc_map_at_100_diff1 value: 25.740790002104784 - type: nauc_map_at_100_max value: 31.266176115777732 - type: nauc_map_at_100_std value: 24.84459109976713 - type: nauc_map_at_10_diff1 value: 26.02628794450362 - type: nauc_map_at_10_max value: 31.374003980672775 - type: nauc_map_at_10_std value: 23.587823327864708 - type: nauc_map_at_1_diff1 value: 34.73062154387364 - type: nauc_map_at_1_max value: 30.574707039457493 - type: nauc_map_at_1_std value: 19.436488371927435 - type: nauc_map_at_20_diff1 value: 25.895513631254737 - type: nauc_map_at_20_max value: 31.283103126369085 - type: nauc_map_at_20_std value: 24.530799671005095 - type: nauc_map_at_3_diff1 value: 27.24864828341282 - type: nauc_map_at_3_max value: 30.612466018878496 - type: nauc_map_at_3_std value: 21.28508055272999 - type: nauc_map_at_5_diff1 value: 26.896012525725325 - type: nauc_map_at_5_max value: 31.048230953772975 - type: nauc_map_at_5_std value: 22.36923819012987 - type: nauc_mrr_at_1000_diff1 value: 27.03321771604688 - type: nauc_mrr_at_1000_max value: 29.318016872314356 - type: nauc_mrr_at_1000_std value: 27.255662227981702 - type: nauc_mrr_at_100_diff1 value: 27.01645813593333 - type: nauc_mrr_at_100_max value: 29.30840557667501 - type: nauc_mrr_at_100_std value: 27.260764047829472 - type: nauc_mrr_at_10_diff1 value: 27.009721260288906 - type: nauc_mrr_at_10_max value: 29.467580933154487 - type: nauc_mrr_at_10_std value: 27.03289830021139 - type: nauc_mrr_at_1_diff1 value: 31.155140837576678 - type: nauc_mrr_at_1_max value: 28.451108545079713 - type: nauc_mrr_at_1_std value: 25.763588681961014 - type: nauc_mrr_at_20_diff1 value: 26.99961056015129 - type: nauc_mrr_at_20_max value: 29.361859812964674 - type: nauc_mrr_at_20_std value: 27.258149714708598 - type: nauc_mrr_at_3_diff1 value: 26.418761490514214 - type: nauc_mrr_at_3_max value: 29.04645399007843 - type: nauc_mrr_at_3_std value: 26.57078938243618 - type: nauc_mrr_at_5_diff1 value: 26.828957822719413 - type: nauc_mrr_at_5_max value: 29.183119511596573 - type: nauc_mrr_at_5_std value: 26.632517222850595 - type: nauc_ndcg_at_1000_diff1 value: 23.763944477348627 - type: nauc_ndcg_at_1000_max value: 30.4849287035792 - type: nauc_ndcg_at_1000_std value: 28.607171984717837 - type: nauc_ndcg_at_100_diff1 value: 23.771597354347985 - type: nauc_ndcg_at_100_max value: 30.840324918263008 - type: nauc_ndcg_at_100_std value: 28.353814075332277 - type: nauc_ndcg_at_10_diff1 value: 24.635557021112522 - type: nauc_ndcg_at_10_max value: 31.45726246077928 - type: nauc_ndcg_at_10_std value: 25.66603542922466 - type: nauc_ndcg_at_1_diff1 value: 31.155140837576678 - type: nauc_ndcg_at_1_max value: 28.451108545079713 - type: nauc_ndcg_at_1_std value: 25.763588681961014 - type: nauc_ndcg_at_20_diff1 value: 24.374570394904733 - type: nauc_ndcg_at_20_max value: 31.21182863225155 - type: nauc_ndcg_at_20_std value: 27.516107222806703 - type: nauc_ndcg_at_3_diff1 value: 25.204639941707107 - type: nauc_ndcg_at_3_max value: 29.821897512710354 - type: nauc_ndcg_at_3_std value: 22.955368886721388 - type: nauc_ndcg_at_5_diff1 value: 25.628786786945394 - type: nauc_ndcg_at_5_max value: 30.75374910801621 - type: nauc_ndcg_at_5_std value: 23.77602081355407 - type: nauc_precision_at_1000_diff1 value: -8.296864442882004 - type: nauc_precision_at_1000_max value: -1.0863639124110083 - type: nauc_precision_at_1000_std value: 13.942768215751009 - type: nauc_precision_at_100_diff1 value: -1.3641094219864045 - type: nauc_precision_at_100_max value: 9.389565817464774 - type: nauc_precision_at_100_std value: 20.969714828784273 - type: nauc_precision_at_10_diff1 value: 7.699985574715798 - type: nauc_precision_at_10_max value: 21.22938970938099 - type: nauc_precision_at_10_std value: 24.244673965667534 - type: nauc_precision_at_1_diff1 value: 31.155140837576678 - type: nauc_precision_at_1_max value: 28.451108545079713 - type: nauc_precision_at_1_std value: 25.763588681961014 - type: nauc_precision_at_20_diff1 value: 4.833145200843495 - type: nauc_precision_at_20_max value: 16.887995253179778 - type: nauc_precision_at_20_std value: 25.795764951079676 - type: nauc_precision_at_3_diff1 value: 14.206196520456535 - type: nauc_precision_at_3_max value: 25.787350079153164 - type: nauc_precision_at_3_std value: 23.690498209580376 - type: nauc_precision_at_5_diff1 value: 12.32708818053103 - type: nauc_precision_at_5_max value: 24.032115473559603 - type: nauc_precision_at_5_std value: 23.484892204331782 - type: nauc_recall_at_1000_diff1 value: 8.22006170896915 - type: nauc_recall_at_1000_max value: 18.81264480377967 - type: nauc_recall_at_1000_std value: 32.283741333393124 - type: nauc_recall_at_100_diff1 value: 12.18567832527598 - type: nauc_recall_at_100_max value: 23.099403731211336 - type: nauc_recall_at_100_std value: 28.36911088287291 - type: nauc_recall_at_10_diff1 value: 17.700146749912946 - type: nauc_recall_at_10_max value: 28.284625319657376 - type: nauc_recall_at_10_std value: 23.48564062964447 - type: nauc_recall_at_1_diff1 value: 34.73062154387364 - type: nauc_recall_at_1_max value: 30.574707039457493 - type: nauc_recall_at_1_std value: 19.436488371927435 - type: nauc_recall_at_20_diff1 value: 16.156165277785433 - type: nauc_recall_at_20_max value: 26.281037479122688 - type: nauc_recall_at_20_std value: 26.74972643606532 - type: nauc_recall_at_3_diff1 value: 21.750067698676038 - type: nauc_recall_at_3_max value: 28.754878789472237 - type: nauc_recall_at_3_std value: 20.01663586142874 - type: nauc_recall_at_5_diff1 value: 20.892218178864958 - type: nauc_recall_at_5_max value: 28.124407488104154 - type: nauc_recall_at_5_std value: 20.61434989716216 - type: ndcg_at_1 value: 38.958 - type: ndcg_at_10 value: 36.964000000000006 - type: ndcg_at_100 value: 44.115 - type: ndcg_at_1000 value: 46.796 - type: ndcg_at_20 value: 40.062 - type: ndcg_at_3 value: 31.316 - type: ndcg_at_5 value: 33.211 - type: precision_at_1 value: 38.958 - type: precision_at_10 value: 11.153 - type: precision_at_100 value: 1.8769999999999998 - type: precision_at_1000 value: 0.23900000000000002 - type: precision_at_20 value: 6.898999999999999 - type: precision_at_3 value: 22.91 - type: precision_at_5 value: 17.316000000000003 - type: recall_at_1 value: 16.888 - type: recall_at_10 value: 42.161 - type: recall_at_100 value: 66.102 - type: recall_at_1000 value: 81.026 - type: recall_at_20 value: 50.86000000000001 - type: recall_at_3 value: 27.598 - type: recall_at_5 value: 33.814 - task: type: Retrieval dataset: name: MTEB DBPedia (default) type: mteb/dbpedia config: default split: test revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659 metrics: - type: main_score value: 47.156 - type: map_at_1 value: 10.148 - type: map_at_10 value: 22.569 - type: map_at_100 value: 31.448999999999998 - type: map_at_1000 value: 33.44 - type: map_at_20 value: 26.069 - type: map_at_3 value: 16.014999999999997 - type: map_at_5 value: 18.901 - type: mrr_at_1 value: 76.5 - type: mrr_at_10 value: 82.2888888888889 - type: mrr_at_100 value: 82.49538251419122 - type: mrr_at_1000 value: 82.5021397953138 - type: mrr_at_20 value: 82.4252913752914 - type: mrr_at_3 value: 81.16666666666667 - type: mrr_at_5 value: 81.74166666666667 - type: nauc_map_at_1000_diff1 value: 23.301407358655897 - type: nauc_map_at_1000_max value: 31.424515733703277 - type: nauc_map_at_1000_std value: 12.046348501470613 - type: nauc_map_at_100_diff1 value: 24.334834238228208 - type: nauc_map_at_100_max value: 32.08530087581391 - type: nauc_map_at_100_std value: 9.739338172156248 - type: nauc_map_at_10_diff1 value: 24.73875979445744 - type: nauc_map_at_10_max value: 33.276940524471236 - type: nauc_map_at_10_std value: -9.164629695967152 - type: nauc_map_at_1_diff1 value: 32.00102576762782 - type: nauc_map_at_1_max value: 33.51205113985759 - type: nauc_map_at_1_std value: -23.44857278246293 - type: nauc_map_at_20_diff1 value: 24.778913289961395 - type: nauc_map_at_20_max value: 33.448516812943105 - type: nauc_map_at_20_std value: -2.6683319171528352 - type: nauc_map_at_3_diff1 value: 25.23076431439413 - type: nauc_map_at_3_max value: 31.154454837782296 - type: nauc_map_at_3_std value: -19.385936810006037 - type: nauc_map_at_5_diff1 value: 24.259635425617436 - type: nauc_map_at_5_max value: 31.729395753450955 - type: nauc_map_at_5_std value: -16.127594389625504 - type: nauc_mrr_at_1000_diff1 value: 55.03942750204178 - type: nauc_mrr_at_1000_max value: 50.96490431671482 - type: nauc_mrr_at_1000_std value: 25.836343538312683 - type: nauc_mrr_at_100_diff1 value: 55.04552440551358 - type: nauc_mrr_at_100_max value: 50.963427582835784 - type: nauc_mrr_at_100_std value: 25.831153320454614 - type: nauc_mrr_at_10_diff1 value: 55.229048800522484 - type: nauc_mrr_at_10_max value: 51.3390557615654 - type: nauc_mrr_at_10_std value: 25.922071812454657 - type: nauc_mrr_at_1_diff1 value: 55.37295537295538 - type: nauc_mrr_at_1_max value: 50.218350218350224 - type: nauc_mrr_at_1_std value: 21.025971025971035 - type: nauc_mrr_at_20_diff1 value: 55.00593357106184 - type: nauc_mrr_at_20_max value: 51.05472083367735 - type: nauc_mrr_at_20_std value: 25.909336886328717 - type: nauc_mrr_at_3_diff1 value: 54.446322775722045 - type: nauc_mrr_at_3_max value: 50.46360292721918 - type: nauc_mrr_at_3_std value: 26.329787827966765 - type: nauc_mrr_at_5_diff1 value: 55.12344889931978 - type: nauc_mrr_at_5_max value: 50.56903196971039 - type: nauc_mrr_at_5_std value: 26.55026150180298 - type: nauc_ndcg_at_1000_diff1 value: 30.228357984090167 - type: nauc_ndcg_at_1000_max value: 36.2083465618033 - type: nauc_ndcg_at_1000_std value: 27.053206106506984 - type: nauc_ndcg_at_100_diff1 value: 31.703353354291863 - type: nauc_ndcg_at_100_max value: 36.373859031948 - type: nauc_ndcg_at_100_std value: 20.068846970727698 - type: nauc_ndcg_at_10_diff1 value: 32.256110136975494 - type: nauc_ndcg_at_10_max value: 38.99767005264608 - type: nauc_ndcg_at_10_std value: 17.039892454085155 - type: nauc_ndcg_at_1_diff1 value: 47.26276005028265 - type: nauc_ndcg_at_1_max value: 41.38306374405809 - type: nauc_ndcg_at_1_std value: 14.684984307489964 - type: nauc_ndcg_at_20_diff1 value: 31.76825501599332 - type: nauc_ndcg_at_20_max value: 38.20899873887362 - type: nauc_ndcg_at_20_std value: 13.097118845724665 - type: nauc_ndcg_at_3_diff1 value: 31.321786216514703 - type: nauc_ndcg_at_3_max value: 35.981146886743026 - type: nauc_ndcg_at_3_std value: 18.307636914108354 - type: nauc_ndcg_at_5_diff1 value: 29.66664100781043 - type: nauc_ndcg_at_5_max value: 37.59729053135147 - type: nauc_ndcg_at_5_std value: 17.373905471488126 - type: nauc_precision_at_1000_diff1 value: -18.87091581184578 - type: nauc_precision_at_1000_max value: -18.696227759596617 - type: nauc_precision_at_1000_std value: 4.226333183482085 - type: nauc_precision_at_100_diff1 value: -5.287216009706794 - type: nauc_precision_at_100_max value: -4.887951805802842 - type: nauc_precision_at_100_std value: 35.6235756181467 - type: nauc_precision_at_10_diff1 value: 5.858724870109219 - type: nauc_precision_at_10_max value: 12.913194030354791 - type: nauc_precision_at_10_std value: 38.69077081708489 - type: nauc_precision_at_1_diff1 value: 55.37295537295538 - type: nauc_precision_at_1_max value: 50.218350218350224 - type: nauc_precision_at_1_std value: 21.025971025971035 - type: nauc_precision_at_20_diff1 value: 2.1852822564080414 - type: nauc_precision_at_20_max value: 7.877715960706996 - type: nauc_precision_at_20_std value: 39.87418052705391 - type: nauc_precision_at_3_diff1 value: 15.964890926148051 - type: nauc_precision_at_3_max value: 20.914035316890114 - type: nauc_precision_at_3_std value: 26.831455070123145 - type: nauc_precision_at_5_diff1 value: 8.337345796875258 - type: nauc_precision_at_5_max value: 17.75805675007055 - type: nauc_precision_at_5_std value: 32.10856342736335 - type: nauc_recall_at_1000_diff1 value: 17.846516157012843 - type: nauc_recall_at_1000_max value: 17.900524965136565 - type: nauc_recall_at_1000_std value: 33.98517407615005 - type: nauc_recall_at_100_diff1 value: 21.018315900271688 - type: nauc_recall_at_100_max value: 22.94790843604604 - type: nauc_recall_at_100_std value: 18.03552806307113 - type: nauc_recall_at_10_diff1 value: 22.18291118606378 - type: nauc_recall_at_10_max value: 30.252113761422056 - type: nauc_recall_at_10_std value: -10.179895110312067 - type: nauc_recall_at_1_diff1 value: 32.00102576762782 - type: nauc_recall_at_1_max value: 33.51205113985759 - type: nauc_recall_at_1_std value: -23.44857278246293 - type: nauc_recall_at_20_diff1 value: 21.6294261882514 - type: nauc_recall_at_20_max value: 29.28972334436445 - type: nauc_recall_at_20_std value: -3.252679363030184 - type: nauc_recall_at_3_diff1 value: 22.94021002974748 - type: nauc_recall_at_3_max value: 29.08903130551997 - type: nauc_recall_at_3_std value: -19.552669466489 - type: nauc_recall_at_5_diff1 value: 21.74684484136416 - type: nauc_recall_at_5_max value: 28.266794468484495 - type: nauc_recall_at_5_std value: -17.0802549689716 - type: ndcg_at_1 value: 63.5 - type: ndcg_at_10 value: 47.156 - type: ndcg_at_100 value: 51.564 - type: ndcg_at_1000 value: 59.386 - type: ndcg_at_20 value: 46.233999999999995 - type: ndcg_at_3 value: 52.90899999999999 - type: ndcg_at_5 value: 49.482 - type: precision_at_1 value: 76.5 - type: precision_at_10 value: 36.875 - type: precision_at_100 value: 11.591999999999999 - type: precision_at_1000 value: 2.346 - type: precision_at_20 value: 27.612 - type: precision_at_3 value: 56.083000000000006 - type: precision_at_5 value: 46.949999999999996 - type: recall_at_1 value: 10.148 - type: recall_at_10 value: 28.183999999999997 - type: recall_at_100 value: 57.187 - type: recall_at_1000 value: 82.069 - type: recall_at_20 value: 36.02 - type: recall_at_3 value: 17.31 - type: recall_at_5 value: 21.711 - task: type: Classification dataset: name: MTEB EmotionClassification (default) type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 53.449999999999996 - type: f1 value: 47.565489310302986 - type: f1_weighted value: 55.143079860495234 - type: main_score value: 53.449999999999996 - task: type: Retrieval dataset: name: MTEB FEVER (default) type: mteb/fever config: default split: test revision: bea83ef9e8fb933d90a2f1d5515737465d613e12 metrics: - type: main_score value: 92.201 - type: map_at_1 value: 84.672 - type: map_at_10 value: 89.755 - type: map_at_100 value: 89.90899999999999 - type: map_at_1000 value: 89.92099999999999 - type: map_at_20 value: 89.84100000000001 - type: map_at_3 value: 89.226 - type: map_at_5 value: 89.527 - type: mrr_at_1 value: 91.1941194119412 - type: mrr_at_10 value: 94.87830925949736 - type: mrr_at_100 value: 94.90049591414446 - type: mrr_at_1000 value: 94.9007044904392 - type: mrr_at_20 value: 94.89512092080201 - type: mrr_at_3 value: 94.67196719671963 - type: mrr_at_5 value: 94.79947994799474 - type: nauc_map_at_1000_diff1 value: 35.639838917786946 - type: nauc_map_at_1000_max value: 13.71831703137999 - type: nauc_map_at_1000_std value: -16.244788042624585 - type: nauc_map_at_100_diff1 value: 35.597291964012435 - type: nauc_map_at_100_max value: 13.707968885019728 - type: nauc_map_at_100_std value: -16.234475561207322 - type: nauc_map_at_10_diff1 value: 35.456175745479825 - type: nauc_map_at_10_max value: 13.886262397124549 - type: nauc_map_at_10_std value: -16.267873880349104 - type: nauc_map_at_1_diff1 value: 46.05069433503191 - type: nauc_map_at_1_max value: 14.89528908470166 - type: nauc_map_at_1_std value: -18.236070781621756 - type: nauc_map_at_20_diff1 value: 35.451928451516864 - type: nauc_map_at_20_max value: 13.6810017695262 - type: nauc_map_at_20_std value: -16.207941145958856 - type: nauc_map_at_3_diff1 value: 35.63821451113733 - type: nauc_map_at_3_max value: 14.50663022315242 - type: nauc_map_at_3_std value: -17.576242065815734 - type: nauc_map_at_5_diff1 value: 35.39067191316313 - type: nauc_map_at_5_max value: 13.95304590199052 - type: nauc_map_at_5_std value: -16.951256853731795 - type: nauc_mrr_at_1000_diff1 value: 75.84258563766511 - type: nauc_mrr_at_1000_max value: 21.62062081403913 - type: nauc_mrr_at_1000_std value: -41.83920347093032 - type: nauc_mrr_at_100_diff1 value: 75.8435737070719 - type: nauc_mrr_at_100_max value: 21.623097020990052 - type: nauc_mrr_at_100_std value: -41.83508809623058 - type: nauc_mrr_at_10_diff1 value: 75.81801484107942 - type: nauc_mrr_at_10_max value: 21.704185197428245 - type: nauc_mrr_at_10_std value: -42.03425200091995 - type: nauc_mrr_at_1_diff1 value: 77.04724957066034 - type: nauc_mrr_at_1_max value: 22.024144657121404 - type: nauc_mrr_at_1_std value: -35.10421745431855 - type: nauc_mrr_at_20_diff1 value: 75.83479830947383 - type: nauc_mrr_at_20_max value: 21.58622779021691 - type: nauc_mrr_at_20_std value: -41.86125228142864 - type: nauc_mrr_at_3_diff1 value: 75.48936568920693 - type: nauc_mrr_at_3_max value: 22.342677145925645 - type: nauc_mrr_at_3_std value: -44.18742305905927 - type: nauc_mrr_at_5_diff1 value: 75.65824242070917 - type: nauc_mrr_at_5_max value: 21.663850758591447 - type: nauc_mrr_at_5_std value: -43.1894070995383 - type: nauc_ndcg_at_1000_diff1 value: 37.14673606903847 - type: nauc_ndcg_at_1000_max value: 13.880930479763675 - type: nauc_ndcg_at_1000_std value: -16.65051678433904 - type: nauc_ndcg_at_100_diff1 value: 36.05555838296192 - type: nauc_ndcg_at_100_max value: 13.635131503635712 - type: nauc_ndcg_at_100_std value: -16.229637436428554 - type: nauc_ndcg_at_10_diff1 value: 35.339471530051725 - type: nauc_ndcg_at_10_max value: 14.068641881473336 - type: nauc_ndcg_at_10_std value: -16.57935046771473 - type: nauc_ndcg_at_1_diff1 value: 77.04724957066034 - type: nauc_ndcg_at_1_max value: 22.024144657121404 - type: nauc_ndcg_at_1_std value: -35.10421745431855 - type: nauc_ndcg_at_20_diff1 value: 35.24453482200621 - type: nauc_ndcg_at_20_max value: 13.358263743060084 - type: nauc_ndcg_at_20_std value: -16.142097913894858 - type: nauc_ndcg_at_3_diff1 value: 37.293476898887505 - type: nauc_ndcg_at_3_max value: 15.133932930960345 - type: nauc_ndcg_at_3_std value: -21.422125651374348 - type: nauc_ndcg_at_5_diff1 value: 35.7967558978059 - type: nauc_ndcg_at_5_max value: 14.153790043028987 - type: nauc_ndcg_at_5_std value: -19.007127645863324 - type: nauc_precision_at_1000_diff1 value: -6.961103885855311 - type: nauc_precision_at_1000_max value: -7.54526392119496 - type: nauc_precision_at_1000_std value: 2.4581095173539143 - type: nauc_precision_at_100_diff1 value: -10.61256188833412 - type: nauc_precision_at_100_max value: -8.41405913180417 - type: nauc_precision_at_100_std value: 4.237708359203656 - type: nauc_precision_at_10_diff1 value: -10.614577160630317 - type: nauc_precision_at_10_max value: -5.167597481869389 - type: nauc_precision_at_10_std value: 1.140168377848993 - type: nauc_precision_at_1_diff1 value: 77.04724957066034 - type: nauc_precision_at_1_max value: 22.024144657121404 - type: nauc_precision_at_1_std value: -35.10421745431855 - type: nauc_precision_at_20_diff1 value: -12.461125809680928 - type: nauc_precision_at_20_max value: -8.635206490614799 - type: nauc_precision_at_20_std value: 3.514319507686466 - type: nauc_precision_at_3_diff1 value: 0.904547422970022 - type: nauc_precision_at_3_max value: 4.762429279213669 - type: nauc_precision_at_3_std value: -17.03444137257749 - type: nauc_precision_at_5_diff1 value: -6.19509635703273 - type: nauc_precision_at_5_max value: -1.748040103739541 - type: nauc_precision_at_5_std value: -8.398471580131982 - type: nauc_recall_at_1000_diff1 value: -24.618899403404722 - type: nauc_recall_at_1000_max value: 2.2601153418804993 - type: nauc_recall_at_1000_std value: 37.967286945129594 - type: nauc_recall_at_100_diff1 value: -16.707602974172563 - type: nauc_recall_at_100_max value: 2.5960324943305695 - type: nauc_recall_at_100_std value: 23.36149042284208 - type: nauc_recall_at_10_diff1 value: -0.9129283126796908 - type: nauc_recall_at_10_max value: 7.721727797820066 - type: nauc_recall_at_10_std value: 5.965839799258808 - type: nauc_recall_at_1_diff1 value: 46.05069433503191 - type: nauc_recall_at_1_max value: 14.89528908470166 - type: nauc_recall_at_1_std value: -18.236070781621756 - type: nauc_recall_at_20_diff1 value: -6.870488981132769 - type: nauc_recall_at_20_max value: 3.020122380835828 - type: nauc_recall_at_20_std value: 12.269552351901797 - type: nauc_recall_at_3_diff1 value: 11.772652803808322 - type: nauc_recall_at_3_max value: 12.467973905249256 - type: nauc_recall_at_3_std value: -12.480260715760009 - type: nauc_recall_at_5_diff1 value: 6.0800516850026405 - type: nauc_recall_at_5_max value: 9.078645485005183 - type: nauc_recall_at_5_std value: -6.26939010430003 - type: ndcg_at_1 value: 91.194 - type: ndcg_at_10 value: 92.201 - type: ndcg_at_100 value: 92.714 - type: ndcg_at_1000 value: 92.91199999999999 - type: ndcg_at_20 value: 92.407 - type: ndcg_at_3 value: 91.56800000000001 - type: ndcg_at_5 value: 91.838 - type: precision_at_1 value: 91.194 - type: precision_at_10 value: 10.485999999999999 - type: precision_at_100 value: 1.093 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_20 value: 5.3100000000000005 - type: precision_at_3 value: 33.833 - type: precision_at_5 value: 20.585 - type: recall_at_1 value: 84.672 - type: recall_at_10 value: 94.878 - type: recall_at_100 value: 96.851 - type: recall_at_1000 value: 98.00999999999999 - type: recall_at_20 value: 95.55900000000001 - type: recall_at_3 value: 92.925 - type: recall_at_5 value: 93.811 - task: type: Retrieval dataset: name: MTEB FiQA2018 (default) type: mteb/fiqa config: default split: test revision: 27a168819829fe9bcd655c2df245fb19452e8e06 metrics: - type: main_score value: 55.846 - type: map_at_1 value: 28.559 - type: map_at_10 value: 47.788000000000004 - type: map_at_100 value: 50.005 - type: map_at_1000 value: 50.135 - type: map_at_20 value: 49.136 - type: map_at_3 value: 42.069 - type: map_at_5 value: 45.359 - type: mrr_at_1 value: 54.166666666666664 - type: mrr_at_10 value: 63.058801195375246 - type: mrr_at_100 value: 63.736243940040104 - type: mrr_at_1000 value: 63.76207658887102 - type: mrr_at_20 value: 63.5120384043063 - type: mrr_at_3 value: 61.419753086419746 - type: mrr_at_5 value: 62.569444444444414 - type: nauc_map_at_1000_diff1 value: 44.69490283021471 - type: nauc_map_at_1000_max value: 30.92551782663574 - type: nauc_map_at_1000_std value: -14.430206995603301 - type: nauc_map_at_100_diff1 value: 44.69185357435577 - type: nauc_map_at_100_max value: 30.862237715726366 - type: nauc_map_at_100_std value: -14.372571056295163 - type: nauc_map_at_10_diff1 value: 45.132555549285684 - type: nauc_map_at_10_max value: 29.69986917324885 - type: nauc_map_at_10_std value: -15.667505260541766 - type: nauc_map_at_1_diff1 value: 54.024618817771106 - type: nauc_map_at_1_max value: 16.708530893815503 - type: nauc_map_at_1_std value: -16.498600116938213 - type: nauc_map_at_20_diff1 value: 44.77799946034473 - type: nauc_map_at_20_max value: 30.299416183664622 - type: nauc_map_at_20_std value: -15.294677224633338 - type: nauc_map_at_3_diff1 value: 47.7216165906905 - type: nauc_map_at_3_max value: 24.80525467300038 - type: nauc_map_at_3_std value: -16.167977354068125 - type: nauc_map_at_5_diff1 value: 46.02953711254381 - type: nauc_map_at_5_max value: 27.79201453498519 - type: nauc_map_at_5_std value: -15.862235643724656 - type: nauc_mrr_at_1000_diff1 value: 52.604857791228795 - type: nauc_mrr_at_1000_max value: 41.90760486398485 - type: nauc_mrr_at_1000_std value: -11.645858368861806 - type: nauc_mrr_at_100_diff1 value: 52.592257747310256 - type: nauc_mrr_at_100_max value: 41.90215764910407 - type: nauc_mrr_at_100_std value: -11.629701977702393 - type: nauc_mrr_at_10_diff1 value: 52.41749778055747 - type: nauc_mrr_at_10_max value: 41.848571254096726 - type: nauc_mrr_at_10_std value: -11.666002358568836 - type: nauc_mrr_at_1_diff1 value: 56.38878064687782 - type: nauc_mrr_at_1_max value: 41.723130889319506 - type: nauc_mrr_at_1_std value: -14.860577950564089 - type: nauc_mrr_at_20_diff1 value: 52.46272520475529 - type: nauc_mrr_at_20_max value: 41.8213025754279 - type: nauc_mrr_at_20_std value: -11.733990850254274 - type: nauc_mrr_at_3_diff1 value: 52.907019830141664 - type: nauc_mrr_at_3_max value: 42.091778179711596 - type: nauc_mrr_at_3_std value: -12.083789626498938 - type: nauc_mrr_at_5_diff1 value: 52.504019426379934 - type: nauc_mrr_at_5_max value: 42.18926010896201 - type: nauc_mrr_at_5_std value: -11.681910264950588 - type: nauc_ndcg_at_1000_diff1 value: 45.106796590190186 - type: nauc_ndcg_at_1000_max value: 35.1491443230931 - type: nauc_ndcg_at_1000_std value: -10.564391346254856 - type: nauc_ndcg_at_100_diff1 value: 45.0013474629376 - type: nauc_ndcg_at_100_max value: 34.41256574321341 - type: nauc_ndcg_at_100_std value: -9.633883528747441 - type: nauc_ndcg_at_10_diff1 value: 45.237721582816334 - type: nauc_ndcg_at_10_max value: 32.26898539095568 - type: nauc_ndcg_at_10_std value: -13.9237235021985 - type: nauc_ndcg_at_1_diff1 value: 56.38878064687782 - type: nauc_ndcg_at_1_max value: 41.723130889319506 - type: nauc_ndcg_at_1_std value: -14.860577950564089 - type: nauc_ndcg_at_20_diff1 value: 44.79397541599229 - type: nauc_ndcg_at_20_max value: 32.63346133703614 - type: nauc_ndcg_at_20_std value: -13.134715591222845 - type: nauc_ndcg_at_3_diff1 value: 44.2180519401759 - type: nauc_ndcg_at_3_max value: 34.12433670269642 - type: nauc_ndcg_at_3_std value: -14.270883602867066 - type: nauc_ndcg_at_5_diff1 value: 44.7842185764849 - type: nauc_ndcg_at_5_max value: 33.07177981305266 - type: nauc_ndcg_at_5_std value: -14.727492973620834 - type: nauc_precision_at_1000_diff1 value: -20.913994966271765 - type: nauc_precision_at_1000_max value: 23.20465985071573 - type: nauc_precision_at_1000_std value: 12.925926506702536 - type: nauc_precision_at_100_diff1 value: -15.403005704192823 - type: nauc_precision_at_100_max value: 26.893804588333875 - type: nauc_precision_at_100_std value: 15.444177311713084 - type: nauc_precision_at_10_diff1 value: -0.3346158191962367 - type: nauc_precision_at_10_max value: 34.35784817973049 - type: nauc_precision_at_10_std value: 1.564875689584266 - type: nauc_precision_at_1_diff1 value: 56.38878064687782 - type: nauc_precision_at_1_max value: 41.723130889319506 - type: nauc_precision_at_1_std value: -14.860577950564089 - type: nauc_precision_at_20_diff1 value: -7.911475407210952 - type: nauc_precision_at_20_max value: 30.98272615969314 - type: nauc_precision_at_20_std value: 5.867214521505607 - type: nauc_precision_at_3_diff1 value: 16.75568275861896 - type: nauc_precision_at_3_max value: 37.24231639527263 - type: nauc_precision_at_3_std value: -3.9715436715209314 - type: nauc_precision_at_5_diff1 value: 6.630969568351265 - type: nauc_precision_at_5_max value: 37.17946067183075 - type: nauc_precision_at_5_std value: -1.043726756895724 - type: nauc_recall_at_1000_diff1 value: 13.830553206891198 - type: nauc_recall_at_1000_max value: 27.133157795437697 - type: nauc_recall_at_1000_std value: 48.60055204311751 - type: nauc_recall_at_100_diff1 value: 31.861798106199263 - type: nauc_recall_at_100_max value: 23.85176506448154 - type: nauc_recall_at_100_std value: 13.429501664456197 - type: nauc_recall_at_10_diff1 value: 35.84630942635496 - type: nauc_recall_at_10_max value: 22.13537652643054 - type: nauc_recall_at_10_std value: -12.889731755986558 - type: nauc_recall_at_1_diff1 value: 54.024618817771106 - type: nauc_recall_at_1_max value: 16.708530893815503 - type: nauc_recall_at_1_std value: -16.498600116938213 - type: nauc_recall_at_20_diff1 value: 32.12586341954912 - type: nauc_recall_at_20_max value: 20.180665442485186 - type: nauc_recall_at_20_std value: -10.417213522381246 - type: nauc_recall_at_3_diff1 value: 42.32867586873781 - type: nauc_recall_at_3_max value: 20.365513905388248 - type: nauc_recall_at_3_std value: -15.533455093420095 - type: nauc_recall_at_5_diff1 value: 38.37329209164264 - type: nauc_recall_at_5_max value: 22.003563642599406 - type: nauc_recall_at_5_std value: -14.300434277371249 - type: ndcg_at_1 value: 54.167 - type: ndcg_at_10 value: 55.846 - type: ndcg_at_100 value: 62.427 - type: ndcg_at_1000 value: 64.301 - type: ndcg_at_20 value: 58.858 - type: ndcg_at_3 value: 52.29899999999999 - type: ndcg_at_5 value: 53.535 - type: precision_at_1 value: 54.167 - type: precision_at_10 value: 15.386 - type: precision_at_100 value: 2.244 - type: precision_at_1000 value: 0.256 - type: precision_at_20 value: 9.043 - type: precision_at_3 value: 35.391 - type: precision_at_5 value: 25.679000000000002 - type: recall_at_1 value: 28.559 - type: recall_at_10 value: 62.746 - type: recall_at_100 value: 85.943 - type: recall_at_1000 value: 97.111 - type: recall_at_20 value: 71.857 - type: recall_at_3 value: 47.905 - type: recall_at_5 value: 55.083000000000006 - task: type: Retrieval dataset: name: MTEB HotpotQA (default) type: mteb/hotpotqa config: default split: test revision: ab518f4d6fcca38d87c25209f94beba119d02014 metrics: - type: main_score value: 77.125 - type: map_at_1 value: 41.519 - type: map_at_10 value: 70.167 - type: map_at_100 value: 70.953 - type: map_at_1000 value: 71.001 - type: map_at_20 value: 70.649 - type: map_at_3 value: 66.981 - type: map_at_5 value: 69.015 - type: mrr_at_1 value: 83.03848750844024 - type: mrr_at_10 value: 87.86450596443821 - type: mrr_at_100 value: 87.99248423178447 - type: mrr_at_1000 value: 87.99763635592048 - type: mrr_at_20 value: 87.94595084887314 - type: mrr_at_3 value: 87.16857978843102 - type: mrr_at_5 value: 87.64528471753292 - type: nauc_map_at_1000_diff1 value: 13.006772427678182 - type: nauc_map_at_1000_max value: 19.27772881602346 - type: nauc_map_at_1000_std value: -1.6365617870914253 - type: nauc_map_at_100_diff1 value: 12.989359854738897 - type: nauc_map_at_100_max value: 19.26281165622185 - type: nauc_map_at_100_std value: -1.593792957025615 - type: nauc_map_at_10_diff1 value: 12.534914210271793 - type: nauc_map_at_10_max value: 19.10074296644962 - type: nauc_map_at_10_std value: -2.08708846366053 - type: nauc_map_at_1_diff1 value: 65.02315378915155 - type: nauc_map_at_1_max value: 51.17274340822432 - type: nauc_map_at_1_std value: -13.958643399099477 - type: nauc_map_at_20_diff1 value: 12.789367750628891 - type: nauc_map_at_20_max value: 19.196329125088766 - type: nauc_map_at_20_std value: -1.6469320227872484 - type: nauc_map_at_3_diff1 value: 11.729016028480842 - type: nauc_map_at_3_max value: 18.934857415045407 - type: nauc_map_at_3_std value: -5.093979543429253 - type: nauc_map_at_5_diff1 value: 12.276578116231432 - type: nauc_map_at_5_max value: 19.136131837434224 - type: nauc_map_at_5_std value: -3.132205288648261 - type: nauc_mrr_at_1000_diff1 value: 64.62877037309876 - type: nauc_mrr_at_1000_max value: 53.98693739367828 - type: nauc_mrr_at_1000_std value: -9.126474060405949 - type: nauc_mrr_at_100_diff1 value: 64.62922052398424 - type: nauc_mrr_at_100_max value: 53.988818580376716 - type: nauc_mrr_at_100_std value: -9.112000794260997 - type: nauc_mrr_at_10_diff1 value: 64.67968982658613 - type: nauc_mrr_at_10_max value: 54.09549018870873 - type: nauc_mrr_at_10_std value: -9.085391215905886 - type: nauc_mrr_at_1_diff1 value: 65.02315378915155 - type: nauc_mrr_at_1_max value: 51.17274340822432 - type: nauc_mrr_at_1_std value: -13.958643399099477 - type: nauc_mrr_at_20_diff1 value: 64.64602458319244 - type: nauc_mrr_at_20_max value: 54.04291820548144 - type: nauc_mrr_at_20_std value: -9.029239521036427 - type: nauc_mrr_at_3_diff1 value: 64.12171234144306 - type: nauc_mrr_at_3_max value: 54.20212686844387 - type: nauc_mrr_at_3_std value: -9.46453489594666 - type: nauc_mrr_at_5_diff1 value: 64.5365566477813 - type: nauc_mrr_at_5_max value: 54.192604824973614 - type: nauc_mrr_at_5_std value: -8.94640972357742 - type: nauc_ndcg_at_1000_diff1 value: 20.6624137699915 - type: nauc_ndcg_at_1000_max value: 24.209126006971047 - type: nauc_ndcg_at_1000_std value: 1.1807098633960282 - type: nauc_ndcg_at_100_diff1 value: 20.013258997064227 - type: nauc_ndcg_at_100_max value: 23.720082477112257 - type: nauc_ndcg_at_100_std value: 2.2242860498709613 - type: nauc_ndcg_at_10_diff1 value: 18.09695752050781 - type: nauc_ndcg_at_10_max value: 22.98525969470443 - type: nauc_ndcg_at_10_std value: 0.2786454782720934 - type: nauc_ndcg_at_1_diff1 value: 65.02315378915155 - type: nauc_ndcg_at_1_max value: 51.17274340822432 - type: nauc_ndcg_at_1_std value: -13.958643399099477 - type: nauc_ndcg_at_20_diff1 value: 18.753834550561898 - type: nauc_ndcg_at_20_max value: 23.203931192654537 - type: nauc_ndcg_at_20_std value: 1.682937143064539 - type: nauc_ndcg_at_3_diff1 value: 17.167189283772277 - type: nauc_ndcg_at_3_max value: 23.118180393133674 - type: nauc_ndcg_at_3_std value: -4.535717324455032 - type: nauc_ndcg_at_5_diff1 value: 17.699969883446347 - type: nauc_ndcg_at_5_max value: 23.182339354183792 - type: nauc_ndcg_at_5_std value: -1.6781755997000052 - type: nauc_precision_at_1000_diff1 value: 7.5788444847542245 - type: nauc_precision_at_1000_max value: 9.433104189270141 - type: nauc_precision_at_1000_std value: 44.591190042314686 - type: nauc_precision_at_100_diff1 value: 6.692654365804252 - type: nauc_precision_at_100_max value: 11.25176923344072 - type: nauc_precision_at_100_std value: 31.693966326856067 - type: nauc_precision_at_10_diff1 value: 4.635248064775999 - type: nauc_precision_at_10_max value: 13.589851125207542 - type: nauc_precision_at_10_std value: 10.894403687664266 - type: nauc_precision_at_1_diff1 value: 65.02315378915155 - type: nauc_precision_at_1_max value: 51.17274340822432 - type: nauc_precision_at_1_std value: -13.958643399099477 - type: nauc_precision_at_20_diff1 value: 4.525274589346107 - type: nauc_precision_at_20_max value: 12.438591940128317 - type: nauc_precision_at_20_std value: 18.15861880489477 - type: nauc_precision_at_3_diff1 value: 6.184298058843799 - type: nauc_precision_at_3_max value: 16.54134519266211 - type: nauc_precision_at_3_std value: -1.4484102465558355 - type: nauc_precision_at_5_diff1 value: 5.657946113014627 - type: nauc_precision_at_5_max value: 15.559958816376259 - type: nauc_precision_at_5_std value: 4.5741247848934 - type: nauc_recall_at_1000_diff1 value: 7.57884448475466 - type: nauc_recall_at_1000_max value: 9.433104189270699 - type: nauc_recall_at_1000_std value: 44.59119004231575 - type: nauc_recall_at_100_diff1 value: 6.692654365804161 - type: nauc_recall_at_100_max value: 11.251769233440859 - type: nauc_recall_at_100_std value: 31.69396632685608 - type: nauc_recall_at_10_diff1 value: 4.6352480647758165 - type: nauc_recall_at_10_max value: 13.589851125207373 - type: nauc_recall_at_10_std value: 10.89440368766406 - type: nauc_recall_at_1_diff1 value: 65.02315378915155 - type: nauc_recall_at_1_max value: 51.17274340822432 - type: nauc_recall_at_1_std value: -13.958643399099477 - type: nauc_recall_at_20_diff1 value: 4.525274589346308 - type: nauc_recall_at_20_max value: 12.43859194012829 - type: nauc_recall_at_20_std value: 18.158618804894918 - type: nauc_recall_at_3_diff1 value: 6.18429805884378 - type: nauc_recall_at_3_max value: 16.541345192662053 - type: nauc_recall_at_3_std value: -1.448410246555865 - type: nauc_recall_at_5_diff1 value: 5.657946113014677 - type: nauc_recall_at_5_max value: 15.559958816376168 - type: nauc_recall_at_5_std value: 4.5741247848933995 - type: ndcg_at_1 value: 83.038 - type: ndcg_at_10 value: 77.125 - type: ndcg_at_100 value: 79.714 - type: ndcg_at_1000 value: 80.589 - type: ndcg_at_20 value: 78.277 - type: ndcg_at_3 value: 72.78099999999999 - type: ndcg_at_5 value: 75.274 - type: precision_at_1 value: 83.038 - type: precision_at_10 value: 16.104 - type: precision_at_100 value: 1.81 - type: precision_at_1000 value: 0.192 - type: precision_at_20 value: 8.421 - type: precision_at_3 value: 47.327999999999996 - type: precision_at_5 value: 30.358 - type: recall_at_1 value: 41.519 - type: recall_at_10 value: 80.52 - type: recall_at_100 value: 90.506 - type: recall_at_1000 value: 96.219 - type: recall_at_20 value: 84.21300000000001 - type: recall_at_3 value: 70.993 - type: recall_at_5 value: 75.895 - task: type: Classification dataset: name: MTEB ImdbClassification (default) type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 87.502 - type: ap value: 82.95784261019377 - type: ap_weighted value: 82.95784261019377 - type: f1 value: 87.46268628092736 - type: f1_weighted value: 87.46268628092734 - type: main_score value: 87.502 - task: type: Retrieval dataset: name: MTEB MSMARCO (default) type: mteb/msmarco config: default split: dev revision: c5a29a104738b98a9e76336939199e264163d4a0 metrics: - type: main_score value: 42.828 - type: map_at_1 value: 22.622 - type: map_at_10 value: 35.551 - type: map_at_100 value: 36.720000000000006 - type: map_at_1000 value: 36.764 - type: map_at_20 value: 36.289 - type: map_at_3 value: 31.452 - type: map_at_5 value: 33.78 - type: mrr_at_1 value: 23.30945558739255 - type: mrr_at_10 value: 36.1412311820621 - type: mrr_at_100 value: 37.24636679692144 - type: mrr_at_1000 value: 37.28460384375999 - type: mrr_at_20 value: 36.846278028371025 - type: mrr_at_3 value: 32.13228271251181 - type: mrr_at_5 value: 34.41380133715374 - type: nauc_map_at_1000_diff1 value: 34.255139266678064 - type: nauc_map_at_1000_max value: 27.634109755448193 - type: nauc_map_at_1000_std value: -25.068179763907832 - type: nauc_map_at_100_diff1 value: 34.2417213419493 - type: nauc_map_at_100_max value: 27.65433689860232 - type: nauc_map_at_100_std value: -25.039117799889272 - type: nauc_map_at_10_diff1 value: 34.28723089283392 - type: nauc_map_at_10_max value: 27.860718359060744 - type: nauc_map_at_10_std value: -25.751446063547345 - type: nauc_map_at_1_diff1 value: 38.198938023289145 - type: nauc_map_at_1_max value: 23.254490119071715 - type: nauc_map_at_1_std value: -22.980905207183305 - type: nauc_map_at_20_diff1 value: 34.26024735371797 - type: nauc_map_at_20_max value: 27.75693143222282 - type: nauc_map_at_20_std value: -25.28503577794799 - type: nauc_map_at_3_diff1 value: 34.64447883282737 - type: nauc_map_at_3_max value: 26.378811567506215 - type: nauc_map_at_3_std value: -25.67491148988595 - type: nauc_map_at_5_diff1 value: 34.29213028848551 - type: nauc_map_at_5_max value: 27.34485307415968 - type: nauc_map_at_5_std value: -25.85010405437415 - type: nauc_mrr_at_1000_diff1 value: 33.86421338143246 - type: nauc_mrr_at_1000_max value: 27.250892398176134 - type: nauc_mrr_at_1000_std value: -24.929427212418865 - type: nauc_mrr_at_100_diff1 value: 33.85402401030439 - type: nauc_mrr_at_100_max value: 27.27120328136017 - type: nauc_mrr_at_100_std value: -24.902391477950804 - type: nauc_mrr_at_10_diff1 value: 33.880770277581384 - type: nauc_mrr_at_10_max value: 27.441949715579018 - type: nauc_mrr_at_10_std value: -25.580715458382443 - type: nauc_mrr_at_1_diff1 value: 37.6771342755869 - type: nauc_mrr_at_1_max value: 23.265052651660735 - type: nauc_mrr_at_1_std value: -23.205472826319582 - type: nauc_mrr_at_20_diff1 value: 33.85060604038063 - type: nauc_mrr_at_20_max value: 27.354473158418273 - type: nauc_mrr_at_20_std value: -25.12086278884105 - type: nauc_mrr_at_3_diff1 value: 34.06405277224216 - type: nauc_mrr_at_3_max value: 26.07372143403643 - type: nauc_mrr_at_3_std value: -25.630362103104886 - type: nauc_mrr_at_5_diff1 value: 33.87118592722288 - type: nauc_mrr_at_5_max value: 26.975769667003075 - type: nauc_mrr_at_5_std value: -25.71929161450833 - type: nauc_ndcg_at_1000_diff1 value: 33.22331426038353 - type: nauc_ndcg_at_1000_max value: 28.608268643287936 - type: nauc_ndcg_at_1000_std value: -24.044160312444436 - type: nauc_ndcg_at_100_diff1 value: 32.922164280838416 - type: nauc_ndcg_at_100_max value: 29.21462172377906 - type: nauc_ndcg_at_100_std value: -23.030348617039646 - type: nauc_ndcg_at_10_diff1 value: 33.09358216672861 - type: nauc_ndcg_at_10_max value: 30.195095713195773 - type: nauc_ndcg_at_10_std value: -26.59142800601194 - type: nauc_ndcg_at_1_diff1 value: 37.6771342755869 - type: nauc_ndcg_at_1_max value: 23.265052651660735 - type: nauc_ndcg_at_1_std value: -23.205472826319582 - type: nauc_ndcg_at_20_diff1 value: 32.94672668778977 - type: nauc_ndcg_at_20_max value: 29.953670242101982 - type: nauc_ndcg_at_20_std value: -24.813603915035287 - type: nauc_ndcg_at_3_diff1 value: 33.62562972482792 - type: nauc_ndcg_at_3_max value: 27.190259593091398 - type: nauc_ndcg_at_3_std value: -26.515769994261813 - type: nauc_ndcg_at_5_diff1 value: 33.10136789340702 - type: nauc_ndcg_at_5_max value: 28.837591134098755 - type: nauc_ndcg_at_5_std value: -26.780772701125983 - type: nauc_precision_at_1000_diff1 value: -9.899137556294614 - type: nauc_precision_at_1000_max value: -4.36970723719003 - type: nauc_precision_at_1000_std value: 8.958402933111124 - type: nauc_precision_at_100_diff1 value: 6.280164246108402 - type: nauc_precision_at_100_max value: 20.31023470088511 - type: nauc_precision_at_100_std value: 8.411373550208506 - type: nauc_precision_at_10_diff1 value: 25.404046800053337 - type: nauc_precision_at_10_max value: 35.785692630641734 - type: nauc_precision_at_10_std value: -27.68021725976897 - type: nauc_precision_at_1_diff1 value: 37.6771342755869 - type: nauc_precision_at_1_max value: 23.265052651660735 - type: nauc_precision_at_1_std value: -23.205472826319582 - type: nauc_precision_at_20_diff1 value: 21.065313055028774 - type: nauc_precision_at_20_max value: 33.42637158232517 - type: nauc_precision_at_20_std value: -17.587879787399963 - type: nauc_precision_at_3_diff1 value: 30.05809515417773 - type: nauc_precision_at_3_max value: 29.186749260489574 - type: nauc_precision_at_3_std value: -28.68718050967194 - type: nauc_precision_at_5_diff1 value: 27.643220081226616 - type: nauc_precision_at_5_max value: 32.24514921367616 - type: nauc_precision_at_5_std value: -28.86571961903398 - type: nauc_recall_at_1000_diff1 value: -4.845074018125273 - type: nauc_recall_at_1000_max value: 29.244169860264673 - type: nauc_recall_at_1000_std value: 66.46444727873201 - type: nauc_recall_at_100_diff1 value: 21.726863329795037 - type: nauc_recall_at_100_max value: 42.20994961358766 - type: nauc_recall_at_100_std value: 11.221444032745431 - type: nauc_recall_at_10_diff1 value: 29.421574107699655 - type: nauc_recall_at_10_max value: 38.66483553848921 - type: nauc_recall_at_10_std value: -29.12606967024905 - type: nauc_recall_at_1_diff1 value: 38.198938023289145 - type: nauc_recall_at_1_max value: 23.254490119071715 - type: nauc_recall_at_1_std value: -22.980905207183305 - type: nauc_recall_at_20_diff1 value: 27.651691823296915 - type: nauc_recall_at_20_max value: 40.03031738570312 - type: nauc_recall_at_20_std value: -20.543175657258416 - type: nauc_recall_at_3_diff1 value: 31.198692423845504 - type: nauc_recall_at_3_max value: 29.39596634981627 - type: nauc_recall_at_3_std value: -28.616355439292363 - type: nauc_recall_at_5_diff1 value: 29.866348488833548 - type: nauc_recall_at_5_max value: 33.13370189331543 - type: nauc_recall_at_5_std value: -29.203314508482975 - type: ndcg_at_1 value: 23.308999999999997 - type: ndcg_at_10 value: 42.828 - type: ndcg_at_100 value: 48.378 - type: ndcg_at_1000 value: 49.448 - type: ndcg_at_20 value: 45.454 - type: ndcg_at_3 value: 34.495 - type: ndcg_at_5 value: 38.631 - type: precision_at_1 value: 23.308999999999997 - type: precision_at_10 value: 6.834 - type: precision_at_100 value: 0.96 - type: precision_at_1000 value: 0.105 - type: precision_at_20 value: 3.966 - type: precision_at_3 value: 14.761 - type: precision_at_5 value: 10.968 - type: recall_at_1 value: 22.622 - type: recall_at_10 value: 65.281 - type: recall_at_100 value: 90.753 - type: recall_at_1000 value: 98.89099999999999 - type: recall_at_20 value: 75.497 - type: recall_at_3 value: 42.631 - type: recall_at_5 value: 52.537 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 95.39899680802554 - type: f1 value: 95.07663901231138 - type: f1_weighted value: 95.41776447870696 - type: main_score value: 95.39899680802554 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 78.61833105335158 - type: f1 value: 59.28800280436582 - type: f1_weighted value: 80.57433767622484 - type: main_score value: 78.61833105335158 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 77.96906523201076 - type: f1 value: 77.08935874634084 - type: f1_weighted value: 77.21652401469349 - type: main_score value: 77.96906523201076 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 80.13786146603901 - type: f1 value: 79.4209677252264 - type: f1_weighted value: 80.02557645073855 - type: main_score value: 80.13786146603901 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P (default) type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: main_score value: 32.51760527055797 - type: v_measure value: 32.51760527055797 - type: v_measure_std value: 1.6034905723582067 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S (default) type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: main_score value: 32.82947207274322 - type: v_measure value: 32.82947207274322 - type: v_measure_std value: 1.4989355394854873 - task: type: Reranking dataset: name: MTEB MindSmallReranking (default) type: mteb/mind_small config: default split: test revision: 59042f120c80e8afa9cdbb224f67076cec0fc9a7 metrics: - type: main_score value: 32.31611300891167 - type: map value: 32.31611300891167 - type: mrr value: 33.511223415752724 - type: nAUC_map_diff1 value: 11.450478664025022 - type: nAUC_map_max value: -20.133408006057575 - type: nAUC_map_std value: 1.4219207720032518 - type: nAUC_mrr_diff1 value: 10.75655549847738 - type: nAUC_mrr_max value: -15.057198169120886 - type: nAUC_mrr_std value: 3.135802935705323 - task: type: Retrieval dataset: name: MTEB NFCorpus (default) type: mteb/nfcorpus config: default split: test revision: ec0fa4fe99da2ff19ca1214b7966684033a58814 metrics: - type: main_score value: 40.897 - type: map_at_1 value: 6.502 - type: map_at_10 value: 15.42 - type: map_at_100 value: 20.230999999999998 - type: map_at_1000 value: 21.97 - type: map_at_20 value: 17.288999999999998 - type: map_at_3 value: 11.023 - type: map_at_5 value: 13.043 - type: mrr_at_1 value: 51.39318885448917 - type: mrr_at_10 value: 60.90458990613791 - type: mrr_at_100 value: 61.30822147543883 - type: mrr_at_1000 value: 61.348380045065255 - type: mrr_at_20 value: 61.17699278343732 - type: mrr_at_3 value: 58.926728586171336 - type: mrr_at_5 value: 60.18059855521158 - type: nauc_map_at_1000_diff1 value: 22.314584774002412 - type: nauc_map_at_1000_max value: 25.641377755476196 - type: nauc_map_at_1000_std value: 11.600298129762008 - type: nauc_map_at_100_diff1 value: 24.373091310053177 - type: nauc_map_at_100_max value: 25.210868210148224 - type: nauc_map_at_100_std value: 8.080464969348098 - type: nauc_map_at_10_diff1 value: 27.966938127237427 - type: nauc_map_at_10_max value: 19.50242948342481 - type: nauc_map_at_10_std value: -3.2987968208791165 - type: nauc_map_at_1_diff1 value: 46.77689461811175 - type: nauc_map_at_1_max value: 5.992563122661309 - type: nauc_map_at_1_std value: -19.87851166453543 - type: nauc_map_at_20_diff1 value: 26.613502043659892 - type: nauc_map_at_20_max value: 22.01981060253049 - type: nauc_map_at_20_std value: 0.35709748629600185 - type: nauc_map_at_3_diff1 value: 35.07956999136986 - type: nauc_map_at_3_max value: 10.09189113221904 - type: nauc_map_at_3_std value: -12.806383375188329 - type: nauc_map_at_5_diff1 value: 31.171056747033298 - type: nauc_map_at_5_max value: 13.891688163973285 - type: nauc_map_at_5_std value: -9.749824390421193 - type: nauc_mrr_at_1000_diff1 value: 29.800909959137623 - type: nauc_mrr_at_1000_max value: 33.341220744025435 - type: nauc_mrr_at_1000_std value: 21.75001312294901 - type: nauc_mrr_at_100_diff1 value: 29.800367929516835 - type: nauc_mrr_at_100_max value: 33.38136058844821 - type: nauc_mrr_at_100_std value: 21.789223788566236 - type: nauc_mrr_at_10_diff1 value: 29.974925130970114 - type: nauc_mrr_at_10_max value: 33.432216655498976 - type: nauc_mrr_at_10_std value: 21.74795266714447 - type: nauc_mrr_at_1_diff1 value: 32.8838135925804 - type: nauc_mrr_at_1_max value: 24.529371483084574 - type: nauc_mrr_at_1_std value: 9.58748035277067 - type: nauc_mrr_at_20_diff1 value: 29.91438913779757 - type: nauc_mrr_at_20_max value: 33.28744302059532 - type: nauc_mrr_at_20_std value: 21.724201609379104 - type: nauc_mrr_at_3_diff1 value: 30.166535157699588 - type: nauc_mrr_at_3_max value: 31.75339502993889 - type: nauc_mrr_at_3_std value: 20.75073010210299 - type: nauc_mrr_at_5_diff1 value: 30.11312282641972 - type: nauc_mrr_at_5_max value: 33.67582756088135 - type: nauc_mrr_at_5_std value: 21.686983222728376 - type: nauc_ndcg_at_1000_diff1 value: 20.035313336303506 - type: nauc_ndcg_at_1000_max value: 39.86024861036285 - type: nauc_ndcg_at_1000_std value: 30.06202908899349 - type: nauc_ndcg_at_100_diff1 value: 17.713593456491267 - type: nauc_ndcg_at_100_max value: 33.98291104786461 - type: nauc_ndcg_at_100_std value: 24.365237125638483 - type: nauc_ndcg_at_10_diff1 value: 13.96135837037495 - type: nauc_ndcg_at_10_max value: 31.474413824866204 - type: nauc_ndcg_at_10_std value: 22.86874340960288 - type: nauc_ndcg_at_1_diff1 value: 32.68811889497608 - type: nauc_ndcg_at_1_max value: 23.17817936378513 - type: nauc_ndcg_at_1_std value: 10.111894875706865 - type: nauc_ndcg_at_20_diff1 value: 13.69750453207742 - type: nauc_ndcg_at_20_max value: 29.238632311676156 - type: nauc_ndcg_at_20_std value: 21.51101902939643 - type: nauc_ndcg_at_3_diff1 value: 18.142662080015292 - type: nauc_ndcg_at_3_max value: 28.79319631593914 - type: nauc_ndcg_at_3_std value: 20.37950822552368 - type: nauc_ndcg_at_5_diff1 value: 15.4836510230026 - type: nauc_ndcg_at_5_max value: 29.524697678622612 - type: nauc_ndcg_at_5_std value: 20.31186496372464 - type: nauc_precision_at_1000_diff1 value: -22.60022337362534 - type: nauc_precision_at_1000_max value: 0.5297084926025946 - type: nauc_precision_at_1000_std value: 31.86960110378292 - type: nauc_precision_at_100_diff1 value: -17.109296942324217 - type: nauc_precision_at_100_max value: 13.114543385872182 - type: nauc_precision_at_100_std value: 40.546888665997436 - type: nauc_precision_at_10_diff1 value: -5.751184011272755 - type: nauc_precision_at_10_max value: 30.397788247842115 - type: nauc_precision_at_10_std value: 36.40734041302498 - type: nauc_precision_at_1_diff1 value: 32.09040033058441 - type: nauc_precision_at_1_max value: 25.115800505270425 - type: nauc_precision_at_1_std value: 11.185512689606364 - type: nauc_precision_at_20_diff1 value: -8.865872005542506 - type: nauc_precision_at_20_max value: 24.858906642791396 - type: nauc_precision_at_20_std value: 37.31331026229662 - type: nauc_precision_at_3_diff1 value: 6.279790630752277 - type: nauc_precision_at_3_max value: 31.18333274155458 - type: nauc_precision_at_3_std value: 28.5156938516134 - type: nauc_precision_at_5_diff1 value: -0.5685414700385947 - type: nauc_precision_at_5_max value: 30.032601466710123 - type: nauc_precision_at_5_std value: 29.72096748351155 - type: nauc_recall_at_1000_diff1 value: 8.024431478610873 - type: nauc_recall_at_1000_max value: 26.25396667158672 - type: nauc_recall_at_1000_std value: 17.773467989292417 - type: nauc_recall_at_100_diff1 value: 9.825317515542215 - type: nauc_recall_at_100_max value: 25.786552000740272 - type: nauc_recall_at_100_std value: 14.229170280235248 - type: nauc_recall_at_10_diff1 value: 18.41325848063197 - type: nauc_recall_at_10_max value: 20.97636763770323 - type: nauc_recall_at_10_std value: -1.3953906116846648 - type: nauc_recall_at_1_diff1 value: 46.77689461811175 - type: nauc_recall_at_1_max value: 5.992563122661309 - type: nauc_recall_at_1_std value: -19.87851166453543 - type: nauc_recall_at_20_diff1 value: 14.796541894273506 - type: nauc_recall_at_20_max value: 17.963697314215956 - type: nauc_recall_at_20_std value: -1.713455868691847 - type: nauc_recall_at_3_diff1 value: 30.30471830588464 - type: nauc_recall_at_3_max value: 9.339098307835556 - type: nauc_recall_at_3_std value: -11.089655201238893 - type: nauc_recall_at_5_diff1 value: 24.446476829886976 - type: nauc_recall_at_5_max value: 15.769417011561993 - type: nauc_recall_at_5_std value: -7.8192403091680145 - type: ndcg_at_1 value: 50.0 - type: ndcg_at_10 value: 40.897 - type: ndcg_at_100 value: 37.849 - type: ndcg_at_1000 value: 46.666999999999994 - type: ndcg_at_20 value: 38.224000000000004 - type: ndcg_at_3 value: 45.964 - type: ndcg_at_5 value: 44.157999999999994 - type: precision_at_1 value: 51.702999999999996 - type: precision_at_10 value: 31.084 - type: precision_at_100 value: 9.913 - type: precision_at_1000 value: 2.316 - type: precision_at_20 value: 22.833000000000002 - type: precision_at_3 value: 43.24 - type: precision_at_5 value: 39.133 - type: recall_at_1 value: 6.502 - type: recall_at_10 value: 20.165 - type: recall_at_100 value: 38.957 - type: recall_at_1000 value: 71.273 - type: recall_at_20 value: 25.21 - type: recall_at_3 value: 12.458 - type: recall_at_5 value: 15.519 - task: type: Retrieval dataset: name: MTEB NQ (default) type: mteb/nq config: default split: test revision: b774495ed302d8c44a3a7ea25c90dbce03968f31 metrics: - type: main_score value: 65.44 - type: map_at_1 value: 40.571 - type: map_at_10 value: 57.969 - type: map_at_100 value: 58.728 - type: map_at_1000 value: 58.739 - type: map_at_20 value: 58.51 - type: map_at_3 value: 53.748 - type: map_at_5 value: 56.54900000000001 - type: mrr_at_1 value: 45.74159907300116 - type: mrr_at_10 value: 60.22695607791187 - type: mrr_at_100 value: 60.74267440494396 - type: mrr_at_1000 value: 60.749624106896846 - type: mrr_at_20 value: 60.59501256435345 - type: mrr_at_3 value: 57.08285052143659 - type: mrr_at_5 value: 59.252607184240844 - type: nauc_map_at_1000_diff1 value: 37.171845352505656 - type: nauc_map_at_1000_max value: 15.432607015597297 - type: nauc_map_at_1000_std value: -11.97932899920431 - type: nauc_map_at_100_diff1 value: 37.16682506561907 - type: nauc_map_at_100_max value: 15.437190638823806 - type: nauc_map_at_100_std value: -11.975557698168156 - type: nauc_map_at_10_diff1 value: 37.192551151144194 - type: nauc_map_at_10_max value: 15.753997478818714 - type: nauc_map_at_10_std value: -12.40974603357028 - type: nauc_map_at_1_diff1 value: 40.34581435107217 - type: nauc_map_at_1_max value: 11.236223915633456 - type: nauc_map_at_1_std value: -11.972657299364364 - type: nauc_map_at_20_diff1 value: 37.161885832329325 - type: nauc_map_at_20_max value: 15.470469711929697 - type: nauc_map_at_20_std value: -12.060519221962725 - type: nauc_map_at_3_diff1 value: 37.065913320170644 - type: nauc_map_at_3_max value: 15.314893136101853 - type: nauc_map_at_3_std value: -12.937730813736737 - type: nauc_map_at_5_diff1 value: 36.65247215835913 - type: nauc_map_at_5_max value: 15.63968636667063 - type: nauc_map_at_5_std value: -12.833328148854456 - type: nauc_mrr_at_1000_diff1 value: 37.79278360150881 - type: nauc_mrr_at_1000_max value: 15.572097376565196 - type: nauc_mrr_at_1000_std value: -10.348084506439804 - type: nauc_mrr_at_100_diff1 value: 37.786865896948825 - type: nauc_mrr_at_100_max value: 15.575893210789642 - type: nauc_mrr_at_100_std value: -10.341461472904756 - type: nauc_mrr_at_10_diff1 value: 37.70769184810465 - type: nauc_mrr_at_10_max value: 15.848696020672007 - type: nauc_mrr_at_10_std value: -10.478264744120263 - type: nauc_mrr_at_1_diff1 value: 40.680357536723974 - type: nauc_mrr_at_1_max value: 12.418751441192686 - type: nauc_mrr_at_1_std value: -9.868157052983221 - type: nauc_mrr_at_20_diff1 value: 37.77469856387031 - type: nauc_mrr_at_20_max value: 15.613476149089827 - type: nauc_mrr_at_20_std value: -10.350727767432192 - type: nauc_mrr_at_3_diff1 value: 37.67813738445281 - type: nauc_mrr_at_3_max value: 15.752643632881952 - type: nauc_mrr_at_3_std value: -10.709782839058049 - type: nauc_mrr_at_5_diff1 value: 37.41714382663168 - type: nauc_mrr_at_5_max value: 15.910297855025743 - type: nauc_mrr_at_5_std value: -10.780431722579811 - type: nauc_ndcg_at_1000_diff1 value: 36.84355056294766 - type: nauc_ndcg_at_1000_max value: 16.16466414816963 - type: nauc_ndcg_at_1000_std value: -10.754459767111936 - type: nauc_ndcg_at_100_diff1 value: 36.658503024942654 - type: nauc_ndcg_at_100_max value: 16.278958071287068 - type: nauc_ndcg_at_100_std value: -10.430455391889696 - type: nauc_ndcg_at_10_diff1 value: 36.52384188024424 - type: nauc_ndcg_at_10_max value: 17.61214827030915 - type: nauc_ndcg_at_10_std value: -11.870026397604043 - type: nauc_ndcg_at_1_diff1 value: 40.680357536723974 - type: nauc_ndcg_at_1_max value: 12.418751441192686 - type: nauc_ndcg_at_1_std value: -9.868157052983221 - type: nauc_ndcg_at_20_diff1 value: 36.57525846877024 - type: nauc_ndcg_at_20_max value: 16.570059599819693 - type: nauc_ndcg_at_20_std value: -10.740934396121888 - type: nauc_ndcg_at_3_diff1 value: 36.270540924932114 - type: nauc_ndcg_at_3_max value: 16.702218429064946 - type: nauc_ndcg_at_3_std value: -12.749903314668357 - type: nauc_ndcg_at_5_diff1 value: 35.41435543582453 - type: nauc_ndcg_at_5_max value: 17.330255799030986 - type: nauc_ndcg_at_5_std value: -12.761837100653455 - type: nauc_precision_at_1000_diff1 value: -9.434969469177616 - type: nauc_precision_at_1000_max value: 0.8279994242627504 - type: nauc_precision_at_1000_std value: 16.293409606648844 - type: nauc_precision_at_100_diff1 value: -7.884252124549447 - type: nauc_precision_at_100_max value: 2.645686939190529 - type: nauc_precision_at_100_std value: 17.366169984302203 - type: nauc_precision_at_10_diff1 value: 7.290688773146785 - type: nauc_precision_at_10_max value: 13.268047958073753 - type: nauc_precision_at_10_std value: 4.450363229854657 - type: nauc_precision_at_1_diff1 value: 40.680357536723974 - type: nauc_precision_at_1_max value: 12.418751441192686 - type: nauc_precision_at_1_std value: -9.868157052983221 - type: nauc_precision_at_20_diff1 value: 0.23713971775484072 - type: nauc_precision_at_20_max value: 7.536520203232905 - type: nauc_precision_at_20_std value: 12.182206974457724 - type: nauc_precision_at_3_diff1 value: 21.039494672443197 - type: nauc_precision_at_3_max value: 17.553973358369372 - type: nauc_precision_at_3_std value: -5.012633324245794 - type: nauc_precision_at_5_diff1 value: 12.102914882846234 - type: nauc_precision_at_5_max value: 15.856278497759096 - type: nauc_precision_at_5_std value: -1.5482564850321778 - type: nauc_recall_at_1000_diff1 value: 20.012897379043007 - type: nauc_recall_at_1000_max value: 56.73005884879772 - type: nauc_recall_at_1000_std value: 59.152380526066885 - type: nauc_recall_at_100_diff1 value: 18.446836069732065 - type: nauc_recall_at_100_max value: 29.993570692813982 - type: nauc_recall_at_100_std value: 28.41587000672941 - type: nauc_recall_at_10_diff1 value: 30.605039392458387 - type: nauc_recall_at_10_max value: 27.738218801407278 - type: nauc_recall_at_10_std value: -12.508590794323988 - type: nauc_recall_at_1_diff1 value: 40.34581435107217 - type: nauc_recall_at_1_max value: 11.236223915633456 - type: nauc_recall_at_1_std value: -11.972657299364364 - type: nauc_recall_at_20_diff1 value: 29.244236606603284 - type: nauc_recall_at_20_max value: 23.284403303254493 - type: nauc_recall_at_20_std value: -2.8338618437665617 - type: nauc_recall_at_3_diff1 value: 32.129243232419086 - type: nauc_recall_at_3_max value: 20.119033875551796 - type: nauc_recall_at_3_std value: -14.240312656172122 - type: nauc_recall_at_5_diff1 value: 27.763518445624864 - type: nauc_recall_at_5_max value: 23.172013642098666 - type: nauc_recall_at_5_std value: -15.04492351689741 - type: ndcg_at_1 value: 45.742 - type: ndcg_at_10 value: 65.44 - type: ndcg_at_100 value: 68.377 - type: ndcg_at_1000 value: 68.619 - type: ndcg_at_20 value: 67.093 - type: ndcg_at_3 value: 57.98799999999999 - type: ndcg_at_5 value: 62.49400000000001 - type: precision_at_1 value: 45.742 - type: precision_at_10 value: 10.278 - type: precision_at_100 value: 1.191 - type: precision_at_1000 value: 0.121 - type: precision_at_20 value: 5.542 - type: precision_at_3 value: 26.149 - type: precision_at_5 value: 18.308 - type: recall_at_1 value: 40.571 - type: recall_at_10 value: 85.6 - type: recall_at_100 value: 97.955 - type: recall_at_1000 value: 99.754 - type: recall_at_20 value: 91.594 - type: recall_at_3 value: 66.862 - type: recall_at_5 value: 77.16300000000001 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval (default) type: mteb/quora config: default split: test revision: e4e08e0b7dbe3c8700f0daef558ff32256715259 metrics: - type: main_score value: 89.983 - type: map_at_1 value: 72.333 - type: map_at_10 value: 86.64500000000001 - type: map_at_100 value: 87.27000000000001 - type: map_at_1000 value: 87.28 - type: map_at_20 value: 87.07000000000001 - type: map_at_3 value: 83.69 - type: map_at_5 value: 85.616 - type: mrr_at_1 value: 83.17999999999999 - type: mrr_at_10 value: 88.92811111111087 - type: mrr_at_100 value: 89.01533250070398 - type: mrr_at_1000 value: 89.01591712027928 - type: mrr_at_20 value: 88.99762238367897 - type: mrr_at_3 value: 88.05499999999975 - type: mrr_at_5 value: 88.68599999999968 - type: nauc_map_at_1000_diff1 value: 77.41719971001054 - type: nauc_map_at_1000_max value: 50.82056919544904 - type: nauc_map_at_1000_std value: -66.72165494301285 - type: nauc_map_at_100_diff1 value: 77.42352497743428 - type: nauc_map_at_100_max value: 50.82093289011799 - type: nauc_map_at_100_std value: -66.78049718823219 - type: nauc_map_at_10_diff1 value: 77.74624557071816 - type: nauc_map_at_10_max value: 50.79536454457503 - type: nauc_map_at_10_std value: -69.71456042954112 - type: nauc_map_at_1_diff1 value: 82.68164323521295 - type: nauc_map_at_1_max value: 38.954818287207324 - type: nauc_map_at_1_std value: -56.68166218708859 - type: nauc_map_at_20_diff1 value: 77.55325844438951 - type: nauc_map_at_20_max value: 50.84356384001606 - type: nauc_map_at_20_std value: -67.98703548320766 - type: nauc_map_at_3_diff1 value: 78.85574278943115 - type: nauc_map_at_3_max value: 48.52471401555999 - type: nauc_map_at_3_std value: -71.30299825100148 - type: nauc_map_at_5_diff1 value: 78.16471107015477 - type: nauc_map_at_5_max value: 50.30985590553375 - type: nauc_map_at_5_std value: -71.33684372158167 - type: nauc_mrr_at_1000_diff1 value: 77.62295824280032 - type: nauc_mrr_at_1000_max value: 51.80058309016847 - type: nauc_mrr_at_1000_std value: -62.37197972047579 - type: nauc_mrr_at_100_diff1 value: 77.62234522673566 - type: nauc_mrr_at_100_max value: 51.801934646829274 - type: nauc_mrr_at_100_std value: -62.37322038621588 - type: nauc_mrr_at_10_diff1 value: 77.59747253748361 - type: nauc_mrr_at_10_max value: 51.87334226198358 - type: nauc_mrr_at_10_std value: -62.64200232227941 - type: nauc_mrr_at_1_diff1 value: 78.5768801049508 - type: nauc_mrr_at_1_max value: 50.903137386851384 - type: nauc_mrr_at_1_std value: -57.80272717360846 - type: nauc_mrr_at_20_diff1 value: 77.62208305068552 - type: nauc_mrr_at_20_max value: 51.828904905587606 - type: nauc_mrr_at_20_std value: -62.433948957948374 - type: nauc_mrr_at_3_diff1 value: 77.38155121033225 - type: nauc_mrr_at_3_max value: 51.68106631416263 - type: nauc_mrr_at_3_std value: -63.45491126025229 - type: nauc_mrr_at_5_diff1 value: 77.5183226181727 - type: nauc_mrr_at_5_max value: 51.97540888258937 - type: nauc_mrr_at_5_std value: -63.01365659630005 - type: nauc_ndcg_at_1000_diff1 value: 77.0968099860845 - type: nauc_ndcg_at_1000_max value: 51.35967273719281 - type: nauc_ndcg_at_1000_std value: -64.58895061628968 - type: nauc_ndcg_at_100_diff1 value: 77.1099003765838 - type: nauc_ndcg_at_100_max value: 51.418685751945894 - type: nauc_ndcg_at_100_std value: -64.82586438143728 - type: nauc_ndcg_at_10_diff1 value: 77.29986674759448 - type: nauc_ndcg_at_10_max value: 51.8605146908119 - type: nauc_ndcg_at_10_std value: -69.97528521090494 - type: nauc_ndcg_at_1_diff1 value: 78.49711376568688 - type: nauc_ndcg_at_1_max value: 51.0794152100377 - type: nauc_ndcg_at_1_std value: -57.536843107052185 - type: nauc_ndcg_at_20_diff1 value: 77.30637560599035 - type: nauc_ndcg_at_20_max value: 51.68220059769314 - type: nauc_ndcg_at_20_std value: -67.76406452812292 - type: nauc_ndcg_at_3_diff1 value: 76.86431160054454 - type: nauc_ndcg_at_3_max value: 50.749560195519614 - type: nauc_ndcg_at_3_std value: -69.45815827753151 - type: nauc_ndcg_at_5_diff1 value: 77.14024715468048 - type: nauc_ndcg_at_5_max value: 51.930214613990465 - type: nauc_ndcg_at_5_std value: -70.77018788982289 - type: nauc_precision_at_1000_diff1 value: -47.44532117687583 - type: nauc_precision_at_1000_max value: -17.299891917223214 - type: nauc_precision_at_1000_std value: 52.51591233187087 - type: nauc_precision_at_100_diff1 value: -47.265828632688155 - type: nauc_precision_at_100_max value: -16.71812733047803 - type: nauc_precision_at_100_std value: 51.51757632883793 - type: nauc_precision_at_10_diff1 value: -43.237865438735575 - type: nauc_precision_at_10_max value: -9.883793141400469 - type: nauc_precision_at_10_std value: 33.86730977694455 - type: nauc_precision_at_1_diff1 value: 78.49711376568688 - type: nauc_precision_at_1_max value: 51.0794152100377 - type: nauc_precision_at_1_std value: -57.536843107052185 - type: nauc_precision_at_20_diff1 value: -45.7649305042739 - type: nauc_precision_at_20_max value: -13.839318564567368 - type: nauc_precision_at_20_std value: 42.705695137308446 - type: nauc_precision_at_3_diff1 value: -25.060485489006908 - type: nauc_precision_at_3_max value: 4.882317523957804 - type: nauc_precision_at_3_std value: 5.301548406664098 - type: nauc_precision_at_5_diff1 value: -36.934659107383084 - type: nauc_precision_at_5_max value: -3.599741390594996 - type: nauc_precision_at_5_std value: 21.13020599488238 - type: nauc_recall_at_1000_diff1 value: .nan - type: nauc_recall_at_1000_max value: .nan - type: nauc_recall_at_1000_std value: .nan - type: nauc_recall_at_100_diff1 value: 74.89226971528723 - type: nauc_recall_at_100_max value: 68.38872094070474 - type: nauc_recall_at_100_std value: -88.3256053707361 - type: nauc_recall_at_10_diff1 value: 75.57948535605118 - type: nauc_recall_at_10_max value: 54.842401327152324 - type: nauc_recall_at_10_std value: -108.96477628690668 - type: nauc_recall_at_1_diff1 value: 82.68164323521295 - type: nauc_recall_at_1_max value: 38.954818287207324 - type: nauc_recall_at_1_std value: -56.68166218708859 - type: nauc_recall_at_20_diff1 value: 77.01723207559192 - type: nauc_recall_at_20_max value: 56.542295047119765 - type: nauc_recall_at_20_std value: -115.99929151139956 - type: nauc_recall_at_3_diff1 value: 75.75126841997931 - type: nauc_recall_at_3_max value: 46.87962275985744 - type: nauc_recall_at_3_std value: -84.35001944931008 - type: nauc_recall_at_5_diff1 value: 74.32806613815409 - type: nauc_recall_at_5_max value: 51.53090250807227 - type: nauc_recall_at_5_std value: -95.23567626837752 - type: ndcg_at_1 value: 83.22 - type: ndcg_at_10 value: 89.983 - type: ndcg_at_100 value: 91.01700000000001 - type: ndcg_at_1000 value: 91.065 - type: ndcg_at_20 value: 90.584 - type: ndcg_at_3 value: 87.351 - type: ndcg_at_5 value: 88.92 - type: precision_at_1 value: 83.22 - type: precision_at_10 value: 13.688 - type: precision_at_100 value: 1.548 - type: precision_at_1000 value: 0.157 - type: precision_at_20 value: 7.254 - type: precision_at_3 value: 38.327 - type: precision_at_5 value: 25.266 - type: recall_at_1 value: 72.333 - type: recall_at_10 value: 96.422 - type: recall_at_100 value: 99.801 - type: recall_at_1000 value: 100.0 - type: recall_at_20 value: 98.31400000000001 - type: recall_at_3 value: 88.897 - type: recall_at_5 value: 93.253 - task: type: Clustering dataset: name: MTEB RedditClustering (default) type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: main_score value: 64.30844996368535 - type: v_measure value: 64.30844996368535 - type: v_measure_std value: 4.848791865396891 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P (default) type: mteb/reddit-clustering-p2p config: default split: test revision: 385e3cb46b4cfa89021f56c4380204149d0efe33 metrics: - type: main_score value: 61.463095258322184 - type: v_measure value: 61.463095258322184 - type: v_measure_std value: 13.825524480187179 - task: type: Retrieval dataset: name: MTEB SCIDOCS (default) type: mteb/scidocs config: default split: test revision: f8c2fcf00f625baaa80f62ec5bd9e1fff3b8ae88 metrics: - type: main_score value: 22.506 - type: map_at_1 value: 5.122999999999999 - type: map_at_10 value: 13.652000000000001 - type: map_at_100 value: 16.012 - type: map_at_1000 value: 16.357 - type: map_at_20 value: 14.832 - type: map_at_3 value: 9.458 - type: map_at_5 value: 11.526 - type: mrr_at_1 value: 25.3 - type: mrr_at_10 value: 37.14178571428571 - type: mrr_at_100 value: 38.2928507804565 - type: mrr_at_1000 value: 38.33371299085187 - type: mrr_at_20 value: 37.87531826532599 - type: mrr_at_3 value: 34.016666666666644 - type: mrr_at_5 value: 35.72666666666662 - type: nauc_map_at_1000_diff1 value: 19.181908239372856 - type: nauc_map_at_1000_max value: 34.45175558983018 - type: nauc_map_at_1000_std value: 18.189319929391996 - type: nauc_map_at_100_diff1 value: 19.19692982484398 - type: nauc_map_at_100_max value: 34.44154432269017 - type: nauc_map_at_100_std value: 17.94122358029364 - type: nauc_map_at_10_diff1 value: 19.291919391054222 - type: nauc_map_at_10_max value: 34.36232526749502 - type: nauc_map_at_10_std value: 14.816004018633237 - type: nauc_map_at_1_diff1 value: 24.558617169487032 - type: nauc_map_at_1_max value: 26.511296106317868 - type: nauc_map_at_1_std value: 7.82772589755214 - type: nauc_map_at_20_diff1 value: 19.348081062276208 - type: nauc_map_at_20_max value: 34.58455151721281 - type: nauc_map_at_20_std value: 16.151710432004197 - type: nauc_map_at_3_diff1 value: 21.5514840413981 - type: nauc_map_at_3_max value: 33.92222109817227 - type: nauc_map_at_3_std value: 10.502576651566999 - type: nauc_map_at_5_diff1 value: 19.967350394527863 - type: nauc_map_at_5_max value: 32.95165629885447 - type: nauc_map_at_5_std value: 11.122330247985717 - type: nauc_mrr_at_1000_diff1 value: 21.829308820999017 - type: nauc_mrr_at_1000_max value: 29.463800052705437 - type: nauc_mrr_at_1000_std value: 13.12696263120055 - type: nauc_mrr_at_100_diff1 value: 21.814298979701803 - type: nauc_mrr_at_100_max value: 29.46773136681717 - type: nauc_mrr_at_100_std value: 13.165157602965758 - type: nauc_mrr_at_10_diff1 value: 21.653878744765784 - type: nauc_mrr_at_10_max value: 29.58483868113361 - type: nauc_mrr_at_10_std value: 12.985515874358859 - type: nauc_mrr_at_1_diff1 value: 24.6715181065629 - type: nauc_mrr_at_1_max value: 26.81252204146797 - type: nauc_mrr_at_1_std value: 8.014680166410344 - type: nauc_mrr_at_20_diff1 value: 21.843640497550087 - type: nauc_mrr_at_20_max value: 29.593176268547204 - type: nauc_mrr_at_20_std value: 13.177395949373 - type: nauc_mrr_at_3_diff1 value: 22.29904422694039 - type: nauc_mrr_at_3_max value: 29.803363880621564 - type: nauc_mrr_at_3_std value: 12.08505345679438 - type: nauc_mrr_at_5_diff1 value: 21.71612843315846 - type: nauc_mrr_at_5_max value: 29.72242599153432 - type: nauc_mrr_at_5_std value: 12.514520503901528 - type: nauc_ndcg_at_1000_diff1 value: 17.812454067736155 - type: nauc_ndcg_at_1000_max value: 32.536786674780096 - type: nauc_ndcg_at_1000_std value: 25.854527875299326 - type: nauc_ndcg_at_100_diff1 value: 17.890769383828005 - type: nauc_ndcg_at_100_max value: 32.83258185170293 - type: nauc_ndcg_at_100_std value: 24.659819111590707 - type: nauc_ndcg_at_10_diff1 value: 18.26938087394905 - type: nauc_ndcg_at_10_max value: 33.78188684219704 - type: nauc_ndcg_at_10_std value: 17.21331892353095 - type: nauc_ndcg_at_1_diff1 value: 24.6715181065629 - type: nauc_ndcg_at_1_max value: 26.81252204146797 - type: nauc_ndcg_at_1_std value: 8.014680166410344 - type: nauc_ndcg_at_20_diff1 value: 18.9423678041254 - type: nauc_ndcg_at_20_max value: 34.22627638045056 - type: nauc_ndcg_at_20_std value: 19.457855372495466 - type: nauc_ndcg_at_3_diff1 value: 21.027029543031645 - type: nauc_ndcg_at_3_max value: 33.7897019283245 - type: nauc_ndcg_at_3_std value: 12.483167034609401 - type: nauc_ndcg_at_5_diff1 value: 18.892039685363475 - type: nauc_ndcg_at_5_max value: 32.45122363892051 - type: nauc_ndcg_at_5_std value: 13.41357877918938 - type: nauc_precision_at_1000_diff1 value: 5.157601529395117 - type: nauc_precision_at_1000_max value: 17.143073555203763 - type: nauc_precision_at_1000_std value: 42.98106178602979 - type: nauc_precision_at_100_diff1 value: 9.794992975326593 - type: nauc_precision_at_100_max value: 23.833302546894338 - type: nauc_precision_at_100_std value: 34.912511583237965 - type: nauc_precision_at_10_diff1 value: 13.798035613654628 - type: nauc_precision_at_10_max value: 33.262730178644546 - type: nauc_precision_at_10_std value: 21.062476706391156 - type: nauc_precision_at_1_diff1 value: 24.6715181065629 - type: nauc_precision_at_1_max value: 26.81252204146797 - type: nauc_precision_at_1_std value: 8.014680166410344 - type: nauc_precision_at_20_diff1 value: 14.663054363753272 - type: nauc_precision_at_20_max value: 32.12409736351085 - type: nauc_precision_at_20_std value: 24.341721044475626 - type: nauc_precision_at_3_diff1 value: 19.061499046645455 - type: nauc_precision_at_3_max value: 36.69387198980568 - type: nauc_precision_at_3_std value: 14.527611053399674 - type: nauc_precision_at_5_diff1 value: 15.122401114437281 - type: nauc_precision_at_5_max value: 32.37345191163627 - type: nauc_precision_at_5_std value: 15.159496915998183 - type: nauc_recall_at_1000_diff1 value: 4.90333908560398 - type: nauc_recall_at_1000_max value: 16.80152272446881 - type: nauc_recall_at_1000_std value: 44.34743343052233 - type: nauc_recall_at_100_diff1 value: 9.162169138584812 - type: nauc_recall_at_100_max value: 23.373784049981932 - type: nauc_recall_at_100_std value: 35.29658840515363 - type: nauc_recall_at_10_diff1 value: 13.255119541826458 - type: nauc_recall_at_10_max value: 32.79473451751747 - type: nauc_recall_at_10_std value: 21.126286808362067 - type: nauc_recall_at_1_diff1 value: 24.558617169487032 - type: nauc_recall_at_1_max value: 26.511296106317868 - type: nauc_recall_at_1_std value: 7.82772589755214 - type: nauc_recall_at_20_diff1 value: 14.336119216887313 - type: nauc_recall_at_20_max value: 31.763323157259993 - type: nauc_recall_at_20_std value: 24.41182092352001 - type: nauc_recall_at_3_diff1 value: 18.633092732140433 - type: nauc_recall_at_3_max value: 36.18581689585463 - type: nauc_recall_at_3_std value: 14.33948600334347 - type: nauc_recall_at_5_diff1 value: 14.679028389399267 - type: nauc_recall_at_5_max value: 31.897317036615412 - type: nauc_recall_at_5_std value: 15.037553354455593 - type: ndcg_at_1 value: 25.3 - type: ndcg_at_10 value: 22.506 - type: ndcg_at_100 value: 31.361 - type: ndcg_at_1000 value: 36.862 - type: ndcg_at_20 value: 25.717000000000002 - type: ndcg_at_3 value: 21.199 - type: ndcg_at_5 value: 18.562 - type: precision_at_1 value: 25.3 - type: precision_at_10 value: 11.81 - type: precision_at_100 value: 2.455 - type: precision_at_1000 value: 0.376 - type: precision_at_20 value: 7.775 - type: precision_at_3 value: 20.033 - type: precision_at_5 value: 16.46 - type: recall_at_1 value: 5.122999999999999 - type: recall_at_10 value: 23.953 - type: recall_at_100 value: 49.805 - type: recall_at_1000 value: 76.423 - type: recall_at_20 value: 31.493 - type: recall_at_3 value: 12.178 - type: recall_at_5 value: 16.682 - task: type: STS dataset: name: MTEB SICK-R (default) type: mteb/sickr-sts config: default split: test revision: 20a6d6f312dd54037fe07a32d58e5e168867909d metrics: - type: cosine_pearson value: 86.24041625333884 - type: cosine_spearman value: 83.72893029404888 - type: euclidean_pearson value: 83.33176396347261 - type: euclidean_spearman value: 83.72893162160777 - type: main_score value: 83.72893029404888 - type: manhattan_pearson value: 83.2951639248276 - type: manhattan_spearman value: 83.70786795927772 - type: pearson value: 86.24041617887043 - type: spearman value: 83.72891772746166 - task: type: STS dataset: name: MTEB STS12 (default) type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cosine_pearson value: 87.17185113605036 - type: cosine_spearman value: 81.39213356797624 - type: euclidean_pearson value: 83.70587654694744 - type: euclidean_spearman value: 81.39213356797624 - type: main_score value: 81.39213356797624 - type: manhattan_pearson value: 83.63386349627461 - type: manhattan_spearman value: 81.35222067791558 - type: pearson value: 87.17185260321112 - type: spearman value: 81.38945351411505 - task: type: STS dataset: name: MTEB STS13 (default) type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cosine_pearson value: 87.34454475793186 - type: cosine_spearman value: 87.96270687409215 - type: euclidean_pearson value: 87.90388791262633 - type: euclidean_spearman value: 87.96270687409215 - type: main_score value: 87.96270687409215 - type: manhattan_pearson value: 87.83677697801643 - type: manhattan_spearman value: 87.86991808368111 - type: pearson value: 87.34454465314778 - type: spearman value: 87.96270679590305 - task: type: STS dataset: name: MTEB STS14 (default) type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cosine_pearson value: 86.3049595329839 - type: cosine_spearman value: 85.57773596099139 - type: euclidean_pearson value: 85.61735029771381 - type: euclidean_spearman value: 85.57774644488029 - type: main_score value: 85.57773596099139 - type: manhattan_pearson value: 85.58315505256886 - type: manhattan_spearman value: 85.55100867169023 - type: pearson value: 86.30495993546997 - type: spearman value: 85.57781195336361 - task: type: STS dataset: name: MTEB STS15 (default) type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cosine_pearson value: 89.43687886823619 - type: cosine_spearman value: 89.77514474209266 - type: euclidean_pearson value: 89.16048792386724 - type: euclidean_spearman value: 89.77514474209266 - type: main_score value: 89.77514474209266 - type: manhattan_pearson value: 89.13664728081469 - type: manhattan_spearman value: 89.75080436431723 - type: pearson value: 89.43687930700762 - type: spearman value: 89.7750799990083 - task: type: STS dataset: name: MTEB STS16 (default) type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cosine_pearson value: 86.6450386341272 - type: cosine_spearman value: 87.89409021728935 - type: euclidean_pearson value: 87.49933167247268 - type: euclidean_spearman value: 87.89409021728935 - type: main_score value: 87.89409021728935 - type: manhattan_pearson value: 87.50687956428204 - type: manhattan_spearman value: 87.9178498829234 - type: pearson value: 86.64503867578216 - type: spearman value: 87.8940895850418 - task: type: STS dataset: name: MTEB STS17 (nl-en) type: mteb/sts17-crosslingual-sts config: nl-en split: test revision: faeb762787bd10488a50c8b5be4a3b82e411949c metrics: - type: cosine_pearson value: 89.6338656277505 - type: cosine_spearman value: 89.75701751196556 - type: euclidean_pearson value: 90.00670260496013 - type: euclidean_spearman value: 89.75701751196556 - type: main_score value: 89.75701751196556 - type: manhattan_pearson value: 90.02629735900686 - type: manhattan_spearman value: 89.7213070723708 - type: pearson value: 89.63386499936783 - type: spearman value: 89.75701751196556 - task: type: STS dataset: name: MTEB STS17 (en-tr) type: mteb/sts17-crosslingual-sts config: en-tr split: test revision: faeb762787bd10488a50c8b5be4a3b82e411949c metrics: - type: cosine_pearson value: 83.93143564932134 - type: cosine_spearman value: 82.93577775011765 - type: euclidean_pearson value: 84.34621382409651 - type: euclidean_spearman value: 82.93577775011765 - type: main_score value: 82.93577775011765 - type: manhattan_pearson value: 84.31655977000447 - type: manhattan_spearman value: 82.76584164596149 - type: pearson value: 83.9314336093673 - type: spearman value: 82.93577775011765 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: faeb762787bd10488a50c8b5be4a3b82e411949c metrics: - type: cosine_pearson value: 91.88424206333126 - type: cosine_spearman value: 92.02107475813581 - type: euclidean_pearson value: 91.9171107672214 - type: euclidean_spearman value: 92.02107475813581 - type: main_score value: 92.02107475813581 - type: manhattan_pearson value: 91.98552208613084 - type: manhattan_spearman value: 92.04844379257229 - type: pearson value: 91.88424111217977 - type: spearman value: 92.02107475813581 - task: type: STS dataset: name: MTEB STS17 (en-de) type: mteb/sts17-crosslingual-sts config: en-de split: test revision: faeb762787bd10488a50c8b5be4a3b82e411949c metrics: - type: cosine_pearson value: 90.04918981538256 - type: cosine_spearman value: 90.03247608046513 - type: euclidean_pearson value: 90.28411220954267 - type: euclidean_spearman value: 90.03247608046513 - type: main_score value: 90.03247608046513 - type: manhattan_pearson value: 90.3114074165839 - type: manhattan_spearman value: 90.09098123553856 - type: pearson value: 90.04919169584696 - type: spearman value: 90.03247608046513 - task: type: STS dataset: name: MTEB STS17 (fr-en) type: mteb/sts17-crosslingual-sts config: fr-en split: test revision: faeb762787bd10488a50c8b5be4a3b82e411949c metrics: - type: cosine_pearson value: 88.9930259888809 - type: cosine_spearman value: 88.83327416007984 - type: euclidean_pearson value: 89.39082582616547 - type: euclidean_spearman value: 88.83327416007984 - type: main_score value: 88.83327416007984 - type: manhattan_pearson value: 89.38701261060531 - type: manhattan_spearman value: 88.92998833233004 - type: pearson value: 88.99302543858553 - type: spearman value: 88.83327416007984 - task: type: STS dataset: name: MTEB STS17 (es-en) type: mteb/sts17-crosslingual-sts config: es-en split: test revision: faeb762787bd10488a50c8b5be4a3b82e411949c metrics: - type: cosine_pearson value: 89.95636875543408 - type: cosine_spearman value: 90.39244260353328 - type: euclidean_pearson value: 90.29925474076606 - type: euclidean_spearman value: 90.39244260353328 - type: main_score value: 90.39244260353328 - type: manhattan_pearson value: 90.37981122989076 - type: manhattan_spearman value: 90.41247149045391 - type: pearson value: 89.95636893306808 - type: spearman value: 90.39244260353328 - task: type: STS dataset: name: MTEB STS17 (it-en) type: mteb/sts17-crosslingual-sts config: it-en split: test revision: faeb762787bd10488a50c8b5be4a3b82e411949c metrics: - type: cosine_pearson value: 89.53145408873213 - type: cosine_spearman value: 89.64992463283636 - type: euclidean_pearson value: 89.92739726473282 - type: euclidean_spearman value: 89.64992463283636 - type: main_score value: 89.64992463283636 - type: manhattan_pearson value: 89.88973812881389 - type: manhattan_spearman value: 89.66533893453442 - type: pearson value: 89.53145070613068 - type: spearman value: 89.64992463283636 - task: type: STS dataset: name: MTEB STS17 (en-ar) type: mteb/sts17-crosslingual-sts config: en-ar split: test revision: faeb762787bd10488a50c8b5be4a3b82e411949c metrics: - type: cosine_pearson value: 83.73753567473895 - type: cosine_spearman value: 83.56786342290584 - type: euclidean_pearson value: 84.2185817227647 - type: euclidean_spearman value: 83.56786342290584 - type: main_score value: 83.56786342290584 - type: manhattan_pearson value: 84.138637673995 - type: manhattan_spearman value: 83.5447994878456 - type: pearson value: 83.73753620404003 - type: spearman value: 83.56786342290584 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 metrics: - type: cosine_pearson value: 70.0174654163193 - type: cosine_spearman value: 69.00945960078879 - type: euclidean_pearson value: 70.00006875963157 - type: euclidean_spearman value: 69.00945960078879 - type: main_score value: 69.00945960078879 - type: manhattan_pearson value: 70.003828333656 - type: manhattan_spearman value: 69.18289416785358 - type: pearson value: 70.01746869245112 - type: spearman value: 69.00945960078879 - task: type: STS dataset: name: MTEB STS22 (pl-en) type: mteb/sts22-crosslingual-sts config: pl-en split: test revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 metrics: - type: cosine_pearson value: 74.4773241351154 - type: cosine_spearman value: 72.23771584903263 - type: euclidean_pearson value: 74.91922500307354 - type: euclidean_spearman value: 72.23771584903263 - type: main_score value: 72.23771584903263 - type: manhattan_pearson value: 75.40992669459347 - type: manhattan_spearman value: 72.89930017966125 - type: pearson value: 74.47733091997848 - type: spearman value: 72.23771584903263 - task: type: STS dataset: name: MTEB STS22 (zh-en) type: mteb/sts22-crosslingual-sts config: zh-en split: test revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 metrics: - type: cosine_pearson value: 74.66163345044995 - type: cosine_spearman value: 73.412615202234 - type: euclidean_pearson value: 76.51572664173365 - type: euclidean_spearman value: 73.412615202234 - type: main_score value: 73.412615202234 - type: manhattan_pearson value: 76.44349976731687 - type: manhattan_spearman value: 73.40243152214946 - type: pearson value: 74.66163870997642 - type: spearman value: 73.412615202234 - task: type: STS dataset: name: MTEB STS22 (de-en) type: mteb/sts22-crosslingual-sts config: de-en split: test revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 metrics: - type: cosine_pearson value: 65.14758907269797 - type: cosine_spearman value: 58.30708936630836 - type: euclidean_pearson value: 67.98705996212436 - type: euclidean_spearman value: 58.30708936630836 - type: main_score value: 58.30708936630836 - type: manhattan_pearson value: 68.41525035556984 - type: manhattan_spearman value: 58.879912875433405 - type: pearson value: 65.1475973244717 - type: spearman value: 58.30708936630836 - task: type: STS dataset: name: MTEB STS22 (es-en) type: mteb/sts22-crosslingual-sts config: es-en split: test revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 metrics: - type: cosine_pearson value: 74.73278263018503 - type: cosine_spearman value: 77.18831783868316 - type: euclidean_pearson value: 76.28171718825621 - type: euclidean_spearman value: 77.18831783868316 - type: main_score value: 77.18831783868316 - type: manhattan_pearson value: 76.73656610143712 - type: manhattan_spearman value: 77.45086643213952 - type: pearson value: 74.73278783040479 - type: spearman value: 77.18831783868316 - task: type: STS dataset: name: MTEB STSBenchmark (default) type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cosine_pearson value: 88.25299312156041 - type: cosine_spearman value: 88.82703987306 - type: euclidean_pearson value: 88.42751133294018 - type: euclidean_spearman value: 88.82706405302517 - type: main_score value: 88.82703987306 - type: manhattan_pearson value: 88.41336953833218 - type: manhattan_spearman value: 88.81246784315815 - type: pearson value: 88.25299276543255 - type: spearman value: 88.82706405302517 - task: type: Reranking dataset: name: MTEB SciDocsRR (default) type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: main_score value: 86.15363437294592 - type: map value: 86.15363437294592 - type: mrr value: 96.44521056285762 - type: nAUC_map_diff1 value: -0.7840810005333841 - type: nAUC_map_max value: 54.6142751932947 - type: nAUC_map_std value: 70.13929703223857 - type: nAUC_mrr_diff1 value: 44.20312446414372 - type: nAUC_mrr_max value: 89.20896097421188 - type: nAUC_mrr_std value: 85.57161978228 - task: type: Retrieval dataset: name: MTEB SciFact (default) type: mteb/scifact config: default split: test revision: 0228b52cf27578f30900b9e5271d331663a030d7 metrics: - type: main_score value: 77.962 - type: map_at_1 value: 60.260999999999996 - type: map_at_10 value: 72.719 - type: map_at_100 value: 73.071 - type: map_at_1000 value: 73.08 - type: map_at_20 value: 72.99 - type: map_at_3 value: 69.848 - type: map_at_5 value: 71.61999999999999 - type: mrr_at_1 value: 63.33333333333333 - type: mrr_at_10 value: 73.84933862433863 - type: mrr_at_100 value: 74.10371086565036 - type: mrr_at_1000 value: 74.1131185688027 - type: mrr_at_20 value: 74.05291111394054 - type: mrr_at_3 value: 71.88888888888891 - type: mrr_at_5 value: 73.0388888888889 - type: nauc_map_at_1000_diff1 value: 73.28126828990776 - type: nauc_map_at_1000_max value: 56.022306476443916 - type: nauc_map_at_1000_std value: -13.480460345909812 - type: nauc_map_at_100_diff1 value: 73.27839985062069 - type: nauc_map_at_100_max value: 56.03493565179317 - type: nauc_map_at_100_std value: -13.454356894686633 - type: nauc_map_at_10_diff1 value: 73.12140372590137 - type: nauc_map_at_10_max value: 56.152651928007735 - type: nauc_map_at_10_std value: -13.995855741014923 - type: nauc_map_at_1_diff1 value: 77.99227635266456 - type: nauc_map_at_1_max value: 52.094851365294005 - type: nauc_map_at_1_std value: -19.182145033673432 - type: nauc_map_at_20_diff1 value: 73.20306405935685 - type: nauc_map_at_20_max value: 56.081550664849445 - type: nauc_map_at_20_std value: -13.584356458468374 - type: nauc_map_at_3_diff1 value: 73.19141715738098 - type: nauc_map_at_3_max value: 53.23078378298865 - type: nauc_map_at_3_std value: -16.528167718115526 - type: nauc_map_at_5_diff1 value: 73.24024366833353 - type: nauc_map_at_5_max value: 55.180209378383424 - type: nauc_map_at_5_std value: -14.183250373332857 - type: nauc_mrr_at_1000_diff1 value: 73.71148233352608 - type: nauc_mrr_at_1000_max value: 56.5362956066201 - type: nauc_mrr_at_1000_std value: -11.469817601047366 - type: nauc_mrr_at_100_diff1 value: 73.70835498956635 - type: nauc_mrr_at_100_max value: 56.54902766570207 - type: nauc_mrr_at_100_std value: -11.443909853912619 - type: nauc_mrr_at_10_diff1 value: 73.57665142725747 - type: nauc_mrr_at_10_max value: 56.74885354320269 - type: nauc_mrr_at_10_std value: -11.545364799905967 - type: nauc_mrr_at_1_diff1 value: 77.9008777001104 - type: nauc_mrr_at_1_max value: 56.02354548799079 - type: nauc_mrr_at_1_std value: -12.985757082041294 - type: nauc_mrr_at_20_diff1 value: 73.63832217061234 - type: nauc_mrr_at_20_max value: 56.548555401068235 - type: nauc_mrr_at_20_std value: -11.560701355628849 - type: nauc_mrr_at_3_diff1 value: 72.90361783677665 - type: nauc_mrr_at_3_max value: 54.67757749945108 - type: nauc_mrr_at_3_std value: -11.094286378923906 - type: nauc_mrr_at_5_diff1 value: 73.54194799038713 - type: nauc_mrr_at_5_max value: 56.396527065053235 - type: nauc_mrr_at_5_std value: -10.973412029472279 - type: nauc_ndcg_at_1000_diff1 value: 72.71349972979385 - type: nauc_ndcg_at_1000_max value: 56.72661710430519 - type: nauc_ndcg_at_1000_std value: -12.077139347298028 - type: nauc_ndcg_at_100_diff1 value: 72.6223579041901 - type: nauc_ndcg_at_100_max value: 57.0943772697301 - type: nauc_ndcg_at_100_std value: -11.290467310203653 - type: nauc_ndcg_at_10_diff1 value: 71.58603155055886 - type: nauc_ndcg_at_10_max value: 57.57370636587106 - type: nauc_ndcg_at_10_std value: -13.64915129057672 - type: nauc_ndcg_at_1_diff1 value: 77.9008777001104 - type: nauc_ndcg_at_1_max value: 56.02354548799079 - type: nauc_ndcg_at_1_std value: -12.985757082041294 - type: nauc_ndcg_at_20_diff1 value: 71.862346973291 - type: nauc_ndcg_at_20_max value: 57.14084365257247 - type: nauc_ndcg_at_20_std value: -12.577106212178743 - type: nauc_ndcg_at_3_diff1 value: 70.96798592929984 - type: nauc_ndcg_at_3_max value: 53.31683131655929 - type: nauc_ndcg_at_3_std value: -13.884044044488068 - type: nauc_ndcg_at_5_diff1 value: 71.94439154102938 - type: nauc_ndcg_at_5_max value: 55.87371904515247 - type: nauc_ndcg_at_5_std value: -13.144606786194293 - type: nauc_precision_at_1000_diff1 value: -29.909943379221655 - type: nauc_precision_at_1000_max value: 3.445405323591539 - type: nauc_precision_at_1000_std value: 51.033002414810845 - type: nauc_precision_at_100_diff1 value: -23.37133080871195 - type: nauc_precision_at_100_max value: 9.078385011181364 - type: nauc_precision_at_100_std value: 50.96275222185906 - type: nauc_precision_at_10_diff1 value: -7.8465425644179465 - type: nauc_precision_at_10_max value: 23.93400510157945 - type: nauc_precision_at_10_std value: 33.12529020508839 - type: nauc_precision_at_1_diff1 value: 77.9008777001104 - type: nauc_precision_at_1_max value: 56.02354548799079 - type: nauc_precision_at_1_std value: -12.985757082041294 - type: nauc_precision_at_20_diff1 value: -17.41288362682839 - type: nauc_precision_at_20_max value: 15.35913155626735 - type: nauc_precision_at_20_std value: 41.51207496195062 - type: nauc_precision_at_3_diff1 value: 31.762128480906544 - type: nauc_precision_at_3_max value: 37.91372130404831 - type: nauc_precision_at_3_std value: 9.88524722427369 - type: nauc_precision_at_5_diff1 value: 13.038734315330009 - type: nauc_precision_at_5_max value: 30.94110952724539 - type: nauc_precision_at_5_std value: 24.082971097384295 - type: nauc_recall_at_1000_diff1 value: .nan - type: nauc_recall_at_1000_max value: .nan - type: nauc_recall_at_1000_std value: .nan - type: nauc_recall_at_100_diff1 value: 65.36881419234354 - type: nauc_recall_at_100_max value: 85.52754435107354 - type: nauc_recall_at_100_std value: 50.009337068160995 - type: nauc_recall_at_10_diff1 value: 55.74074309501903 - type: nauc_recall_at_10_max value: 66.4080962692691 - type: nauc_recall_at_10_std value: -23.91596073628651 - type: nauc_recall_at_1_diff1 value: 77.99227635266456 - type: nauc_recall_at_1_max value: 52.094851365294005 - type: nauc_recall_at_1_std value: -19.182145033673432 - type: nauc_recall_at_20_diff1 value: 50.05955137960694 - type: nauc_recall_at_20_max value: 66.30384437239482 - type: nauc_recall_at_20_std value: -17.94959453597701 - type: nauc_recall_at_3_diff1 value: 62.840959206923856 - type: nauc_recall_at_3_max value: 46.242626211644556 - type: nauc_recall_at_3_std value: -17.827394342135214 - type: nauc_recall_at_5_diff1 value: 63.99508486474444 - type: nauc_recall_at_5_max value: 56.50230441658991 - type: nauc_recall_at_5_std value: -15.143300506173162 - type: ndcg_at_1 value: 63.333 - type: ndcg_at_10 value: 77.962 - type: ndcg_at_100 value: 79.29899999999999 - type: ndcg_at_1000 value: 79.521 - type: ndcg_at_20 value: 78.81099999999999 - type: ndcg_at_3 value: 73.387 - type: ndcg_at_5 value: 75.76899999999999 - type: precision_at_1 value: 63.333 - type: precision_at_10 value: 10.433 - type: precision_at_100 value: 1.113 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_20 value: 5.417000000000001 - type: precision_at_3 value: 29.221999999999998 - type: precision_at_5 value: 19.2 - type: recall_at_1 value: 60.260999999999996 - type: recall_at_10 value: 92.656 - type: recall_at_100 value: 98.333 - type: recall_at_1000 value: 100.0 - type: recall_at_20 value: 95.767 - type: recall_at_3 value: 80.561 - type: recall_at_5 value: 86.483 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions (default) type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cosine_accuracy value: 99.83564356435643 - type: cosine_accuracy_threshold value: 82.17267990112305 - type: cosine_ap value: 96.54143761927851 - type: cosine_f1 value: 91.76011701608971 - type: cosine_f1_threshold value: 79.5323133468628 - type: cosine_precision value: 89.5337773549001 - type: cosine_recall value: 94.1 - type: dot_accuracy value: 99.83564356435643 - type: dot_accuracy_threshold value: 82.17268586158752 - type: dot_ap value: 96.54143761927853 - type: dot_f1 value: 91.76011701608971 - type: dot_f1_threshold value: 79.53230142593384 - type: dot_precision value: 89.5337773549001 - type: dot_recall value: 94.1 - type: euclidean_accuracy value: 99.83564356435643 - type: euclidean_accuracy_threshold value: 59.711503982543945 - type: euclidean_ap value: 96.54143761927851 - type: euclidean_f1 value: 91.76011701608971 - type: euclidean_f1_threshold value: 63.98076415061951 - type: euclidean_precision value: 89.5337773549001 - type: euclidean_recall value: 94.1 - type: main_score value: 96.54143761927853 - type: manhattan_accuracy value: 99.83564356435643 - type: manhattan_accuracy_threshold value: 3019.1539764404297 - type: manhattan_ap value: 96.52689339314482 - type: manhattan_f1 value: 91.88138065143411 - type: manhattan_f1_threshold value: 3262.8021240234375 - type: manhattan_precision value: 89.40397350993378 - type: manhattan_recall value: 94.5 - type: max_accuracy value: 99.83564356435643 - type: max_ap value: 96.54143761927853 - type: max_f1 value: 91.88138065143411 - type: max_precision value: 89.5337773549001 - type: max_recall value: 94.5 - type: similarity_accuracy value: 99.83564356435643 - type: similarity_accuracy_threshold value: 1865.63720703125 - type: similarity_ap value: 96.54143761927853 - type: similarity_f1 value: 91.76011701608971 - type: similarity_f1_threshold value: 1805.6900024414062 - type: similarity_precision value: 89.5337773549001 - type: similarity_recall value: 94.1 - task: type: Clustering dataset: name: MTEB StackExchangeClustering (default) type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: main_score value: 70.93157767772517 - type: v_measure value: 70.93157767772517 - type: v_measure_std value: 4.307686954444112 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P (default) type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: main_score value: 33.95011259900505 - type: v_measure value: 33.95011259900505 - type: v_measure_std value: 1.626841664415958 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions (default) type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: main_score value: 53.57599561506372 - type: map value: 53.57599561506372 - type: mrr value: 54.616410958322724 - type: nAUC_map_diff1 value: 36.11395800900299 - type: nAUC_map_max value: 13.023479833306077 - type: nAUC_map_std value: 5.631816933793585 - type: nAUC_mrr_diff1 value: 35.769021835275794 - type: nAUC_mrr_max value: 13.870729016716313 - type: nAUC_mrr_std value: 5.924960324107413 - task: type: Summarization dataset: name: MTEB SummEval (default) type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cosine_pearson value: 30.73610071443582 - type: cosine_spearman value: 31.187718139843245 - type: dot_pearson value: 30.736100295017323 - type: dot_spearman value: 31.187718139843245 - type: main_score value: 31.187718139843245 - type: pearson value: 30.736095662320906 - type: spearman value: 31.187718139843245 - task: type: Retrieval dataset: name: MTEB TRECCOVID (default) type: mteb/trec-covid config: default split: test revision: bb9466bac8153a0349341eb1b22e06409e78ef4e metrics: - type: main_score value: 82.472 - type: map_at_1 value: 0.244 - type: map_at_10 value: 2.046 - type: map_at_100 value: 13.645 - type: map_at_1000 value: 33.754 - type: map_at_20 value: 3.857 - type: map_at_3 value: 0.657 - type: map_at_5 value: 1.088 - type: mrr_at_1 value: 92.0 - type: mrr_at_10 value: 96.0 - type: mrr_at_100 value: 96.0 - type: mrr_at_1000 value: 96.0 - type: mrr_at_20 value: 96.0 - type: mrr_at_3 value: 96.0 - type: mrr_at_5 value: 96.0 - type: nauc_map_at_1000_diff1 value: 7.8754742601956265 - type: nauc_map_at_1000_max value: 34.597938302994095 - type: nauc_map_at_1000_std value: 69.85267631958843 - type: nauc_map_at_100_diff1 value: 34.92265650076894 - type: nauc_map_at_100_max value: 29.634429617158474 - type: nauc_map_at_100_std value: 51.34791604889786 - type: nauc_map_at_10_diff1 value: 28.079719799537035 - type: nauc_map_at_10_max value: 7.68543580943164 - type: nauc_map_at_10_std value: 9.800164176251364 - type: nauc_map_at_1_diff1 value: 26.03047942436184 - type: nauc_map_at_1_max value: -1.8634718917346798 - type: nauc_map_at_1_std value: -2.7881471230811767 - type: nauc_map_at_20_diff1 value: 30.323518515058662 - type: nauc_map_at_20_max value: 14.96919999636421 - type: nauc_map_at_20_std value: 19.064769986247782 - type: nauc_map_at_3_diff1 value: 26.295930355719438 - type: nauc_map_at_3_max value: 2.26895373159702 - type: nauc_map_at_3_std value: 5.6154000225639855 - type: nauc_map_at_5_diff1 value: 29.914803705791954 - type: nauc_map_at_5_max value: 2.3350193956307788 - type: nauc_map_at_5_std value: 7.787294789682958 - type: nauc_mrr_at_1000_diff1 value: 37.92016806722721 - type: nauc_mrr_at_1000_max value: 8.916900093370733 - type: nauc_mrr_at_1000_std value: 22.117180205415533 - type: nauc_mrr_at_100_diff1 value: 37.92016806722721 - type: nauc_mrr_at_100_max value: 8.916900093370733 - type: nauc_mrr_at_100_std value: 22.117180205415533 - type: nauc_mrr_at_10_diff1 value: 37.92016806722721 - type: nauc_mrr_at_10_max value: 8.916900093370733 - type: nauc_mrr_at_10_std value: 22.117180205415533 - type: nauc_mrr_at_1_diff1 value: 37.920168067226776 - type: nauc_mrr_at_1_max value: 8.916900093370584 - type: nauc_mrr_at_1_std value: 22.117180205415405 - type: nauc_mrr_at_20_diff1 value: 37.92016806722721 - type: nauc_mrr_at_20_max value: 8.916900093370733 - type: nauc_mrr_at_20_std value: 22.117180205415533 - type: nauc_mrr_at_3_diff1 value: 37.92016806722721 - type: nauc_mrr_at_3_max value: 8.916900093370733 - type: nauc_mrr_at_3_std value: 22.117180205415533 - type: nauc_mrr_at_5_diff1 value: 37.92016806722721 - type: nauc_mrr_at_5_max value: 8.916900093370733 - type: nauc_mrr_at_5_std value: 22.117180205415533 - type: nauc_ndcg_at_1000_diff1 value: 8.521491123342738 - type: nauc_ndcg_at_1000_max value: 33.00473635451372 - type: nauc_ndcg_at_1000_std value: 68.23856521354314 - type: nauc_ndcg_at_100_diff1 value: 7.579300791205225 - type: nauc_ndcg_at_100_max value: 37.710370018529176 - type: nauc_ndcg_at_100_std value: 64.88287259377455 - type: nauc_ndcg_at_10_diff1 value: 16.222263093560027 - type: nauc_ndcg_at_10_max value: 24.992121096921053 - type: nauc_ndcg_at_10_std value: 38.55100841743667 - type: nauc_ndcg_at_1_diff1 value: 34.26238738738736 - type: nauc_ndcg_at_1_max value: 2.0994208494208912 - type: nauc_ndcg_at_1_std value: 34.78523166023159 - type: nauc_ndcg_at_20_diff1 value: 14.66836044758468 - type: nauc_ndcg_at_20_max value: 37.980897102959254 - type: nauc_ndcg_at_20_std value: 55.088065073193384 - type: nauc_ndcg_at_3_diff1 value: 14.260719289394878 - type: nauc_ndcg_at_3_max value: 9.987815227921645 - type: nauc_ndcg_at_3_std value: 36.89480489474395 - type: nauc_ndcg_at_5_diff1 value: 21.44965758928399 - type: nauc_ndcg_at_5_max value: 16.800339135540813 - type: nauc_ndcg_at_5_std value: 40.970864822902655 - type: nauc_precision_at_1000_diff1 value: -24.790752711798277 - type: nauc_precision_at_1000_max value: 14.744105121143447 - type: nauc_precision_at_1000_std value: 27.330538098341282 - type: nauc_precision_at_100_diff1 value: 5.207401427044022 - type: nauc_precision_at_100_max value: 33.68994458941364 - type: nauc_precision_at_100_std value: 63.67064009046427 - type: nauc_precision_at_10_diff1 value: 9.202327262712192 - type: nauc_precision_at_10_max value: 33.09033635143045 - type: nauc_precision_at_10_std value: 30.090887182841573 - type: nauc_precision_at_1_diff1 value: 37.920168067226776 - type: nauc_precision_at_1_max value: 8.916900093370584 - type: nauc_precision_at_1_std value: 22.117180205415405 - type: nauc_precision_at_20_diff1 value: 9.241778934352723 - type: nauc_precision_at_20_max value: 46.3131480478757 - type: nauc_precision_at_20_std value: 51.48742898243859 - type: nauc_precision_at_3_diff1 value: 8.528129662677928 - type: nauc_precision_at_3_max value: 28.849360755975507 - type: nauc_precision_at_3_std value: 34.75029987419905 - type: nauc_precision_at_5_diff1 value: 19.869351197305594 - type: nauc_precision_at_5_max value: 29.914134497945955 - type: nauc_precision_at_5_std value: 38.28417039860837 - type: nauc_recall_at_1000_diff1 value: 1.4864052347959933 - type: nauc_recall_at_1000_max value: 26.677496064573784 - type: nauc_recall_at_1000_std value: 57.58908006108822 - type: nauc_recall_at_100_diff1 value: 33.936895170931095 - type: nauc_recall_at_100_max value: 21.088704619988643 - type: nauc_recall_at_100_std value: 37.01573558787262 - type: nauc_recall_at_10_diff1 value: 26.69952553674343 - type: nauc_recall_at_10_max value: 5.387440771414719 - type: nauc_recall_at_10_std value: 5.475592115850053 - type: nauc_recall_at_1_diff1 value: 26.03047942436184 - type: nauc_recall_at_1_max value: -1.8634718917346798 - type: nauc_recall_at_1_std value: -2.7881471230811767 - type: nauc_recall_at_20_diff1 value: 29.20290402741169 - type: nauc_recall_at_20_max value: 10.503476589449743 - type: nauc_recall_at_20_std value: 12.909057340580492 - type: nauc_recall_at_3_diff1 value: 26.1874599109351 - type: nauc_recall_at_3_max value: 3.2330495640914725 - type: nauc_recall_at_3_std value: 3.4613897806900646 - type: nauc_recall_at_5_diff1 value: 28.414989541462955 - type: nauc_recall_at_5_max value: 0.2662078485691208 - type: nauc_recall_at_5_std value: 4.924665776620815 - type: ndcg_at_1 value: 88.0 - type: ndcg_at_10 value: 82.472 - type: ndcg_at_100 value: 66.188 - type: ndcg_at_1000 value: 60.05200000000001 - type: ndcg_at_20 value: 79.782 - type: ndcg_at_3 value: 84.939 - type: ndcg_at_5 value: 84.54700000000001 - type: precision_at_1 value: 92.0 - type: precision_at_10 value: 85.8 - type: precision_at_100 value: 68.16 - type: precision_at_1000 value: 26.496 - type: precision_at_20 value: 83.7 - type: precision_at_3 value: 88.0 - type: precision_at_5 value: 88.4 - type: recall_at_1 value: 0.244 - type: recall_at_10 value: 2.22 - type: recall_at_100 value: 16.697 - type: recall_at_1000 value: 57.033 - type: recall_at_20 value: 4.301 - type: recall_at_3 value: 0.685 - type: recall_at_5 value: 1.159 - task: type: Retrieval dataset: name: MTEB Touche2020 (default) type: mteb/touche2020 config: default split: test revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f metrics: - type: main_score value: 24.7 - type: map_at_1 value: 1.8350000000000002 - type: map_at_10 value: 9.385 - type: map_at_100 value: 15.751999999999999 - type: map_at_1000 value: 17.357 - type: map_at_20 value: 11.93 - type: map_at_3 value: 4.455 - type: map_at_5 value: 6.188 - type: mrr_at_1 value: 28.57142857142857 - type: mrr_at_10 value: 43.27340459993521 - type: mrr_at_100 value: 44.588722072878205 - type: mrr_at_1000 value: 44.588722072878205 - type: mrr_at_20 value: 44.17137463324359 - type: mrr_at_3 value: 39.455782312925166 - type: mrr_at_5 value: 41.70068027210883 - type: nauc_map_at_1000_diff1 value: 3.8298195437770537 - type: nauc_map_at_1000_max value: 13.771844732194829 - type: nauc_map_at_1000_std value: 19.29786404290547 - type: nauc_map_at_100_diff1 value: 3.8927604843499735 - type: nauc_map_at_100_max value: 12.769257441869922 - type: nauc_map_at_100_std value: 15.788237027355665 - type: nauc_map_at_10_diff1 value: 5.931490598251997 - type: nauc_map_at_10_max value: 13.680069332373563 - type: nauc_map_at_10_std value: 1.1621360655557855 - type: nauc_map_at_1_diff1 value: 2.5191045547339246 - type: nauc_map_at_1_max value: 26.262684180827723 - type: nauc_map_at_1_std value: 11.673938727339559 - type: nauc_map_at_20_diff1 value: 3.006592455468332 - type: nauc_map_at_20_max value: 11.717797097718446 - type: nauc_map_at_20_std value: 2.2724143319154315 - type: nauc_map_at_3_diff1 value: 12.831129328925664 - type: nauc_map_at_3_max value: 12.405779573598991 - type: nauc_map_at_3_std value: -2.8124187492891264 - type: nauc_map_at_5_diff1 value: 15.98021350983196 - type: nauc_map_at_5_max value: 9.056183547839753 - type: nauc_map_at_5_std value: -3.2826962878341788 - type: nauc_mrr_at_1000_diff1 value: -5.794890738524456 - type: nauc_mrr_at_1000_max value: 13.697185831632897 - type: nauc_mrr_at_1000_std value: 24.631969103480568 - type: nauc_mrr_at_100_diff1 value: -5.794890738524456 - type: nauc_mrr_at_100_max value: 13.697185831632897 - type: nauc_mrr_at_100_std value: 24.631969103480568 - type: nauc_mrr_at_10_diff1 value: -6.386203318843087 - type: nauc_mrr_at_10_max value: 13.244363733609843 - type: nauc_mrr_at_10_std value: 23.85218563666624 - type: nauc_mrr_at_1_diff1 value: -10.071571494502948 - type: nauc_mrr_at_1_max value: 17.28112978017911 - type: nauc_mrr_at_1_std value: 21.458308992920365 - type: nauc_mrr_at_20_diff1 value: -6.003367578570198 - type: nauc_mrr_at_20_max value: 13.911873192715065 - type: nauc_mrr_at_20_std value: 25.477100749222902 - type: nauc_mrr_at_3_diff1 value: -6.867484873134922 - type: nauc_mrr_at_3_max value: 15.64058174587745 - type: nauc_mrr_at_3_std value: 22.712306358508364 - type: nauc_mrr_at_5_diff1 value: -4.809966508264632 - type: nauc_mrr_at_5_max value: 12.095003702970027 - type: nauc_mrr_at_5_std value: 24.442255620074484 - type: nauc_ndcg_at_1000_diff1 value: 11.434789444093685 - type: nauc_ndcg_at_1000_max value: 19.86851237564927 - type: nauc_ndcg_at_1000_std value: 40.75483058183507 - type: nauc_ndcg_at_100_diff1 value: 8.47533044191214 - type: nauc_ndcg_at_100_max value: 13.575094411118432 - type: nauc_ndcg_at_100_std value: 35.56446142831008 - type: nauc_ndcg_at_10_diff1 value: 3.8335346471495133 - type: nauc_ndcg_at_10_max value: 13.604843458587629 - type: nauc_ndcg_at_10_std value: 14.026218004849186 - type: nauc_ndcg_at_1_diff1 value: -7.753442025343042 - type: nauc_ndcg_at_1_max value: 16.06299655203062 - type: nauc_ndcg_at_1_std value: 21.58079636889492 - type: nauc_ndcg_at_20_diff1 value: 7.440981403494449 - type: nauc_ndcg_at_20_max value: 10.960931136182648 - type: nauc_ndcg_at_20_std value: 14.132481758302665 - type: nauc_ndcg_at_3_diff1 value: 8.097829929493612 - type: nauc_ndcg_at_3_max value: 13.50824542271782 - type: nauc_ndcg_at_3_std value: 13.275247050693869 - type: nauc_ndcg_at_5_diff1 value: 11.971002032611313 - type: nauc_ndcg_at_5_max value: 7.246169276334145 - type: nauc_ndcg_at_5_std value: 13.975255468959613 - type: nauc_precision_at_1000_diff1 value: 5.616105475807897 - type: nauc_precision_at_1000_max value: 25.581074402479505 - type: nauc_precision_at_1000_std value: 28.030885522347404 - type: nauc_precision_at_100_diff1 value: 5.563452871367157 - type: nauc_precision_at_100_max value: 20.2742392314572 - type: nauc_precision_at_100_std value: 69.72201297915448 - type: nauc_precision_at_10_diff1 value: 1.3379158842989611 - type: nauc_precision_at_10_max value: 12.076929332870746 - type: nauc_precision_at_10_std value: 19.420680340269207 - type: nauc_precision_at_1_diff1 value: -10.071571494502948 - type: nauc_precision_at_1_max value: 17.28112978017911 - type: nauc_precision_at_1_std value: 21.458308992920365 - type: nauc_precision_at_20_diff1 value: 4.240200829917038 - type: nauc_precision_at_20_max value: 8.993878588160804 - type: nauc_precision_at_20_std value: 30.80491219798138 - type: nauc_precision_at_3_diff1 value: 12.886975175455992 - type: nauc_precision_at_3_max value: 11.298461410464169 - type: nauc_precision_at_3_std value: 10.518238245615933 - type: nauc_precision_at_5_diff1 value: 17.435560313660595 - type: nauc_precision_at_5_max value: 2.2155983021008256 - type: nauc_precision_at_5_std value: 11.998919133184952 - type: nauc_recall_at_1000_diff1 value: 20.776784820989995 - type: nauc_recall_at_1000_max value: 7.82142405608866 - type: nauc_recall_at_1000_std value: 61.814763636984914 - type: nauc_recall_at_100_diff1 value: 3.8928372388427777 - type: nauc_recall_at_100_max value: 0.9218533326334627 - type: nauc_recall_at_100_std value: 37.90057790091917 - type: nauc_recall_at_10_diff1 value: 1.620404946253575 - type: nauc_recall_at_10_max value: 7.657179453157968 - type: nauc_recall_at_10_std value: 1.4727974035045146 - type: nauc_recall_at_1_diff1 value: 2.5191045547339246 - type: nauc_recall_at_1_max value: 26.262684180827723 - type: nauc_recall_at_1_std value: 11.673938727339559 - type: nauc_recall_at_20_diff1 value: 1.7449368151648579 - type: nauc_recall_at_20_max value: 1.7480936001393137 - type: nauc_recall_at_20_std value: 4.712827220834413 - type: nauc_recall_at_3_diff1 value: 11.857687870032672 - type: nauc_recall_at_3_max value: 7.014395082015944 - type: nauc_recall_at_3_std value: -2.9526240248867732 - type: nauc_recall_at_5_diff1 value: 15.486663741149448 - type: nauc_recall_at_5_max value: 2.6981538842605866 - type: nauc_recall_at_5_std value: -2.279294596760028 - type: ndcg_at_1 value: 26.531 - type: ndcg_at_10 value: 24.7 - type: ndcg_at_100 value: 37.21 - type: ndcg_at_1000 value: 48.687999999999995 - type: ndcg_at_20 value: 25.365 - type: ndcg_at_3 value: 26.8 - type: ndcg_at_5 value: 24.618000000000002 - type: precision_at_1 value: 28.571 - type: precision_at_10 value: 22.041 - type: precision_at_100 value: 8.122 - type: precision_at_1000 value: 1.5879999999999999 - type: precision_at_20 value: 16.735 - type: precision_at_3 value: 27.211000000000002 - type: precision_at_5 value: 24.082 - type: recall_at_1 value: 1.8350000000000002 - type: recall_at_10 value: 16.039 - type: recall_at_100 value: 49.82 - type: recall_at_1000 value: 85.979 - type: recall_at_20 value: 24.169999999999998 - type: recall_at_3 value: 5.789 - type: recall_at_5 value: 8.725 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification (default) type: mteb/toxic_conversations_50k config: default split: test revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de metrics: - type: accuracy value: 67.314453125 - type: ap value: 13.15420979399679 - type: ap_weighted value: 13.15420979399679 - type: f1 value: 52.03706668900905 - type: f1_weighted value: 74.55554872499289 - type: main_score value: 67.314453125 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification (default) type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 64.03225806451613 - type: f1 value: 64.06843534121843 - type: f1_weighted value: 62.74796899202356 - type: main_score value: 64.03225806451613 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering (default) type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: main_score value: 53.5182998227727 - type: v_measure value: 53.5182998227727 - type: v_measure_std value: 1.8758215247032688 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 (default) type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cosine_accuracy value: 88.77630088812064 - type: cosine_accuracy_threshold value: 78.23218107223511 - type: cosine_ap value: 82.3910336126561 - type: cosine_f1 value: 74.85029940119759 - type: cosine_f1_threshold value: 74.12334680557251 - type: cosine_precision value: 70.9891150023663 - type: cosine_recall value: 79.15567282321899 - type: dot_accuracy value: 88.77630088812064 - type: dot_accuracy_threshold value: 78.23217511177063 - type: dot_ap value: 82.39102411942014 - type: dot_f1 value: 74.85029940119759 - type: dot_f1_threshold value: 74.12335872650146 - type: dot_precision value: 70.9891150023663 - type: dot_recall value: 79.15567282321899 - type: euclidean_accuracy value: 88.77630088812064 - type: euclidean_accuracy_threshold value: 65.98154306411743 - type: euclidean_ap value: 82.39103194961726 - type: euclidean_f1 value: 74.85029940119759 - type: euclidean_f1_threshold value: 71.93976640701294 - type: euclidean_precision value: 70.9891150023663 - type: euclidean_recall value: 79.15567282321899 - type: main_score value: 82.39103516928604 - type: manhattan_accuracy value: 88.72265601716636 - type: manhattan_accuracy_threshold value: 3392.3141479492188 - type: manhattan_ap value: 82.37303670339044 - type: manhattan_f1 value: 74.91452450297581 - type: manhattan_f1_threshold value: 3631.7977905273438 - type: manhattan_precision value: 72.02337472607742 - type: manhattan_recall value: 78.04749340369393 - type: max_accuracy value: 88.77630088812064 - type: max_ap value: 82.39103516928604 - type: max_f1 value: 74.91452450297581 - type: max_precision value: 72.02337472607742 - type: max_recall value: 79.15567282321899 - type: similarity_accuracy value: 88.77630088812064 - type: similarity_accuracy_threshold value: 1776.1724472045898 - type: similarity_ap value: 82.39103516928604 - type: similarity_f1 value: 74.85029940119759 - type: similarity_f1_threshold value: 1682.8865051269531 - type: similarity_precision value: 70.9891150023663 - type: similarity_recall value: 79.15567282321899 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus (default) type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cosine_accuracy value: 89.86688399891334 - type: cosine_accuracy_threshold value: 72.39344120025635 - type: cosine_ap value: 87.60291936910795 - type: cosine_f1 value: 80.1902671057446 - type: cosine_f1_threshold value: 69.8064923286438 - type: cosine_precision value: 76.40496444010599 - type: cosine_recall value: 84.3701878657222 - type: dot_accuracy value: 89.86688399891334 - type: dot_accuracy_threshold value: 72.39343523979187 - type: dot_ap value: 87.60292256175723 - type: dot_f1 value: 80.1902671057446 - type: dot_f1_threshold value: 69.80648040771484 - type: dot_precision value: 76.40496444010599 - type: dot_recall value: 84.3701878657222 - type: euclidean_accuracy value: 89.86688399891334 - type: euclidean_accuracy_threshold value: 74.30553436279297 - type: euclidean_ap value: 87.60291559616975 - type: euclidean_f1 value: 80.1902671057446 - type: euclidean_f1_threshold value: 77.70909070968628 - type: euclidean_precision value: 76.40496444010599 - type: euclidean_recall value: 84.3701878657222 - type: main_score value: 87.60538403560003 - type: manhattan_accuracy value: 89.88822913028291 - type: manhattan_accuracy_threshold value: 3783.2366943359375 - type: manhattan_ap value: 87.60538403560003 - type: manhattan_f1 value: 80.16710642040458 - type: manhattan_f1_threshold value: 3948.430633544922 - type: manhattan_precision value: 76.4895104895105 - type: manhattan_recall value: 84.21619956883278 - type: max_accuracy value: 89.88822913028291 - type: max_ap value: 87.60538403560003 - type: max_f1 value: 80.1902671057446 - type: max_precision value: 76.4895104895105 - type: max_recall value: 84.3701878657222 - type: similarity_accuracy value: 89.86688399891334 - type: similarity_accuracy_threshold value: 1643.6103820800781 - type: similarity_ap value: 87.6029128769812 - type: similarity_f1 value: 80.1902671057446 - type: similarity_f1_threshold value: 1584.8767280578613 - type: similarity_precision value: 76.40496444010599 - type: similarity_recall value: 84.3701878657222 --- # BinGE: TODO TODO: 2 line summary and link to paper ## Usage ```python import torch from transformers import AutoTokenizer, AutoModel, AutoConfig from peft import PeftModel if __name__ == "__main__": # Loading base Meta-Llama-3 model, along with custom code that enables bidirectional connections in decoder-only LLMs. tokenizer = AutoTokenizer.from_pretrained( "McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp" ) config = AutoConfig.from_pretrained( "McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp", trust_remote_code=True ) model = AutoModel.from_pretrained( "McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp", trust_remote_code=True, config=config, torch_dtype=torch.bfloat16, device_map="cuda" if torch.cuda.is_available() else "cpu", ) # Loading MNTP (Masked Next Token Prediction) model. model = PeftModel.from_pretrained( model, "McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp", ) model = model.merge_and_unload() # This can take several minutes on cpu # Loading BinGSE model. This loads the trained LoRA weights on top of MNTP model. Hence the final weights are -- Base model + MNTP (LoRA) + BinGSE (LoRA). model = PeftModel.from_pretrained( model, model_path ) ``` TODO: initialize wrapper, provide example to check loading happened properly - see https://huggingface.co/McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp-unsup-simcse
[ "BIOSSES", "SCIFACT" ]
medspaner/mdeberta-v3-base-re-ct
medspaner
null
[ "transformers", "safetensors", "deberta-v2", "es", "base_model:microsoft/mdeberta-v3-base", "base_model:finetune:microsoft/mdeberta-v3-base", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
2024-10-28T15:02:16Z
2024-11-13T12:19:20+00:00
14
0
--- base_model: - microsoft/mdeberta-v3-base language: - es library_name: transformers license: cc-by-nc-4.0 metrics: - accuracy - precision - recall - f1 --- # Model Card for mdeberta-v3-base-re-ct This relation extraction model extracts intervention-associated relationships, temporal relations, negation/speculation and others relevant for clinical trials. The model achieves the following results on the test set (when trained with the training and development set; results are averaged over 5 evaluation rounds): - Precision: 0.886 (±0.003) - Recall: 0.857 (±0.007) - F1: 0.869 (±0.005) - Accuracy: 0.911 (±0.003) ## Model description This model adapts the pre-trained model [mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base). It is fine-tuned to conduct relation extraction on Spanish texts about clinical trials. The model is fine-tuned on the [Clinical Trials for Evidence-Based-Medicine in Spanish corpus](http://www.lllf.uam.es/ESP/nlpdata/wp2/). If you use this model, please, cite as follows: ``` @article{campillosetal2025,         title = {{Benchmarking Transformer Models for Relation Extraction and Concept Normalization in a Clinical Trials Corpus}},         author = {Campillos-Llanos, Leonardo and Valverde-Mateos, Ana and Capllonch-Carri{\'o}n, Adri{\'a}n and Zakhir-Puig, Sof{\'i}a and Heras-Vicente, J{\'o}nathan},         journal = {(Under review)}, year={2025} } ``` ## Intended uses & limitations **Disclosure**: *This model is under development and needs to be improved. It should not be used for medical decision making without human assistance and supervision* This model is intended for a generalist purpose, and may have bias and/or any other undesirable distortions. Third parties who deploy or provide systems and/or services using any of these models (or using systems based on these models) should note that it is their responsibility to mitigate the risks arising from their use. Third parties, in any event, need to comply with applicable regulations, including regulations concerning the use of artificial intelligence. The owner or creator of the models will in no event be liable for any results arising from the use made by third parties of these models. **Descargo de responsabilidad**: *Esta herramienta se encuentra en desarrollo y no debe ser empleada para la toma de decisiones médicas* La finalidad de este modelo es generalista, y se advierte que puede tener sesgos y/u otro tipo de distorsiones indeseables. Terceras partes que desplieguen o proporcionen sistemas y/o servicios usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) han tener presente que es su responsabilidad abordar y minimizar los riesgos derivados de su uso. Las terceras partes, en cualquier circunstancia, deben cumplir con la normativa aplicable, incluyendo la normativa que concierne al uso de la inteligencia artificial. El propietario o creador de los modelos de ningún modo será responsable de los resultados derivados del uso que las terceras partes hagan de estos modelos. ## Training and evaluation data The data used for fine-tuning are the [Clinical Trials for Evidence-Based-Medicine in Spanish corpus](http://www.lllf.uam.es/ESP/nlpdata/wp2/) version 3 (annotated with semantic relationships). It is a collection of 1200 texts about clinical trials studies and clinical trials announcements: - 500 abstracts from journals published under a Creative Commons license, e.g. available in PubMed or the Scientific Electronic Library Online (SciELO) - 700 clinical trials announcements published in the European Clinical Trials Register and Repositorio Español de Estudios Clínicos The CT-EBM-ES resource (version 1) can be cited as follows: ``` @article{campillosetal-midm2021,         title = {A clinical trials corpus annotated with UMLS© entities to enhance the access to Evidence-Based Medicine},         author = {Campillos-Llanos, Leonardo and Valverde-Mateos, Ana and Capllonch-Carri{\'o}n, Adri{\'a}n and Moreno-Sandoval, Antonio},         journal = {BMC Medical Informatics and Decision Making},         volume={21}, number={1}, pages={1--19}, year={2021}, publisher={BioMed Central} } ``` ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: we used different seeds for 5 evaluation rounds, and uploaded the model with the best results - optimizer: AdamW - weight decay: 1e-2 - lr_scheduler_type: linear - num_epochs: 5 epochs. ### Training results (test set; average and standard deviation of 5 rounds with different seeds) | Precision | Recall | F1 | Accuracy | |:--------------:|:--------------:|:--------------:|:--------------:| | 0.886 (±0.003) | 0.857 (±0.007) | 0.869 (±0.005) | 0.911 (±0.003) | **Results per class (test set; best model)** | Class | Precision | Recall | F1 | Support | |:---------------:|:--------------:|:--------------:|:--------------:|:---------:| | Experiences | 0.96 | 0.97 | 0.97 | 2003 | | Has_Age | 0.93 | 0.84 | 0.88 | 152 | | Has_Dose_or_Strength | 0.84 | 0.81 | 0.83 | 189 | | Has_Drug_Form | 0.90 | 0.73 | 0.81 | 64 | | Has_Duration_or_Interval | 0.83 | 0.84 | 0.84 | 365 | | Has_Frequency | 0.79 | 0.86 | 0.82 | 84 | | Has_Quantifier_or_Qualifier | 0.91 | 0.89 | 0.90 | 1040 | | Has_Result_or_Value | 0.92 | 0.87 | 0.89 | 384 | | Has_Route_or_Mode | 0.91 | 0.87 | 0.89 | 221 | | Has_Time_Data | 0.83 | 0.91 | 0.86 | 589 | | Location_of | 0.96 | 0.96 | 0.96 | 1119 | | Used_for | 0.89 | 0.88 | 0.89 | 731 | ### Framework versions - Transformers 4.42.4 - Pytorch 2.0.1+cu117 - Datasets 2.15.0 - Tokenizers 0.19.1
[ "SCIELO" ]
glif-loradex-trainer/maxxd4240_LazyAnimalMemes
glif-loradex-trainer
text-to-image
[ "diffusers", "text-to-image", "template:sd-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:finetune:black-forest-labs/FLUX.1-dev", "license:other", "region:us", "flux", "lora", "base_model:adapter:black-forest-labs/FLUX.1-dev" ]
2024-10-30T13:02:55Z
2024-10-30T13:03:39+00:00
14
0
--- base_model: black-forest-labs/FLUX.1-dev license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md tags: - diffusers - text-to-image - template:sd-lora - base_model:black-forest-labs/FLUX.1-dev - base_model:finetune:black-forest-labs/FLUX.1-dev - license:other - region:us - flux - lora widget: - output: url: samples/1730293263727__000002500_0.jpg text: fat bear saying he eat too much honey laz!! - output: url: samples/1730293288374__000002500_1.jpg text: fat bear in car laz!! - output: url: samples/1730293313150__000002500_2.jpg text: dog eating bananalaz!! - output: url: samples/1730293338006__000002500_3.jpg text: pink cat with rabbitlaz!! - output: url: samples/1730293362933__000002500_4.jpg text: yellow duck floating in bathtublaz!! trigger: laz!! instance_prompt: laz!! --- # LazyAnimalMemes Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `maxxd4240`. <Gallery /> ## Trigger words You should use `laz!!` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/glif-loradex-trainer/maxxd4240_LazyAnimalMemes/tree/main) them in the Files & versions tab. ## License This model is licensed under the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
[ "BEAR" ]
humane-intelligence/gemma2-9b-cpt-sealionv3-instruct-endpoint
humane-intelligence
text-generation
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "en", "id", "ta", "th", "vi", "arxiv:2309.06085", "arxiv:2311.07911", "arxiv:2306.05685", "license:gemma", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-11-02T17:22:53Z
2024-11-03T00:53:55+00:00
14
0
--- language: - en - id - ta - th - vi library_name: transformers license: gemma pipeline_tag: text-generation --- # Gemma2 9B CPT SEA-LIONv3 Instruct SEA-LION is a collection of Large Language Models (LLMs) which has been pretrained and instruct-tuned for the Southeast Asia (SEA) region. Gemma2 9B CPT SEA-LIONv3 Instruct is a multilingual model which has been fine-tuned with around **500,000 English instruction-completion pairs** alongside a larger pool of around **1,000,000 instruction-completion pairs** from other ASEAN languages, such as Indonesian, Thai and Vietnamese. SEA-LION stands for _Southeast Asian Languages In One Network_. - **Developed by:** Products Pillar, AI Singapore - **Funded by:** Singapore NRF - **Model type:** Decoder - **Languages:** English, Indonesian, Thai, Vietnamese, Tamil - **License:** [Gemma Community License](https://ai.google.dev/gemma/terms) ## Model Details ### Model Description We performed instruction tuning in English and also in ASEAN languages such as Indonesian, Thai and Vietnamese on our [continued pre-trained Gemma2 9B CPT SEA-LIONv3](https://huggingface.co/aisingapore/gemma2-9b-cpt-sea-lionv3-base), a decoder model using the Gemma2 architecture, to create Gemma2 9B CPT SEA-LIONv3 Instruct. The model has a context length of 8192. ### Benchmark Performance We evaluated Gemma2 9B CPT SEA-LIONv3 Instruct on both general language capabilities and instruction-following capabilities. #### General Language Capabilities For the evaluation of general language capabilities, we employed the [SEA HELM (also known as BHASA) evaluation benchmark](https://arxiv.org/abs/2309.06085v2) across a variety of tasks. These tasks include Question Answering (QA), Sentiment Analysis (Sentiment), Toxicity Detection (Toxicity), Translation in both directions (Eng>Lang & Lang>Eng), Abstractive Summarization (Summ), Causal Reasoning (Causal) and Natural Language Inference (NLI). Note: SEA HELM is implemented using prompts which expect answers in a strict format. For all tasks, the model is expected to provide an answer tag from which the answer would be extracted. For tasks where options are provided, the answer should only include one of the pre-defined options. The weighted accuracy of the answers is calculated and normalisation is performed to account for baseline performance due to random chance. The evaluation was done zero-shot with native prompts and only a sample of 100-1000 instances for each dataset was used as per the setting described in the paper. #### Instruction-following Capabilities Since Gemma2 9B CPT SEA-LIONv3 Instruct is an instruction-following model, we also evaluated it on instruction-following capabilities with two datasets, [IFEval](https://arxiv.org/abs/2311.07911) and [MT-Bench](https://arxiv.org/abs/2306.05685). As these two datasets were originally in English, the linguists and native speakers in the team worked together to filter, localize and translate the datasets into the respective target languages to ensure that the examples remained reasonable, meaningful and natural. **IFEval** IFEval evaluates a model's ability to adhere to constraints provided in the prompt, for example beginning a response with a specific word/phrase or answering with a certain number of sections. The metric used is accuracy normalized by language (if the model performs the task correctly but responds in the wrong language, it is judged to have failed the task). **MT-Bench** MT-Bench evaluates a model's ability to engage in multi-turn (2 turns) conversations and respond in ways that align with human needs. We use `gpt-4-1106-preview` as the judge model and compare against `gpt-3.5-turbo-0125` as the baseline model. The metric used is the weighted win rate against the baseline model (i.e. average win rate across each category (Math, Reasoning, STEM, Humanities, Roleplay, Writing, Extraction)). A tie is given a score of 0.5. For more details on Gemma2 9B CPT SEA-LIONv3 Instruct benchmark performance, please refer to the SEA HELM leaderboard, https://leaderboard.sea-lion.ai/ ### Usage Gemma2 9B CPT SEA-LIONv3 Instruct can be run using the 🤗 Transformers library ```python # Please use transformers==4.45.2 import transformers import torch model_id = "aisingapore/gemma2-9b-cpt-sea-lionv3-instruct" pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto", ) messages = [ {"role": "user", "content": "Apa sentimen dari kalimat berikut ini?\nKalimat: Buku ini sangat membosankan.\nJawaban: "}, ] outputs = pipeline( messages, max_new_tokens=256, ) print(outputs[0]["generated_text"][-1]) ``` ### Caveats It is important for users to be aware that our model exhibits certain limitations that warrant consideration. Like many LLMs, the model can hallucinate and occasionally generates irrelevant content, introducing fictional elements that are not grounded in the provided context. Users should also exercise caution in interpreting and validating the model's responses due to the potential inconsistencies in its reasoning. ## Limitations ### Safety Current SEA-LION models, including this commercially permissive release, have not been aligned for safety. Developers and users should perform their own safety fine-tuning and related security measures. In no event shall the authors be held liable for any claim, damages, or other liability arising from the use of the released weights and codes. ## Technical Specifications ### Fine-Tuning Details Gemma2 9B CPT SEA-LIONv3 Instruct was built using a combination of a full parameter fine-tune, on-policy alignment, and model merges of the best performing checkpoints. The training process for fine-tuning was approximately 15 hours, with alignment taking 2 hours, both on 8x H100-80GB GPUs. ## Data Gemma2 9B CPT SEA-LIONv3 Instruct was trained on a wide range of synthetic instructions, alongside publicly available instructions hand-curated by the team with the assistance of native speakers. In addition, special care was taken to ensure that the datasets used had commercially permissive licenses through verification with the original data source. ## Call for Contributions We encourage researchers, developers, and language enthusiasts to actively contribute to the enhancement and expansion of SEA-LION. Contributions can involve identifying and reporting bugs, sharing pre-training, instruction, and preference data, improving documentation usability, proposing and implementing new model evaluation tasks and metrics, or training versions of the model in additional Southeast Asian languages. Join us in shaping the future of SEA-LION by sharing your expertise and insights to make these models more accessible, accurate, and versatile. Please check out our GitHub for further information on the call for contributions. ## The Team Chan Adwin, Choa Esther, Cheng Nicholas, Huang Yuli, Lau Wayne, Lee Chwan Ren, Leong Wai Yi, Leong Wei Qi, Limkonchotiwat Peerat, Liu Bing Jie Darius, Montalan Jann Railey, Ng Boon Cheong Raymond, Ngui Jian Gang, Nguyen Thanh Ngan, Ong Brandon, Ong Tat-Wee David, Ong Zhi Hao, Rengarajan Hamsawardhini, Siow Bryan, Susanto Yosephine, Tai Ngee Chia, Tan Choon Meng, Teo Eng Sipp Leslie, Teo Wei Yi, Tjhi William, Teng Walter, Yeo Yeow Tong, Yong Xianbin ## Acknowledgements [AI Singapore](​​https://aisingapore.org/) is a national programme supported by the National Research Foundation, Singapore and hosted by the National University of Singapore. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation or the National University of Singapore. ## Contact For more info, please contact us using this [SEA-LION Inquiry Form](https://forms.gle/sLCUVb95wmGf43hi6) [Link to SEA-LION's GitHub repository](https://github.com/aisingapore/sealion) ## Disclaimer This is the repository for the commercial instruction-tuned model. The model has _not_ been aligned for safety. Developers and users should perform their own safety fine-tuning and related security measures. In no event shall the authors be held liable for any claims, damages, or other liabilities arising from the use of the released weights and codes.
[ "CHIA" ]
tomaarsen/bge-small-en-v1.5-copy
tomaarsen
feature-extraction
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "transformers", "mteb", "en", "arxiv:2401.03462", "arxiv:2312.15503", "arxiv:2311.13534", "arxiv:2310.07554", "arxiv:2309.07597", "license:mit", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2024-11-13T09:47:58Z
2024-11-13T09:48:07+00:00
14
0
--- language: - en license: mit tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers - mteb model-index: - name: bge-small-en-v1.5 results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 73.79104477611939 - type: ap value: 37.21923821573361 - type: f1 value: 68.0914945617093 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 92.75377499999999 - type: ap value: 89.46766124546022 - type: f1 value: 92.73884001331487 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 46.986 - type: f1 value: 46.55936786727896 - task: type: Retrieval dataset: name: MTEB ArguAna type: arguana config: default split: test revision: None metrics: - type: map_at_1 value: 35.846000000000004 - type: map_at_10 value: 51.388 - type: map_at_100 value: 52.132999999999996 - type: map_at_1000 value: 52.141000000000005 - type: map_at_3 value: 47.037 - type: map_at_5 value: 49.579 - type: mrr_at_1 value: 36.558 - type: mrr_at_10 value: 51.658 - type: mrr_at_100 value: 52.402 - type: mrr_at_1000 value: 52.410000000000004 - type: mrr_at_3 value: 47.345 - type: mrr_at_5 value: 49.797999999999995 - type: ndcg_at_1 value: 35.846000000000004 - type: ndcg_at_10 value: 59.550000000000004 - type: ndcg_at_100 value: 62.596 - type: ndcg_at_1000 value: 62.759 - type: ndcg_at_3 value: 50.666999999999994 - type: ndcg_at_5 value: 55.228 - type: precision_at_1 value: 35.846000000000004 - type: precision_at_10 value: 8.542 - type: precision_at_100 value: 0.984 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 20.389 - type: precision_at_5 value: 14.438 - type: recall_at_1 value: 35.846000000000004 - type: recall_at_10 value: 85.42 - type: recall_at_100 value: 98.43499999999999 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 61.166 - type: recall_at_5 value: 72.191 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 47.402770198163594 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 40.01545436974177 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 62.586465273207196 - type: mrr value: 74.42169019038825 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 85.1891186537969 - type: cos_sim_spearman value: 83.75492046087288 - type: euclidean_pearson value: 84.11766204805357 - type: euclidean_spearman value: 84.01456493126516 - type: manhattan_pearson value: 84.2132950502772 - type: manhattan_spearman value: 83.89227298813377 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 85.74025974025975 - type: f1 value: 85.71493566466381 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 38.467181385006434 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 34.719496037339056 - task: type: Retrieval dataset: name: MTEB CQADupstackAndroidRetrieval type: BeIR/cqadupstack config: default split: test revision: None metrics: - type: map_at_1 value: 29.587000000000003 - type: map_at_10 value: 41.114 - type: map_at_100 value: 42.532 - type: map_at_1000 value: 42.661 - type: map_at_3 value: 37.483 - type: map_at_5 value: 39.652 - type: mrr_at_1 value: 36.338 - type: mrr_at_10 value: 46.763 - type: mrr_at_100 value: 47.393 - type: mrr_at_1000 value: 47.445 - type: mrr_at_3 value: 43.538 - type: mrr_at_5 value: 45.556000000000004 - type: ndcg_at_1 value: 36.338 - type: ndcg_at_10 value: 47.658 - type: ndcg_at_100 value: 52.824000000000005 - type: ndcg_at_1000 value: 54.913999999999994 - type: ndcg_at_3 value: 41.989 - type: ndcg_at_5 value: 44.944 - type: precision_at_1 value: 36.338 - type: precision_at_10 value: 9.156 - type: precision_at_100 value: 1.4789999999999999 - type: precision_at_1000 value: 0.196 - type: precision_at_3 value: 20.076 - type: precision_at_5 value: 14.85 - type: recall_at_1 value: 29.587000000000003 - type: recall_at_10 value: 60.746 - type: recall_at_100 value: 82.157 - type: recall_at_1000 value: 95.645 - type: recall_at_3 value: 44.821 - type: recall_at_5 value: 52.819 - type: map_at_1 value: 30.239 - type: map_at_10 value: 39.989000000000004 - type: map_at_100 value: 41.196 - type: map_at_1000 value: 41.325 - type: map_at_3 value: 37.261 - type: map_at_5 value: 38.833 - type: mrr_at_1 value: 37.516 - type: mrr_at_10 value: 46.177 - type: mrr_at_100 value: 46.806 - type: mrr_at_1000 value: 46.849000000000004 - type: mrr_at_3 value: 44.002 - type: mrr_at_5 value: 45.34 - type: ndcg_at_1 value: 37.516 - type: ndcg_at_10 value: 45.586 - type: ndcg_at_100 value: 49.897000000000006 - type: ndcg_at_1000 value: 51.955 - type: ndcg_at_3 value: 41.684 - type: ndcg_at_5 value: 43.617 - type: precision_at_1 value: 37.516 - type: precision_at_10 value: 8.522 - type: precision_at_100 value: 1.374 - type: precision_at_1000 value: 0.184 - type: precision_at_3 value: 20.105999999999998 - type: precision_at_5 value: 14.152999999999999 - type: recall_at_1 value: 30.239 - type: recall_at_10 value: 55.03 - type: recall_at_100 value: 73.375 - type: recall_at_1000 value: 86.29599999999999 - type: recall_at_3 value: 43.269000000000005 - type: recall_at_5 value: 48.878 - type: map_at_1 value: 38.338 - type: map_at_10 value: 50.468999999999994 - type: map_at_100 value: 51.553000000000004 - type: map_at_1000 value: 51.608 - type: map_at_3 value: 47.107 - type: map_at_5 value: 49.101 - type: mrr_at_1 value: 44.201 - type: mrr_at_10 value: 54.057 - type: mrr_at_100 value: 54.764 - type: mrr_at_1000 value: 54.791000000000004 - type: mrr_at_3 value: 51.56699999999999 - type: mrr_at_5 value: 53.05 - type: ndcg_at_1 value: 44.201 - type: ndcg_at_10 value: 56.379000000000005 - type: ndcg_at_100 value: 60.645 - type: ndcg_at_1000 value: 61.73499999999999 - type: ndcg_at_3 value: 50.726000000000006 - type: ndcg_at_5 value: 53.58500000000001 - type: precision_at_1 value: 44.201 - type: precision_at_10 value: 9.141 - type: precision_at_100 value: 1.216 - type: precision_at_1000 value: 0.135 - type: precision_at_3 value: 22.654 - type: precision_at_5 value: 15.723999999999998 - type: recall_at_1 value: 38.338 - type: recall_at_10 value: 70.30499999999999 - type: recall_at_100 value: 88.77199999999999 - type: recall_at_1000 value: 96.49799999999999 - type: recall_at_3 value: 55.218 - type: recall_at_5 value: 62.104000000000006 - type: map_at_1 value: 25.682 - type: map_at_10 value: 33.498 - type: map_at_100 value: 34.461000000000006 - type: map_at_1000 value: 34.544000000000004 - type: map_at_3 value: 30.503999999999998 - type: map_at_5 value: 32.216 - type: mrr_at_1 value: 27.683999999999997 - type: mrr_at_10 value: 35.467999999999996 - type: mrr_at_100 value: 36.32 - type: mrr_at_1000 value: 36.386 - type: mrr_at_3 value: 32.618 - type: mrr_at_5 value: 34.262 - type: ndcg_at_1 value: 27.683999999999997 - type: ndcg_at_10 value: 38.378 - type: ndcg_at_100 value: 43.288 - type: ndcg_at_1000 value: 45.413 - type: ndcg_at_3 value: 32.586 - type: ndcg_at_5 value: 35.499 - type: precision_at_1 value: 27.683999999999997 - type: precision_at_10 value: 5.864 - type: precision_at_100 value: 0.882 - type: precision_at_1000 value: 0.11 - type: precision_at_3 value: 13.446 - type: precision_at_5 value: 9.718 - type: recall_at_1 value: 25.682 - type: recall_at_10 value: 51.712 - type: recall_at_100 value: 74.446 - type: recall_at_1000 value: 90.472 - type: recall_at_3 value: 36.236000000000004 - type: recall_at_5 value: 43.234 - type: map_at_1 value: 16.073999999999998 - type: map_at_10 value: 24.352999999999998 - type: map_at_100 value: 25.438 - type: map_at_1000 value: 25.545 - type: map_at_3 value: 21.614 - type: map_at_5 value: 23.104 - type: mrr_at_1 value: 19.776 - type: mrr_at_10 value: 28.837000000000003 - type: mrr_at_100 value: 29.755 - type: mrr_at_1000 value: 29.817 - type: mrr_at_3 value: 26.201999999999998 - type: mrr_at_5 value: 27.714 - type: ndcg_at_1 value: 19.776 - type: ndcg_at_10 value: 29.701 - type: ndcg_at_100 value: 35.307 - type: ndcg_at_1000 value: 37.942 - type: ndcg_at_3 value: 24.764 - type: ndcg_at_5 value: 27.025 - type: precision_at_1 value: 19.776 - type: precision_at_10 value: 5.659 - type: precision_at_100 value: 0.971 - type: precision_at_1000 value: 0.133 - type: precision_at_3 value: 12.065 - type: precision_at_5 value: 8.905000000000001 - type: recall_at_1 value: 16.073999999999998 - type: recall_at_10 value: 41.647 - type: recall_at_100 value: 66.884 - type: recall_at_1000 value: 85.91499999999999 - type: recall_at_3 value: 27.916 - type: recall_at_5 value: 33.729 - type: map_at_1 value: 28.444999999999997 - type: map_at_10 value: 38.218999999999994 - type: map_at_100 value: 39.595 - type: map_at_1000 value: 39.709 - type: map_at_3 value: 35.586 - type: map_at_5 value: 36.895 - type: mrr_at_1 value: 34.841 - type: mrr_at_10 value: 44.106 - type: mrr_at_100 value: 44.98 - type: mrr_at_1000 value: 45.03 - type: mrr_at_3 value: 41.979 - type: mrr_at_5 value: 43.047999999999995 - type: ndcg_at_1 value: 34.841 - type: ndcg_at_10 value: 43.922 - type: ndcg_at_100 value: 49.504999999999995 - type: ndcg_at_1000 value: 51.675000000000004 - type: ndcg_at_3 value: 39.858 - type: ndcg_at_5 value: 41.408 - type: precision_at_1 value: 34.841 - type: precision_at_10 value: 7.872999999999999 - type: precision_at_100 value: 1.2449999999999999 - type: precision_at_1000 value: 0.161 - type: precision_at_3 value: 18.993 - type: precision_at_5 value: 13.032 - type: recall_at_1 value: 28.444999999999997 - type: recall_at_10 value: 54.984 - type: recall_at_100 value: 78.342 - type: recall_at_1000 value: 92.77 - type: recall_at_3 value: 42.842999999999996 - type: recall_at_5 value: 47.247 - type: map_at_1 value: 23.072 - type: map_at_10 value: 32.354 - type: map_at_100 value: 33.800000000000004 - type: map_at_1000 value: 33.908 - type: map_at_3 value: 29.232000000000003 - type: map_at_5 value: 31.049 - type: mrr_at_1 value: 29.110000000000003 - type: mrr_at_10 value: 38.03 - type: mrr_at_100 value: 39.032 - type: mrr_at_1000 value: 39.086999999999996 - type: mrr_at_3 value: 35.407 - type: mrr_at_5 value: 36.76 - type: ndcg_at_1 value: 29.110000000000003 - type: ndcg_at_10 value: 38.231 - type: ndcg_at_100 value: 44.425 - type: ndcg_at_1000 value: 46.771 - type: ndcg_at_3 value: 33.095 - type: ndcg_at_5 value: 35.459 - type: precision_at_1 value: 29.110000000000003 - type: precision_at_10 value: 7.215000000000001 - type: precision_at_100 value: 1.2109999999999999 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 16.058 - type: precision_at_5 value: 11.644 - type: recall_at_1 value: 23.072 - type: recall_at_10 value: 50.285999999999994 - type: recall_at_100 value: 76.596 - type: recall_at_1000 value: 92.861 - type: recall_at_3 value: 35.702 - type: recall_at_5 value: 42.152 - type: map_at_1 value: 24.937916666666666 - type: map_at_10 value: 33.755250000000004 - type: map_at_100 value: 34.955999999999996 - type: map_at_1000 value: 35.070499999999996 - type: map_at_3 value: 30.98708333333333 - type: map_at_5 value: 32.51491666666666 - type: mrr_at_1 value: 29.48708333333333 - type: mrr_at_10 value: 37.92183333333334 - type: mrr_at_100 value: 38.76583333333333 - type: mrr_at_1000 value: 38.82466666666667 - type: mrr_at_3 value: 35.45125 - type: mrr_at_5 value: 36.827000000000005 - type: ndcg_at_1 value: 29.48708333333333 - type: ndcg_at_10 value: 39.05225 - type: ndcg_at_100 value: 44.25983333333334 - type: ndcg_at_1000 value: 46.568333333333335 - type: ndcg_at_3 value: 34.271583333333325 - type: ndcg_at_5 value: 36.483916666666666 - type: precision_at_1 value: 29.48708333333333 - type: precision_at_10 value: 6.865749999999999 - type: precision_at_100 value: 1.1195833333333332 - type: precision_at_1000 value: 0.15058333333333335 - type: precision_at_3 value: 15.742083333333333 - type: precision_at_5 value: 11.221916666666667 - type: recall_at_1 value: 24.937916666666666 - type: recall_at_10 value: 50.650416666666665 - type: recall_at_100 value: 73.55383333333334 - type: recall_at_1000 value: 89.61691666666667 - type: recall_at_3 value: 37.27808333333334 - type: recall_at_5 value: 42.99475 - type: map_at_1 value: 23.947 - type: map_at_10 value: 30.575000000000003 - type: map_at_100 value: 31.465 - type: map_at_1000 value: 31.558000000000003 - type: map_at_3 value: 28.814 - type: map_at_5 value: 29.738999999999997 - type: mrr_at_1 value: 26.994 - type: mrr_at_10 value: 33.415 - type: mrr_at_100 value: 34.18 - type: mrr_at_1000 value: 34.245 - type: mrr_at_3 value: 31.621 - type: mrr_at_5 value: 32.549 - type: ndcg_at_1 value: 26.994 - type: ndcg_at_10 value: 34.482 - type: ndcg_at_100 value: 38.915 - type: ndcg_at_1000 value: 41.355 - type: ndcg_at_3 value: 31.139 - type: ndcg_at_5 value: 32.589 - type: precision_at_1 value: 26.994 - type: precision_at_10 value: 5.322 - type: precision_at_100 value: 0.8160000000000001 - type: precision_at_1000 value: 0.11100000000000002 - type: precision_at_3 value: 13.344000000000001 - type: precision_at_5 value: 8.988 - type: recall_at_1 value: 23.947 - type: recall_at_10 value: 43.647999999999996 - type: recall_at_100 value: 63.851 - type: recall_at_1000 value: 82.0 - type: recall_at_3 value: 34.288000000000004 - type: recall_at_5 value: 38.117000000000004 - type: map_at_1 value: 16.197 - type: map_at_10 value: 22.968 - type: map_at_100 value: 24.095 - type: map_at_1000 value: 24.217 - type: map_at_3 value: 20.771 - type: map_at_5 value: 21.995 - type: mrr_at_1 value: 19.511 - type: mrr_at_10 value: 26.55 - type: mrr_at_100 value: 27.500999999999998 - type: mrr_at_1000 value: 27.578999999999997 - type: mrr_at_3 value: 24.421 - type: mrr_at_5 value: 25.604 - type: ndcg_at_1 value: 19.511 - type: ndcg_at_10 value: 27.386 - type: ndcg_at_100 value: 32.828 - type: ndcg_at_1000 value: 35.739 - type: ndcg_at_3 value: 23.405 - type: ndcg_at_5 value: 25.255 - type: precision_at_1 value: 19.511 - type: precision_at_10 value: 5.017 - type: precision_at_100 value: 0.91 - type: precision_at_1000 value: 0.133 - type: precision_at_3 value: 11.023 - type: precision_at_5 value: 8.025 - type: recall_at_1 value: 16.197 - type: recall_at_10 value: 37.09 - type: recall_at_100 value: 61.778 - type: recall_at_1000 value: 82.56599999999999 - type: recall_at_3 value: 26.034000000000002 - type: recall_at_5 value: 30.762 - type: map_at_1 value: 25.41 - type: map_at_10 value: 33.655 - type: map_at_100 value: 34.892 - type: map_at_1000 value: 34.995 - type: map_at_3 value: 30.94 - type: map_at_5 value: 32.303 - type: mrr_at_1 value: 29.477999999999998 - type: mrr_at_10 value: 37.443 - type: mrr_at_100 value: 38.383 - type: mrr_at_1000 value: 38.440000000000005 - type: mrr_at_3 value: 34.949999999999996 - type: mrr_at_5 value: 36.228 - type: ndcg_at_1 value: 29.477999999999998 - type: ndcg_at_10 value: 38.769 - type: ndcg_at_100 value: 44.245000000000005 - type: ndcg_at_1000 value: 46.593 - type: ndcg_at_3 value: 33.623 - type: ndcg_at_5 value: 35.766 - type: precision_at_1 value: 29.477999999999998 - type: precision_at_10 value: 6.455 - type: precision_at_100 value: 1.032 - type: precision_at_1000 value: 0.135 - type: precision_at_3 value: 14.893999999999998 - type: precision_at_5 value: 10.485 - type: recall_at_1 value: 25.41 - type: recall_at_10 value: 50.669 - type: recall_at_100 value: 74.084 - type: recall_at_1000 value: 90.435 - type: recall_at_3 value: 36.679 - type: recall_at_5 value: 41.94 - type: map_at_1 value: 23.339 - type: map_at_10 value: 31.852000000000004 - type: map_at_100 value: 33.411 - type: map_at_1000 value: 33.62 - type: map_at_3 value: 28.929 - type: map_at_5 value: 30.542 - type: mrr_at_1 value: 28.063 - type: mrr_at_10 value: 36.301 - type: mrr_at_100 value: 37.288 - type: mrr_at_1000 value: 37.349 - type: mrr_at_3 value: 33.663 - type: mrr_at_5 value: 35.165 - type: ndcg_at_1 value: 28.063 - type: ndcg_at_10 value: 37.462 - type: ndcg_at_100 value: 43.620999999999995 - type: ndcg_at_1000 value: 46.211 - type: ndcg_at_3 value: 32.68 - type: ndcg_at_5 value: 34.981 - type: precision_at_1 value: 28.063 - type: precision_at_10 value: 7.1739999999999995 - type: precision_at_100 value: 1.486 - type: precision_at_1000 value: 0.23500000000000001 - type: precision_at_3 value: 15.217 - type: precision_at_5 value: 11.265 - type: recall_at_1 value: 23.339 - type: recall_at_10 value: 48.376999999999995 - type: recall_at_100 value: 76.053 - type: recall_at_1000 value: 92.455 - type: recall_at_3 value: 34.735 - type: recall_at_5 value: 40.71 - type: map_at_1 value: 18.925 - type: map_at_10 value: 26.017000000000003 - type: map_at_100 value: 27.034000000000002 - type: map_at_1000 value: 27.156000000000002 - type: map_at_3 value: 23.604 - type: map_at_5 value: 24.75 - type: mrr_at_1 value: 20.333000000000002 - type: mrr_at_10 value: 27.915 - type: mrr_at_100 value: 28.788000000000004 - type: mrr_at_1000 value: 28.877999999999997 - type: mrr_at_3 value: 25.446999999999996 - type: mrr_at_5 value: 26.648 - type: ndcg_at_1 value: 20.333000000000002 - type: ndcg_at_10 value: 30.673000000000002 - type: ndcg_at_100 value: 35.618 - type: ndcg_at_1000 value: 38.517 - type: ndcg_at_3 value: 25.71 - type: ndcg_at_5 value: 27.679 - type: precision_at_1 value: 20.333000000000002 - type: precision_at_10 value: 4.9910000000000005 - type: precision_at_100 value: 0.8130000000000001 - type: precision_at_1000 value: 0.117 - type: precision_at_3 value: 11.029 - type: precision_at_5 value: 7.8740000000000006 - type: recall_at_1 value: 18.925 - type: recall_at_10 value: 43.311 - type: recall_at_100 value: 66.308 - type: recall_at_1000 value: 87.49 - type: recall_at_3 value: 29.596 - type: recall_at_5 value: 34.245 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: climate-fever config: default split: test revision: None metrics: - type: map_at_1 value: 13.714 - type: map_at_10 value: 23.194 - type: map_at_100 value: 24.976000000000003 - type: map_at_1000 value: 25.166 - type: map_at_3 value: 19.709 - type: map_at_5 value: 21.523999999999997 - type: mrr_at_1 value: 30.619000000000003 - type: mrr_at_10 value: 42.563 - type: mrr_at_100 value: 43.386 - type: mrr_at_1000 value: 43.423 - type: mrr_at_3 value: 39.555 - type: mrr_at_5 value: 41.268 - type: ndcg_at_1 value: 30.619000000000003 - type: ndcg_at_10 value: 31.836 - type: ndcg_at_100 value: 38.652 - type: ndcg_at_1000 value: 42.088 - type: ndcg_at_3 value: 26.733 - type: ndcg_at_5 value: 28.435 - type: precision_at_1 value: 30.619000000000003 - type: precision_at_10 value: 9.751999999999999 - type: precision_at_100 value: 1.71 - type: precision_at_1000 value: 0.23500000000000001 - type: precision_at_3 value: 19.935 - type: precision_at_5 value: 14.984 - type: recall_at_1 value: 13.714 - type: recall_at_10 value: 37.26 - type: recall_at_100 value: 60.546 - type: recall_at_1000 value: 79.899 - type: recall_at_3 value: 24.325 - type: recall_at_5 value: 29.725 - task: type: Retrieval dataset: name: MTEB DBPedia type: dbpedia-entity config: default split: test revision: None metrics: - type: map_at_1 value: 8.462 - type: map_at_10 value: 18.637 - type: map_at_100 value: 26.131999999999998 - type: map_at_1000 value: 27.607 - type: map_at_3 value: 13.333 - type: map_at_5 value: 15.654000000000002 - type: mrr_at_1 value: 66.25 - type: mrr_at_10 value: 74.32600000000001 - type: mrr_at_100 value: 74.60900000000001 - type: mrr_at_1000 value: 74.62 - type: mrr_at_3 value: 72.667 - type: mrr_at_5 value: 73.817 - type: ndcg_at_1 value: 53.87499999999999 - type: ndcg_at_10 value: 40.028999999999996 - type: ndcg_at_100 value: 44.199 - type: ndcg_at_1000 value: 51.629999999999995 - type: ndcg_at_3 value: 44.113 - type: ndcg_at_5 value: 41.731 - type: precision_at_1 value: 66.25 - type: precision_at_10 value: 31.900000000000002 - type: precision_at_100 value: 10.043000000000001 - type: precision_at_1000 value: 1.926 - type: precision_at_3 value: 47.417 - type: precision_at_5 value: 40.65 - type: recall_at_1 value: 8.462 - type: recall_at_10 value: 24.293 - type: recall_at_100 value: 50.146 - type: recall_at_1000 value: 74.034 - type: recall_at_3 value: 14.967 - type: recall_at_5 value: 18.682000000000002 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 47.84499999999999 - type: f1 value: 42.48106691979349 - task: type: Retrieval dataset: name: MTEB FEVER type: fever config: default split: test revision: None metrics: - type: map_at_1 value: 74.034 - type: map_at_10 value: 82.76 - type: map_at_100 value: 82.968 - type: map_at_1000 value: 82.98299999999999 - type: map_at_3 value: 81.768 - type: map_at_5 value: 82.418 - type: mrr_at_1 value: 80.048 - type: mrr_at_10 value: 87.64999999999999 - type: mrr_at_100 value: 87.712 - type: mrr_at_1000 value: 87.713 - type: mrr_at_3 value: 87.01100000000001 - type: mrr_at_5 value: 87.466 - type: ndcg_at_1 value: 80.048 - type: ndcg_at_10 value: 86.643 - type: ndcg_at_100 value: 87.361 - type: ndcg_at_1000 value: 87.606 - type: ndcg_at_3 value: 85.137 - type: ndcg_at_5 value: 86.016 - type: precision_at_1 value: 80.048 - type: precision_at_10 value: 10.372 - type: precision_at_100 value: 1.093 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 32.638 - type: precision_at_5 value: 20.177 - type: recall_at_1 value: 74.034 - type: recall_at_10 value: 93.769 - type: recall_at_100 value: 96.569 - type: recall_at_1000 value: 98.039 - type: recall_at_3 value: 89.581 - type: recall_at_5 value: 91.906 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: fiqa config: default split: test revision: None metrics: - type: map_at_1 value: 20.5 - type: map_at_10 value: 32.857 - type: map_at_100 value: 34.589 - type: map_at_1000 value: 34.778 - type: map_at_3 value: 29.160999999999998 - type: map_at_5 value: 31.033 - type: mrr_at_1 value: 40.123 - type: mrr_at_10 value: 48.776 - type: mrr_at_100 value: 49.495 - type: mrr_at_1000 value: 49.539 - type: mrr_at_3 value: 46.605000000000004 - type: mrr_at_5 value: 47.654 - type: ndcg_at_1 value: 40.123 - type: ndcg_at_10 value: 40.343 - type: ndcg_at_100 value: 46.56 - type: ndcg_at_1000 value: 49.777 - type: ndcg_at_3 value: 37.322 - type: ndcg_at_5 value: 37.791000000000004 - type: precision_at_1 value: 40.123 - type: precision_at_10 value: 11.08 - type: precision_at_100 value: 1.752 - type: precision_at_1000 value: 0.232 - type: precision_at_3 value: 24.897 - type: precision_at_5 value: 17.809 - type: recall_at_1 value: 20.5 - type: recall_at_10 value: 46.388 - type: recall_at_100 value: 69.552 - type: recall_at_1000 value: 89.011 - type: recall_at_3 value: 33.617999999999995 - type: recall_at_5 value: 38.211 - task: type: Retrieval dataset: name: MTEB HotpotQA type: hotpotqa config: default split: test revision: None metrics: - type: map_at_1 value: 39.135999999999996 - type: map_at_10 value: 61.673 - type: map_at_100 value: 62.562 - type: map_at_1000 value: 62.62 - type: map_at_3 value: 58.467999999999996 - type: map_at_5 value: 60.463 - type: mrr_at_1 value: 78.271 - type: mrr_at_10 value: 84.119 - type: mrr_at_100 value: 84.29299999999999 - type: mrr_at_1000 value: 84.299 - type: mrr_at_3 value: 83.18900000000001 - type: mrr_at_5 value: 83.786 - type: ndcg_at_1 value: 78.271 - type: ndcg_at_10 value: 69.935 - type: ndcg_at_100 value: 73.01299999999999 - type: ndcg_at_1000 value: 74.126 - type: ndcg_at_3 value: 65.388 - type: ndcg_at_5 value: 67.906 - type: precision_at_1 value: 78.271 - type: precision_at_10 value: 14.562 - type: precision_at_100 value: 1.6969999999999998 - type: precision_at_1000 value: 0.184 - type: precision_at_3 value: 41.841 - type: precision_at_5 value: 27.087 - type: recall_at_1 value: 39.135999999999996 - type: recall_at_10 value: 72.809 - type: recall_at_100 value: 84.86200000000001 - type: recall_at_1000 value: 92.208 - type: recall_at_3 value: 62.76199999999999 - type: recall_at_5 value: 67.718 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 90.60600000000001 - type: ap value: 86.6579587804335 - type: f1 value: 90.5938853929307 - task: type: Retrieval dataset: name: MTEB MSMARCO type: msmarco config: default split: dev revision: None metrics: - type: map_at_1 value: 21.852 - type: map_at_10 value: 33.982 - type: map_at_100 value: 35.116 - type: map_at_1000 value: 35.167 - type: map_at_3 value: 30.134 - type: map_at_5 value: 32.340999999999994 - type: mrr_at_1 value: 22.479 - type: mrr_at_10 value: 34.594 - type: mrr_at_100 value: 35.672 - type: mrr_at_1000 value: 35.716 - type: mrr_at_3 value: 30.84 - type: mrr_at_5 value: 32.998 - type: ndcg_at_1 value: 22.493 - type: ndcg_at_10 value: 40.833000000000006 - type: ndcg_at_100 value: 46.357 - type: ndcg_at_1000 value: 47.637 - type: ndcg_at_3 value: 32.995999999999995 - type: ndcg_at_5 value: 36.919000000000004 - type: precision_at_1 value: 22.493 - type: precision_at_10 value: 6.465999999999999 - type: precision_at_100 value: 0.9249999999999999 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 14.030999999999999 - type: precision_at_5 value: 10.413 - type: recall_at_1 value: 21.852 - type: recall_at_10 value: 61.934999999999995 - type: recall_at_100 value: 87.611 - type: recall_at_1000 value: 97.441 - type: recall_at_3 value: 40.583999999999996 - type: recall_at_5 value: 49.992999999999995 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 93.36069311445507 - type: f1 value: 93.16456330371453 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 74.74692202462381 - type: f1 value: 58.17903579421599 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 74.80833893745796 - type: f1 value: 72.70786592684664 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 78.69872225958305 - type: f1 value: 78.61626934504731 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 33.058658628717694 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 30.85561739360599 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 31.290259910144385 - type: mrr value: 32.44223046102856 - task: type: Retrieval dataset: name: MTEB NFCorpus type: nfcorpus config: default split: test revision: None metrics: - type: map_at_1 value: 5.288 - type: map_at_10 value: 12.267999999999999 - type: map_at_100 value: 15.557000000000002 - type: map_at_1000 value: 16.98 - type: map_at_3 value: 8.866 - type: map_at_5 value: 10.418 - type: mrr_at_1 value: 43.653 - type: mrr_at_10 value: 52.681 - type: mrr_at_100 value: 53.315999999999995 - type: mrr_at_1000 value: 53.357 - type: mrr_at_3 value: 51.393 - type: mrr_at_5 value: 51.903999999999996 - type: ndcg_at_1 value: 42.415000000000006 - type: ndcg_at_10 value: 34.305 - type: ndcg_at_100 value: 30.825999999999997 - type: ndcg_at_1000 value: 39.393 - type: ndcg_at_3 value: 39.931 - type: ndcg_at_5 value: 37.519999999999996 - type: precision_at_1 value: 43.653 - type: precision_at_10 value: 25.728 - type: precision_at_100 value: 7.932 - type: precision_at_1000 value: 2.07 - type: precision_at_3 value: 38.184000000000005 - type: precision_at_5 value: 32.879000000000005 - type: recall_at_1 value: 5.288 - type: recall_at_10 value: 16.195 - type: recall_at_100 value: 31.135 - type: recall_at_1000 value: 61.531000000000006 - type: recall_at_3 value: 10.313 - type: recall_at_5 value: 12.754999999999999 - task: type: Retrieval dataset: name: MTEB NQ type: nq config: default split: test revision: None metrics: - type: map_at_1 value: 28.216 - type: map_at_10 value: 42.588 - type: map_at_100 value: 43.702999999999996 - type: map_at_1000 value: 43.739 - type: map_at_3 value: 38.177 - type: map_at_5 value: 40.754000000000005 - type: mrr_at_1 value: 31.866 - type: mrr_at_10 value: 45.189 - type: mrr_at_100 value: 46.056000000000004 - type: mrr_at_1000 value: 46.081 - type: mrr_at_3 value: 41.526999999999994 - type: mrr_at_5 value: 43.704 - type: ndcg_at_1 value: 31.837 - type: ndcg_at_10 value: 50.178 - type: ndcg_at_100 value: 54.98800000000001 - type: ndcg_at_1000 value: 55.812 - type: ndcg_at_3 value: 41.853 - type: ndcg_at_5 value: 46.153 - type: precision_at_1 value: 31.837 - type: precision_at_10 value: 8.43 - type: precision_at_100 value: 1.1119999999999999 - type: precision_at_1000 value: 0.11900000000000001 - type: precision_at_3 value: 19.023 - type: precision_at_5 value: 13.911000000000001 - type: recall_at_1 value: 28.216 - type: recall_at_10 value: 70.8 - type: recall_at_100 value: 91.857 - type: recall_at_1000 value: 97.941 - type: recall_at_3 value: 49.196 - type: recall_at_5 value: 59.072 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: quora config: default split: test revision: None metrics: - type: map_at_1 value: 71.22800000000001 - type: map_at_10 value: 85.115 - type: map_at_100 value: 85.72 - type: map_at_1000 value: 85.737 - type: map_at_3 value: 82.149 - type: map_at_5 value: 84.029 - type: mrr_at_1 value: 81.96 - type: mrr_at_10 value: 88.00200000000001 - type: mrr_at_100 value: 88.088 - type: mrr_at_1000 value: 88.089 - type: mrr_at_3 value: 87.055 - type: mrr_at_5 value: 87.715 - type: ndcg_at_1 value: 82.01 - type: ndcg_at_10 value: 88.78 - type: ndcg_at_100 value: 89.91 - type: ndcg_at_1000 value: 90.013 - type: ndcg_at_3 value: 85.957 - type: ndcg_at_5 value: 87.56 - type: precision_at_1 value: 82.01 - type: precision_at_10 value: 13.462 - type: precision_at_100 value: 1.528 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 37.553 - type: precision_at_5 value: 24.732000000000003 - type: recall_at_1 value: 71.22800000000001 - type: recall_at_10 value: 95.69 - type: recall_at_100 value: 99.531 - type: recall_at_1000 value: 99.98 - type: recall_at_3 value: 87.632 - type: recall_at_5 value: 92.117 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 52.31768034366916 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 60.640266772723606 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: scidocs config: default split: test revision: None metrics: - type: map_at_1 value: 4.7780000000000005 - type: map_at_10 value: 12.299 - type: map_at_100 value: 14.363000000000001 - type: map_at_1000 value: 14.71 - type: map_at_3 value: 8.738999999999999 - type: map_at_5 value: 10.397 - type: mrr_at_1 value: 23.599999999999998 - type: mrr_at_10 value: 34.845 - type: mrr_at_100 value: 35.916 - type: mrr_at_1000 value: 35.973 - type: mrr_at_3 value: 31.7 - type: mrr_at_5 value: 33.535 - type: ndcg_at_1 value: 23.599999999999998 - type: ndcg_at_10 value: 20.522000000000002 - type: ndcg_at_100 value: 28.737000000000002 - type: ndcg_at_1000 value: 34.596 - type: ndcg_at_3 value: 19.542 - type: ndcg_at_5 value: 16.958000000000002 - type: precision_at_1 value: 23.599999999999998 - type: precision_at_10 value: 10.67 - type: precision_at_100 value: 2.259 - type: precision_at_1000 value: 0.367 - type: precision_at_3 value: 18.333 - type: precision_at_5 value: 14.879999999999999 - type: recall_at_1 value: 4.7780000000000005 - type: recall_at_10 value: 21.617 - type: recall_at_100 value: 45.905 - type: recall_at_1000 value: 74.42 - type: recall_at_3 value: 11.148 - type: recall_at_5 value: 15.082999999999998 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 83.22372750297885 - type: cos_sim_spearman value: 79.40972617119405 - type: euclidean_pearson value: 80.6101072020434 - type: euclidean_spearman value: 79.53844217225202 - type: manhattan_pearson value: 80.57265975286111 - type: manhattan_spearman value: 79.46335611792958 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 85.43713315520749 - type: cos_sim_spearman value: 77.44128693329532 - type: euclidean_pearson value: 81.63869928101123 - type: euclidean_spearman value: 77.29512977961515 - type: manhattan_pearson value: 81.63704185566183 - type: manhattan_spearman value: 77.29909412738657 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 81.59451537860527 - type: cos_sim_spearman value: 82.97994638856723 - type: euclidean_pearson value: 82.89478688288412 - type: euclidean_spearman value: 83.58740751053104 - type: manhattan_pearson value: 82.69140840941608 - type: manhattan_spearman value: 83.33665956040555 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 82.00756527711764 - type: cos_sim_spearman value: 81.83560996841379 - type: euclidean_pearson value: 82.07684151976518 - type: euclidean_spearman value: 82.00913052060511 - type: manhattan_pearson value: 82.05690778488794 - type: manhattan_spearman value: 82.02260252019525 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 86.13710262895447 - type: cos_sim_spearman value: 87.26412811156248 - type: euclidean_pearson value: 86.94151453230228 - type: euclidean_spearman value: 87.5363796699571 - type: manhattan_pearson value: 86.86989424083748 - type: manhattan_spearman value: 87.47315940781353 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 83.0230597603627 - type: cos_sim_spearman value: 84.93344499318864 - type: euclidean_pearson value: 84.23754743431141 - type: euclidean_spearman value: 85.09707376597099 - type: manhattan_pearson value: 84.04325160987763 - type: manhattan_spearman value: 84.89353071339909 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 86.75620824563921 - type: cos_sim_spearman value: 87.15065513706398 - type: euclidean_pearson value: 88.26281533633521 - type: euclidean_spearman value: 87.51963738643983 - type: manhattan_pearson value: 88.25599267618065 - type: manhattan_spearman value: 87.58048736047483 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 64.74645319195137 - type: cos_sim_spearman value: 65.29996325037214 - type: euclidean_pearson value: 67.04297794086443 - type: euclidean_spearman value: 65.43841726694343 - type: manhattan_pearson value: 67.39459955690904 - type: manhattan_spearman value: 65.92864704413651 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 84.31291020270801 - type: cos_sim_spearman value: 85.86473738688068 - type: euclidean_pearson value: 85.65537275064152 - type: euclidean_spearman value: 86.13087454209642 - type: manhattan_pearson value: 85.43946955047609 - type: manhattan_spearman value: 85.91568175344916 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 85.93798118350695 - type: mrr value: 95.93536274908824 - task: type: Retrieval dataset: name: MTEB SciFact type: scifact config: default split: test revision: None metrics: - type: map_at_1 value: 57.594 - type: map_at_10 value: 66.81899999999999 - type: map_at_100 value: 67.368 - type: map_at_1000 value: 67.4 - type: map_at_3 value: 64.061 - type: map_at_5 value: 65.47 - type: mrr_at_1 value: 60.667 - type: mrr_at_10 value: 68.219 - type: mrr_at_100 value: 68.655 - type: mrr_at_1000 value: 68.684 - type: mrr_at_3 value: 66.22200000000001 - type: mrr_at_5 value: 67.289 - type: ndcg_at_1 value: 60.667 - type: ndcg_at_10 value: 71.275 - type: ndcg_at_100 value: 73.642 - type: ndcg_at_1000 value: 74.373 - type: ndcg_at_3 value: 66.521 - type: ndcg_at_5 value: 68.581 - type: precision_at_1 value: 60.667 - type: precision_at_10 value: 9.433 - type: precision_at_100 value: 1.0699999999999998 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 25.556 - type: precision_at_5 value: 16.8 - type: recall_at_1 value: 57.594 - type: recall_at_10 value: 83.622 - type: recall_at_100 value: 94.167 - type: recall_at_1000 value: 99.667 - type: recall_at_3 value: 70.64399999999999 - type: recall_at_5 value: 75.983 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.85841584158416 - type: cos_sim_ap value: 96.66996142314342 - type: cos_sim_f1 value: 92.83208020050125 - type: cos_sim_precision value: 93.06532663316584 - type: cos_sim_recall value: 92.60000000000001 - type: dot_accuracy value: 99.85841584158416 - type: dot_ap value: 96.6775307676576 - type: dot_f1 value: 92.69289729177312 - type: dot_precision value: 94.77533960292581 - type: dot_recall value: 90.7 - type: euclidean_accuracy value: 99.86138613861387 - type: euclidean_ap value: 96.6338454403108 - type: euclidean_f1 value: 92.92214357937311 - type: euclidean_precision value: 93.96728016359918 - type: euclidean_recall value: 91.9 - type: manhattan_accuracy value: 99.86237623762376 - type: manhattan_ap value: 96.60370449645053 - type: manhattan_f1 value: 92.91177970423253 - type: manhattan_precision value: 94.7970863683663 - type: manhattan_recall value: 91.10000000000001 - type: max_accuracy value: 99.86237623762376 - type: max_ap value: 96.6775307676576 - type: max_f1 value: 92.92214357937311 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 60.77977058695198 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 35.2725272535638 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 53.64052466362125 - type: mrr value: 54.533067014684654 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 30.677624219206578 - type: cos_sim_spearman value: 30.121368518123447 - type: dot_pearson value: 30.69870088041608 - type: dot_spearman value: 29.61284927093751 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: trec-covid config: default split: test revision: None metrics: - type: map_at_1 value: 0.22 - type: map_at_10 value: 1.855 - type: map_at_100 value: 9.885 - type: map_at_1000 value: 23.416999999999998 - type: map_at_3 value: 0.637 - type: map_at_5 value: 1.024 - type: mrr_at_1 value: 88.0 - type: mrr_at_10 value: 93.067 - type: mrr_at_100 value: 93.067 - type: mrr_at_1000 value: 93.067 - type: mrr_at_3 value: 92.667 - type: mrr_at_5 value: 93.067 - type: ndcg_at_1 value: 82.0 - type: ndcg_at_10 value: 75.899 - type: ndcg_at_100 value: 55.115 - type: ndcg_at_1000 value: 48.368 - type: ndcg_at_3 value: 79.704 - type: ndcg_at_5 value: 78.39699999999999 - type: precision_at_1 value: 88.0 - type: precision_at_10 value: 79.60000000000001 - type: precision_at_100 value: 56.06 - type: precision_at_1000 value: 21.206 - type: precision_at_3 value: 84.667 - type: precision_at_5 value: 83.2 - type: recall_at_1 value: 0.22 - type: recall_at_10 value: 2.078 - type: recall_at_100 value: 13.297 - type: recall_at_1000 value: 44.979 - type: recall_at_3 value: 0.6689999999999999 - type: recall_at_5 value: 1.106 - task: type: Retrieval dataset: name: MTEB Touche2020 type: webis-touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 2.258 - type: map_at_10 value: 10.439 - type: map_at_100 value: 16.89 - type: map_at_1000 value: 18.407999999999998 - type: map_at_3 value: 5.668 - type: map_at_5 value: 7.718 - type: mrr_at_1 value: 32.653 - type: mrr_at_10 value: 51.159 - type: mrr_at_100 value: 51.714000000000006 - type: mrr_at_1000 value: 51.714000000000006 - type: mrr_at_3 value: 47.959 - type: mrr_at_5 value: 50.407999999999994 - type: ndcg_at_1 value: 29.592000000000002 - type: ndcg_at_10 value: 26.037 - type: ndcg_at_100 value: 37.924 - type: ndcg_at_1000 value: 49.126999999999995 - type: ndcg_at_3 value: 30.631999999999998 - type: ndcg_at_5 value: 28.571 - type: precision_at_1 value: 32.653 - type: precision_at_10 value: 22.857 - type: precision_at_100 value: 7.754999999999999 - type: precision_at_1000 value: 1.529 - type: precision_at_3 value: 34.014 - type: precision_at_5 value: 29.796 - type: recall_at_1 value: 2.258 - type: recall_at_10 value: 16.554 - type: recall_at_100 value: 48.439 - type: recall_at_1000 value: 82.80499999999999 - type: recall_at_3 value: 7.283 - type: recall_at_5 value: 10.732 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 69.8858 - type: ap value: 13.835684144362109 - type: f1 value: 53.803351693244586 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 60.50650820599886 - type: f1 value: 60.84357825979259 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 48.52131044852134 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 85.59337187816654 - type: cos_sim_ap value: 73.23925826533437 - type: cos_sim_f1 value: 67.34693877551021 - type: cos_sim_precision value: 62.40432237730752 - type: cos_sim_recall value: 73.13984168865434 - type: dot_accuracy value: 85.31322644096085 - type: dot_ap value: 72.30723963807422 - type: dot_f1 value: 66.47051612112296 - type: dot_precision value: 62.0792305930845 - type: dot_recall value: 71.53034300791556 - type: euclidean_accuracy value: 85.61125350181797 - type: euclidean_ap value: 73.32843720487845 - type: euclidean_f1 value: 67.36549633745895 - type: euclidean_precision value: 64.60755813953489 - type: euclidean_recall value: 70.36939313984169 - type: manhattan_accuracy value: 85.63509566668654 - type: manhattan_ap value: 73.16658488311325 - type: manhattan_f1 value: 67.20597386434349 - type: manhattan_precision value: 63.60424028268551 - type: manhattan_recall value: 71.2401055408971 - type: max_accuracy value: 85.63509566668654 - type: max_ap value: 73.32843720487845 - type: max_f1 value: 67.36549633745895 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 88.33779640625606 - type: cos_sim_ap value: 84.83868375898157 - type: cos_sim_f1 value: 77.16506154017773 - type: cos_sim_precision value: 74.62064005753327 - type: cos_sim_recall value: 79.88912842623961 - type: dot_accuracy value: 88.02732176815307 - type: dot_ap value: 83.95089283763002 - type: dot_f1 value: 76.29635101196631 - type: dot_precision value: 73.31771720613288 - type: dot_recall value: 79.52725592854944 - type: euclidean_accuracy value: 88.44452206310397 - type: euclidean_ap value: 84.98384576824827 - type: euclidean_f1 value: 77.29311047696697 - type: euclidean_precision value: 74.51232583065381 - type: euclidean_recall value: 80.28949799815214 - type: manhattan_accuracy value: 88.47362906042613 - type: manhattan_ap value: 84.91421462218432 - type: manhattan_f1 value: 77.05107637204792 - type: manhattan_precision value: 74.74484256243214 - type: manhattan_recall value: 79.50415768401602 - type: max_accuracy value: 88.47362906042613 - type: max_ap value: 84.98384576824827 - type: max_f1 value: 77.29311047696697 --- <h1 align="center">FlagEmbedding</h1> <h4 align="center"> <p> <a href=#model-list>Model List</a> | <a href=#frequently-asked-questions>FAQ</a> | <a href=#usage>Usage</a> | <a href="#evaluation">Evaluation</a> | <a href="#train">Train</a> | <a href="#contact">Contact</a> | <a href="#citation">Citation</a> | <a href="#license">License</a> <p> </h4> More details please refer to our Github: [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding). If you are looking for a model that supports more languages, longer texts, and other retrieval methods, you can try using [bge-m3](https://huggingface.co/BAAI/bge-m3). [English](README.md) | [中文](https://github.com/FlagOpen/FlagEmbedding/blob/master/README_zh.md) FlagEmbedding focuses on retrieval-augmented LLMs, consisting of the following projects currently: - **Long-Context LLM**: [Activation Beacon](https://github.com/FlagOpen/FlagEmbedding/tree/master/Long_LLM/activation_beacon) - **Fine-tuning of LM** : [LM-Cocktail](https://github.com/FlagOpen/FlagEmbedding/tree/master/LM_Cocktail) - **Dense Retrieval**: [BGE-M3](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3), [LLM Embedder](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/llm_embedder), [BGE Embedding](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/baai_general_embedding) - **Reranker Model**: [BGE Reranker](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker) - **Benchmark**: [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB) ## News - 1/30/2024: Release **BGE-M3**, a new member to BGE model series! M3 stands for **M**ulti-linguality (100+ languages), **M**ulti-granularities (input length up to 8192), **M**ulti-Functionality (unification of dense, lexical, multi-vec/colbert retrieval). It is the first embedding model which supports all three retrieval methods, achieving new SOTA on multi-lingual (MIRACL) and cross-lingual (MKQA) benchmarks. [Technical Report](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/BGE_M3/BGE_M3.pdf) and [Code](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3). :fire: - 1/9/2024: Release [Activation-Beacon](https://github.com/FlagOpen/FlagEmbedding/tree/master/Long_LLM/activation_beacon), an effective, efficient, compatible, and low-cost (training) method to extend the context length of LLM. [Technical Report](https://arxiv.org/abs/2401.03462) :fire: - 12/24/2023: Release **LLaRA**, a LLaMA-7B based dense retriever, leading to state-of-the-art performances on MS MARCO and BEIR. Model and code will be open-sourced. Please stay tuned. [Technical Report](https://arxiv.org/abs/2312.15503) :fire: - 11/23/2023: Release [LM-Cocktail](https://github.com/FlagOpen/FlagEmbedding/tree/master/LM_Cocktail), a method to maintain general capabilities during fine-tuning by merging multiple language models. [Technical Report](https://arxiv.org/abs/2311.13534) :fire: - 10/12/2023: Release [LLM-Embedder](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/llm_embedder), a unified embedding model to support diverse retrieval augmentation needs for LLMs. [Technical Report](https://arxiv.org/pdf/2310.07554.pdf) - 09/15/2023: The [technical report](https://arxiv.org/pdf/2309.07597.pdf) of BGE has been released - 09/15/2023: The [massive training data](https://data.baai.ac.cn/details/BAAI-MTP) of BGE has been released - 09/12/2023: New models: - **New reranker model**: release cross-encoder models `BAAI/bge-reranker-base` and `BAAI/bge-reranker-large`, which are more powerful than embedding model. We recommend to use/fine-tune them to re-rank top-k documents returned by embedding models. - **update embedding model**: release `bge-*-v1.5` embedding model to alleviate the issue of the similarity distribution, and enhance its retrieval ability without instruction. <details> <summary>More</summary> <!-- ### More --> - 09/07/2023: Update [fine-tune code](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md): Add script to mine hard negatives and support adding instruction during fine-tuning. - 08/09/2023: BGE Models are integrated into **Langchain**, you can use it like [this](#using-langchain); C-MTEB **leaderboard** is [available](https://huggingface.co/spaces/mteb/leaderboard). - 08/05/2023: Release base-scale and small-scale models, **best performance among the models of the same size 🤗** - 08/02/2023: Release `bge-large-*`(short for BAAI General Embedding) Models, **rank 1st on MTEB and C-MTEB benchmark!** :tada: :tada: - 08/01/2023: We release the [Chinese Massive Text Embedding Benchmark](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB) (**C-MTEB**), consisting of 31 test dataset. </details> ## Model List `bge` is short for `BAAI general embedding`. | Model | Language | | Description | query instruction for retrieval [1] | |:-------------------------------|:--------:| :--------:| :--------:|:--------:| | [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) | Multilingual | [Inference](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3#usage) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3) | Multi-Functionality(dense retrieval, sparse retrieval, multi-vector(colbert)), Multi-Linguality, and Multi-Granularity(8192 tokens) | | | [BAAI/llm-embedder](https://huggingface.co/BAAI/llm-embedder) | English | [Inference](./FlagEmbedding/llm_embedder/README.md) [Fine-tune](./FlagEmbedding/llm_embedder/README.md) | a unified embedding model to support diverse retrieval augmentation needs for LLMs | See [README](./FlagEmbedding/llm_embedder/README.md) | | [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | | | [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | | | [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-large-zh-v1.5](https://huggingface.co/BAAI/bge-large-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` | | [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` | | [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` | | [BAAI/bge-large-en](https://huggingface.co/BAAI/bge-large-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [MTEB](https://huggingface.co/spaces/mteb/leaderboard) leaderboard | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-base-en](https://huggingface.co/BAAI/bge-base-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-en` | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-small-en](https://huggingface.co/BAAI/bge-small-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) |a small-scale model but with competitive performance | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB) benchmark | `为这个句子生成表示以用于检索相关文章:` | | [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-zh` | `为这个句子生成表示以用于检索相关文章:` | | [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a small-scale model but with competitive performance | `为这个句子生成表示以用于检索相关文章:` | [1\]: If you need to search the relevant passages to a query, we suggest to add the instruction to the query; in other cases, no instruction is needed, just use the original query directly. In all cases, **no instruction** needs to be added to passages. [2\]: Different from embedding model, reranker uses question and document as input and directly output similarity instead of embedding. To balance the accuracy and time cost, cross-encoder is widely used to re-rank top-k documents retrieved by other simple models. For examples, use bge embedding model to retrieve top 100 relevant documents, and then use bge reranker to re-rank the top 100 document to get the final top-3 results. All models have been uploaded to Huggingface Hub, and you can see them at https://huggingface.co/BAAI. If you cannot open the Huggingface Hub, you also can download the models at https://model.baai.ac.cn/models . ## Frequently asked questions <details> <summary>1. How to fine-tune bge embedding model?</summary> <!-- ### How to fine-tune bge embedding model? --> Following this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) to prepare data and fine-tune your model. Some suggestions: - Mine hard negatives following this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune#hard-negatives), which can improve the retrieval performance. - If you pre-train bge on your data, the pre-trained model cannot be directly used to calculate similarity, and it must be fine-tuned with contrastive learning before computing similarity. - If the accuracy of the fine-tuned model is still not high, it is recommended to use/fine-tune the cross-encoder model (bge-reranker) to re-rank top-k results. Hard negatives also are needed to fine-tune reranker. </details> <details> <summary>2. The similarity score between two dissimilar sentences is higher than 0.5</summary> <!-- ### The similarity score between two dissimilar sentences is higher than 0.5 --> **Suggest to use bge v1.5, which alleviates the issue of the similarity distribution.** Since we finetune the models by contrastive learning with a temperature of 0.01, the similarity distribution of the current BGE model is about in the interval \[0.6, 1\]. So a similarity score greater than 0.5 does not indicate that the two sentences are similar. For downstream tasks, such as passage retrieval or semantic similarity, **what matters is the relative order of the scores, not the absolute value.** If you need to filter similar sentences based on a similarity threshold, please select an appropriate similarity threshold based on the similarity distribution on your data (such as 0.8, 0.85, or even 0.9). </details> <details> <summary>3. When does the query instruction need to be used</summary> <!-- ### When does the query instruction need to be used --> For the `bge-*-v1.5`, we improve its retrieval ability when not using instruction. No instruction only has a slight degradation in retrieval performance compared with using instruction. So you can generate embedding without instruction in all cases for convenience. For a retrieval task that uses short queries to find long related documents, it is recommended to add instructions for these short queries. **The best method to decide whether to add instructions for queries is choosing the setting that achieves better performance on your task.** In all cases, the documents/passages do not need to add the instruction. </details> ## Usage ### Usage for Embedding Model Here are some examples for using `bge` models with [FlagEmbedding](#using-flagembedding), [Sentence-Transformers](#using-sentence-transformers), [Langchain](#using-langchain), or [Huggingface Transformers](#using-huggingface-transformers). #### Using FlagEmbedding ``` pip install -U FlagEmbedding ``` If it doesn't work for you, you can see [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md) for more methods to install FlagEmbedding. ```python from FlagEmbedding import FlagModel sentences_1 = ["样例数据-1", "样例数据-2"] sentences_2 = ["样例数据-3", "样例数据-4"] model = FlagModel('BAAI/bge-large-zh-v1.5', query_instruction_for_retrieval="为这个句子生成表示以用于检索相关文章:", use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation embeddings_1 = model.encode(sentences_1) embeddings_2 = model.encode(sentences_2) similarity = embeddings_1 @ embeddings_2.T print(similarity) # for s2p(short query to long passage) retrieval task, suggest to use encode_queries() which will automatically add the instruction to each query # corpus in retrieval task can still use encode() or encode_corpus(), since they don't need instruction queries = ['query_1', 'query_2'] passages = ["样例文档-1", "样例文档-2"] q_embeddings = model.encode_queries(queries) p_embeddings = model.encode(passages) scores = q_embeddings @ p_embeddings.T ``` For the value of the argument `query_instruction_for_retrieval`, see [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list). By default, FlagModel will use all available GPUs when encoding. Please set `os.environ["CUDA_VISIBLE_DEVICES"]` to select specific GPUs. You also can set `os.environ["CUDA_VISIBLE_DEVICES"]=""` to make all GPUs unavailable. #### Using Sentence-Transformers You can also use the `bge` models with [sentence-transformers](https://www.SBERT.net): ``` pip install -U sentence-transformers ``` ```python from sentence_transformers import SentenceTransformer sentences_1 = ["样例数据-1", "样例数据-2"] sentences_2 = ["样例数据-3", "样例数据-4"] model = SentenceTransformer('BAAI/bge-large-zh-v1.5') embeddings_1 = model.encode(sentences_1, normalize_embeddings=True) embeddings_2 = model.encode(sentences_2, normalize_embeddings=True) similarity = embeddings_1 @ embeddings_2.T print(similarity) ``` For s2p(short query to long passage) retrieval task, each short query should start with an instruction (instructions see [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list)). But the instruction is not needed for passages. ```python from sentence_transformers import SentenceTransformer queries = ['query_1', 'query_2'] passages = ["样例文档-1", "样例文档-2"] instruction = "为这个句子生成表示以用于检索相关文章:" model = SentenceTransformer('BAAI/bge-large-zh-v1.5') q_embeddings = model.encode([instruction+q for q in queries], normalize_embeddings=True) p_embeddings = model.encode(passages, normalize_embeddings=True) scores = q_embeddings @ p_embeddings.T ``` #### Using Langchain You can use `bge` in langchain like this: ```python from langchain.embeddings import HuggingFaceBgeEmbeddings model_name = "BAAI/bge-large-en-v1.5" model_kwargs = {'device': 'cuda'} encode_kwargs = {'normalize_embeddings': True} # set True to compute cosine similarity model = HuggingFaceBgeEmbeddings( model_name=model_name, model_kwargs=model_kwargs, encode_kwargs=encode_kwargs, query_instruction="为这个句子生成表示以用于检索相关文章:" ) model.query_instruction = "为这个句子生成表示以用于检索相关文章:" ``` #### Using HuggingFace Transformers With the transformers package, you can use the model like this: First, you pass your input through the transformer model, then you select the last hidden state of the first token (i.e., [CLS]) as the sentence embedding. ```python from transformers import AutoTokenizer, AutoModel import torch # Sentences we want sentence embeddings for sentences = ["样例数据-1", "样例数据-2"] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-large-zh-v1.5') model = AutoModel.from_pretrained('BAAI/bge-large-zh-v1.5') model.eval() # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # for s2p(short query to long passage) retrieval task, add an instruction to query (not add instruction for passages) # encoded_input = tokenizer([instruction + q for q in queries], padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, cls pooling. sentence_embeddings = model_output[0][:, 0] # normalize embeddings sentence_embeddings = torch.nn.functional.normalize(sentence_embeddings, p=2, dim=1) print("Sentence embeddings:", sentence_embeddings) ``` ### Usage for Reranker Different from embedding model, reranker uses question and document as input and directly output similarity instead of embedding. You can get a relevance score by inputting query and passage to the reranker. The reranker is optimized based cross-entropy loss, so the relevance score is not bounded to a specific range. #### Using FlagEmbedding ``` pip install -U FlagEmbedding ``` Get relevance scores (higher scores indicate more relevance): ```python from FlagEmbedding import FlagReranker reranker = FlagReranker('BAAI/bge-reranker-large', use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation score = reranker.compute_score(['query', 'passage']) print(score) scores = reranker.compute_score([['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']]) print(scores) ``` #### Using Huggingface transformers ```python import torch from transformers import AutoModelForSequenceClassification, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-reranker-large') model = AutoModelForSequenceClassification.from_pretrained('BAAI/bge-reranker-large') model.eval() pairs = [['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']] with torch.no_grad(): inputs = tokenizer(pairs, padding=True, truncation=True, return_tensors='pt', max_length=512) scores = model(**inputs, return_dict=True).logits.view(-1, ).float() print(scores) ``` #### Usage of the ONNX files ```python from optimum.onnxruntime import ORTModelForFeatureExtraction # type: ignore import torch from transformers import AutoModel, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-small-en-v1.5') model = AutoModel.from_pretrained('BAAI/bge-small-en-v1.5') model_ort = ORTModelForFeatureExtraction.from_pretrained('BAAI/bge-small-en-v1.5', file_name="onnx/model.onnx") # Sentences we want sentence embeddings for sentences = ["样例数据-1", "样例数据-2"] # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # for s2p(short query to long passage) retrieval task, add an instruction to query (not add instruction for passages) # encoded_input = tokenizer([instruction + q for q in queries], padding=True, truncation=True, return_tensors='pt') model_output_ort = model_ort(**encoded_input) # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # model_output and model_output_ort are identical ``` #### Usage via infinity Its also possible to deploy the onnx files with the [infinity_emb](https://github.com/michaelfeil/infinity) pip package. Recommended is `device="cuda", engine="torch"` with flash attention on gpu, and `device="cpu", engine="optimum"` for onnx inference. ```python import asyncio from infinity_emb import AsyncEmbeddingEngine, EngineArgs sentences = ["Embed this is sentence via Infinity.", "Paris is in France."] engine = AsyncEmbeddingEngine.from_args( EngineArgs(model_name_or_path = "BAAI/bge-small-en-v1.5", device="cpu", engine="optimum" # or engine="torch" )) async def main(): async with engine: embeddings, usage = await engine.embed(sentences=sentences) asyncio.run(main()) ``` ## Evaluation `baai-general-embedding` models achieve **state-of-the-art performance on both MTEB and C-MTEB leaderboard!** For more details and evaluation tools see our [scripts](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md). - **MTEB**: | Model Name | Dimension | Sequence Length | Average (56) | Retrieval (15) |Clustering (11) | Pair Classification (3) | Reranking (4) | STS (10) | Summarization (1) | Classification (12) | |:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:| | [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | 1024 | 512 | **64.23** | **54.29** | 46.08 | 87.12 | 60.03 | 83.11 | 31.61 | 75.97 | | [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | 768 | 512 | 63.55 | 53.25 | 45.77 | 86.55 | 58.86 | 82.4 | 31.07 | 75.53 | | [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | 384 | 512 | 62.17 |51.68 | 43.82 | 84.92 | 58.36 | 81.59 | 30.12 | 74.14 | | [bge-large-en](https://huggingface.co/BAAI/bge-large-en) | 1024 | 512 | 63.98 | 53.9 | 46.98 | 85.8 | 59.48 | 81.56 | 32.06 | 76.21 | | [bge-base-en](https://huggingface.co/BAAI/bge-base-en) | 768 | 512 | 63.36 | 53.0 | 46.32 | 85.86 | 58.7 | 81.84 | 29.27 | 75.27 | | [gte-large](https://huggingface.co/thenlper/gte-large) | 1024 | 512 | 63.13 | 52.22 | 46.84 | 85.00 | 59.13 | 83.35 | 31.66 | 73.33 | | [gte-base](https://huggingface.co/thenlper/gte-base) | 768 | 512 | 62.39 | 51.14 | 46.2 | 84.57 | 58.61 | 82.3 | 31.17 | 73.01 | | [e5-large-v2](https://huggingface.co/intfloat/e5-large-v2) | 1024| 512 | 62.25 | 50.56 | 44.49 | 86.03 | 56.61 | 82.05 | 30.19 | 75.24 | | [bge-small-en](https://huggingface.co/BAAI/bge-small-en) | 384 | 512 | 62.11 | 51.82 | 44.31 | 83.78 | 57.97 | 80.72 | 30.53 | 74.37 | | [instructor-xl](https://huggingface.co/hkunlp/instructor-xl) | 768 | 512 | 61.79 | 49.26 | 44.74 | 86.62 | 57.29 | 83.06 | 32.32 | 61.79 | | [e5-base-v2](https://huggingface.co/intfloat/e5-base-v2) | 768 | 512 | 61.5 | 50.29 | 43.80 | 85.73 | 55.91 | 81.05 | 30.28 | 73.84 | | [gte-small](https://huggingface.co/thenlper/gte-small) | 384 | 512 | 61.36 | 49.46 | 44.89 | 83.54 | 57.7 | 82.07 | 30.42 | 72.31 | | [text-embedding-ada-002](https://platform.openai.com/docs/guides/embeddings) | 1536 | 8192 | 60.99 | 49.25 | 45.9 | 84.89 | 56.32 | 80.97 | 30.8 | 70.93 | | [e5-small-v2](https://huggingface.co/intfloat/e5-base-v2) | 384 | 512 | 59.93 | 49.04 | 39.92 | 84.67 | 54.32 | 80.39 | 31.16 | 72.94 | | [sentence-t5-xxl](https://huggingface.co/sentence-transformers/sentence-t5-xxl) | 768 | 512 | 59.51 | 42.24 | 43.72 | 85.06 | 56.42 | 82.63 | 30.08 | 73.42 | | [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) | 768 | 514 | 57.78 | 43.81 | 43.69 | 83.04 | 59.36 | 80.28 | 27.49 | 65.07 | | [sgpt-bloom-7b1-msmarco](https://huggingface.co/bigscience/sgpt-bloom-7b1-msmarco) | 4096 | 2048 | 57.59 | 48.22 | 38.93 | 81.9 | 55.65 | 77.74 | 33.6 | 66.19 | - **C-MTEB**: We create the benchmark C-MTEB for Chinese text embedding which consists of 31 datasets from 6 tasks. Please refer to [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md) for a detailed introduction. | Model | Embedding dimension | Avg | Retrieval | STS | PairClassification | Classification | Reranking | Clustering | |:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:| | [**BAAI/bge-large-zh-v1.5**](https://huggingface.co/BAAI/bge-large-zh-v1.5) | 1024 | **64.53** | 70.46 | 56.25 | 81.6 | 69.13 | 65.84 | 48.99 | | [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | 768 | 63.13 | 69.49 | 53.72 | 79.75 | 68.07 | 65.39 | 47.53 | | [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | 512 | 57.82 | 61.77 | 49.11 | 70.41 | 63.96 | 60.92 | 44.18 | | [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | 1024 | 64.20 | 71.53 | 54.98 | 78.94 | 68.32 | 65.11 | 48.39 | | [bge-large-zh-noinstruct](https://huggingface.co/BAAI/bge-large-zh-noinstruct) | 1024 | 63.53 | 70.55 | 53 | 76.77 | 68.58 | 64.91 | 50.01 | | [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | 768 | 62.96 | 69.53 | 54.12 | 77.5 | 67.07 | 64.91 | 47.63 | | [multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 1024 | 58.79 | 63.66 | 48.44 | 69.89 | 67.34 | 56.00 | 48.23 | | [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | 512 | 58.27 | 63.07 | 49.45 | 70.35 | 63.64 | 61.48 | 45.09 | | [m3e-base](https://huggingface.co/moka-ai/m3e-base) | 768 | 57.10 | 56.91 | 50.47 | 63.99 | 67.52 | 59.34 | 47.68 | | [m3e-large](https://huggingface.co/moka-ai/m3e-large) | 1024 | 57.05 | 54.75 | 50.42 | 64.3 | 68.2 | 59.66 | 48.88 | | [multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) | 768 | 55.48 | 61.63 | 46.49 | 67.07 | 65.35 | 54.35 | 40.68 | | [multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) | 384 | 55.38 | 59.95 | 45.27 | 66.45 | 65.85 | 53.86 | 45.26 | | [text-embedding-ada-002(OpenAI)](https://platform.openai.com/docs/guides/embeddings/what-are-embeddings) | 1536 | 53.02 | 52.0 | 43.35 | 69.56 | 64.31 | 54.28 | 45.68 | | [luotuo](https://huggingface.co/silk-road/luotuo-bert-medium) | 1024 | 49.37 | 44.4 | 42.78 | 66.62 | 61 | 49.25 | 44.39 | | [text2vec-base](https://huggingface.co/shibing624/text2vec-base-chinese) | 768 | 47.63 | 38.79 | 43.41 | 67.41 | 62.19 | 49.45 | 37.66 | | [text2vec-large](https://huggingface.co/GanymedeNil/text2vec-large-chinese) | 1024 | 47.36 | 41.94 | 44.97 | 70.86 | 60.66 | 49.16 | 30.02 | - **Reranking**: See [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/) for evaluation script. | Model | T2Reranking | T2RerankingZh2En\* | T2RerankingEn2Zh\* | MMarcoReranking | CMedQAv1 | CMedQAv2 | Avg | |:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:| | text2vec-base-multilingual | 64.66 | 62.94 | 62.51 | 14.37 | 48.46 | 48.6 | 50.26 | | multilingual-e5-small | 65.62 | 60.94 | 56.41 | 29.91 | 67.26 | 66.54 | 57.78 | | multilingual-e5-large | 64.55 | 61.61 | 54.28 | 28.6 | 67.42 | 67.92 | 57.4 | | multilingual-e5-base | 64.21 | 62.13 | 54.68 | 29.5 | 66.23 | 66.98 | 57.29 | | m3e-base | 66.03 | 62.74 | 56.07 | 17.51 | 77.05 | 76.76 | 59.36 | | m3e-large | 66.13 | 62.72 | 56.1 | 16.46 | 77.76 | 78.27 | 59.57 | | bge-base-zh-v1.5 | 66.49 | 63.25 | 57.02 | 29.74 | 80.47 | 84.88 | 63.64 | | bge-large-zh-v1.5 | 65.74 | 63.39 | 57.03 | 28.74 | 83.45 | 85.44 | 63.97 | | [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | 67.28 | 63.95 | 60.45 | 35.46 | 81.26 | 84.1 | 65.42 | | [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | 67.6 | 64.03 | 61.44 | 37.16 | 82.15 | 84.18 | 66.09 | \* : T2RerankingZh2En and T2RerankingEn2Zh are cross-language retrieval tasks ## Train ### BAAI Embedding We pre-train the models using [retromae](https://github.com/staoxiao/RetroMAE) and train them on large-scale pairs data using contrastive learning. **You can fine-tune the embedding model on your data following our [examples](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune).** We also provide a [pre-train example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/pretrain). Note that the goal of pre-training is to reconstruct the text, and the pre-trained model cannot be used for similarity calculation directly, it needs to be fine-tuned. More training details for bge see [baai_general_embedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md). ### BGE Reranker Cross-encoder will perform full-attention over the input pair, which is more accurate than embedding model (i.e., bi-encoder) but more time-consuming than embedding model. Therefore, it can be used to re-rank the top-k documents returned by embedding model. We train the cross-encoder on a multilingual pair data, The data format is the same as embedding model, so you can fine-tune it easily following our [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker). More details please refer to [./FlagEmbedding/reranker/README.md](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker) ## Contact If you have any question or suggestion related to this project, feel free to open an issue or pull request. You also can email Shitao Xiao([email protected]) and Zheng Liu([email protected]). ## Citation If you find this repository useful, please consider giving a star :star: and citation ``` @misc{bge_embedding, title={C-Pack: Packaged Resources To Advance General Chinese Embedding}, author={Shitao Xiao and Zheng Liu and Peitian Zhang and Niklas Muennighoff}, year={2023}, eprint={2309.07597}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ## License FlagEmbedding is licensed under the [MIT License](https://github.com/FlagOpen/FlagEmbedding/blob/master/LICENSE). The released models can be used for commercial purposes free of charge.
[ "BEAR", "BIOSSES", "SCIFACT" ]
JunxiongWang/Llama3.1-Mamba2-8B-dpo
JunxiongWang
null
[ "pytorch", "llama", "arxiv:2408.15237", "license:apache-2.0", "region:us" ]
2024-11-17T04:00:18Z
2024-11-17T04:21:04+00:00
14
0
--- license: apache-2.0 --- Zero-shot results when using the [Llama-3.1-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct) as the teacher model, and the [Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) as the initialized model | Task | Llama-3.1-8B-Instruct | Llama3.1-Mamba-8B-distill | Llama3.1-Mamba-8B-dpo | Llama3.1-Mamba2-8B-distill | Llama3.1-Mamba2-8B-dpo | |---------------------|-----------------------|--------------------------|-----------------------|---------------------------|-----------------------| | arc_challenge | 0.552 | 0.5384 | 0.5657 | 0.5265 | 0.5973 | | arc_easy | 0.8178 | 0.8224 | 0.8401 | 0.822 | 0.8481 | | hellaswag | 0.7921 | 0.7591 | 0.7736 | 0.7536 | 0.7969 | | mmlu (0 shot) | 0.6812 | 0.6213 | 0.636 | 0.6101 | 0.5974 | | openbookqa | 0.432 | 0.428 | 0.442 | 0.416 | 0.44 | | piqa | 0.8079 | 0.7933 | 0.8041 | 0.7889 | 0.8003 | | pubmedqa | 0.752 | 0.72 | 0.744 | 0.726 | 0.746 | | race | 0.4478 | 0.4211 | 0.4344 | 0.4211 | 0.4612 | | winogrande | 0.7388 | 0.7277 | 0.738 | 0.7174 | 0.7411 | | truthful | 0.4267 | 0.4002 | 0.4607 | 0.4031 | 0.5022 | ``` @article{junxiongdaniele2024mambainllama, title = {The Mamba in the Llama: Distilling and Accelerating Hybrid Models}, author = {Junxiong Wang and Daniele Paliotta and Avner May and Alexander M. Rush and Tri Dao}, journal = {arXiv preprint arXiv:2408.15237}, year = {2024} } ```
[ "PUBMEDQA" ]
TranVanTri352/MCQ_Paragraph_AI_Model
TranVanTri352
question-answering
[ "transformers", "tf", "t5", "text2text-generation", "code", "question-answering", "vi", "en", "dataset:rajpurkar/squad", "dataset:ehovy/race", "dataset:mandarjoshi/trivia_qa", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-11-23T13:25:33Z
2024-12-31T01:27:34+00:00
14
1
--- base_model: - google-t5/t5-small datasets: - rajpurkar/squad - ehovy/race - mandarjoshi/trivia_qa language: - vi - en library_name: transformers license: apache-2.0 pipeline_tag: question-answering tags: - code --- ```python !pip install flask transformers pyngrok --quiet # install library from flask import Flask, request, jsonify from transformers import T5Tokenizer, T5ForConditionalGeneration import tensorflow from pyngrok import ngrok import json import torch import requests # format output json def parse_questions(raw_json): import re questions = [] question_blocks = re.split(r"Q:\s", raw_json["generated_text"]) for idx, block in enumerate(question_blocks[1:], start=1): # Skip the first part of the question try: question_match = re.search(r"(.+?)\sA:", block) options_match = re.search(r"A:\s(.+?)\sCorrect:", block, re.DOTALL) correct_match = re.search(r"Correct:\s(.+)", block) question = question_match.group(1).strip() if question_match else None options_raw = options_match.group(1).strip() if options_match else None correct_answer = correct_match.group(1).strip() if correct_match else None options = {} if options_raw: option_list = re.split(r"\d\)", options_raw) for i, option in enumerate(option_list[1:], start=1): options[chr(64 + i)] = option.strip() questions.append({ "id": f"Q{idx}", "Question": question, "options": options, "correct_answer": correct_answer }) except Exception as e: print(f"Error parsing block {idx}: {e}") return questions app = Flask(__name__) ngrok.set_auth_token("Ngrok_Auth_Token") public_url = ngrok.connect(5000) print("Ngrok URL:", public_url) model_name = "TranVanTri352/MCQ_Paragraph_AI_Model" tokenizer = T5Tokenizer.from_pretrained(model_name) model = T5ForConditionalGeneration.from_pretrained(model_name, from_tf=True) @app.route('/status', methods=['GET']) def model_status(): try: # Check if the model is loaded if model and tokenizer: return jsonify({ 'status': 'ready', 'model_name': model_name, 'framework': 'transformers', 'device': 'cuda' if torch.cuda.is_available() else 'cpu', 'message': 'Model is loaded and ready for inference.' }), 200 else: return jsonify({ 'status': 'not_ready', 'message': 'Model or tokenizer is not loaded.' }), 500 except Exception as e: return jsonify({ 'status': 'error', 'message': f'Error occurred while checking model status: {str(e)}' }), 500 @app.route('/generate', methods=['POST']) def generate_text(): try: data = request.json if not data or 'text' not in data: return jsonify({'error': 'Invalid input, "text" is required'}), 400 input_text = "Generate a question and multiple answers based on this article: " + data['text'] inputs = tokenizer(input_text, return_tensors="pt", truncation=True, max_length=512) all_outputs = [] # Loop to generate 5 outputs for i in range(5): torch.manual_seed(i) # Set different seeds to increase randomness outputs = model.generate( input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"], max_length=128, do_sample=True, # Turn on random mode temperature=0.9, # Increase randomness top_k=30, # Choose only the word with the highest probability in the top 30 top_p=0.9, # Nucleus sampling repetition_penalty=1.5, # Limit repetition ) output_text = tokenizer.decode(outputs[0], skip_special_tokens=True) all_outputs.append(output_text) final_output = " ".join(all_outputs) # Parse the final output into formatted questions parsed_questions = parse_questions({"generated_text": final_output}) json_data = json.dumps(parsed_questions) return jsonify({'questions': parsed_questions}), 200 except Exception as e: return jsonify({'error': str(e)}), 500 @app.route('/health', methods=['GET']) def health_check(): return jsonify({'status': 'Service is healthy'}), 200 print(f"Public URL: {public_url}") # Flask if __name__ == "__main__": app.run(debug=False) ``` #Test Result #Request /generate ```json { "text": "Originally from Gangseo District, Seoul, Faker was signed by SKT in 2013, and quickly established himself as one of the league's top players. In his debut year, he achieved both an LCK title and a World Championship victory with SKT. From 2014 to 2017, Faker added five more LCK titles to his name, along with two MSI titles in 2016 and 2017, and two additional World Championships in 2015 and 2016. During this time, he also emerged victorious in the All-Star Paris 2014 and the IEM World Championship in 2016. Between 2019 and 2022, Faker secured four more LCK titles, becoming the first player to reach a total of 10. He also represented the South Korean national team at the 2018 Asian Games, earning a silver medal, and the 2022 Asian Games, earning a gold." } ```
[ "MEDAL" ]
avsolatorio/all-MiniLM-L6-v2-MEDI-MTEB-triplet-randproj-64-final
avsolatorio
sentence-similarity
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:1821475", "loss:MultipleNegativesRankingLoss", "en", "arxiv:1908.10084", "arxiv:1705.00652", "base_model:sentence-transformers/all-MiniLM-L6-v2", "base_model:finetune:sentence-transformers/all-MiniLM-L6-v2", "license:apache-2.0", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2024-11-25T20:04:32Z
2024-11-25T20:04:40+00:00
14
0
--- base_model: sentence-transformers/all-MiniLM-L6-v2 language: - en library_name: sentence-transformers license: apache-2.0 metrics: - cosine_accuracy pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:1821475 - loss:MultipleNegativesRankingLoss widget: - source_sentence: Estimating User Location in Social Media with Stacked Denoising Auto-encoders sentences: - 'Domain Adaptation for Large-Scale Sentiment Classification: A Deep Learning Approach' - Conventional sphygmomanometers are being replaced by automated devices; can they be used to accurately calculate ABPI?Thirty-six volunteers (72 legs) attending a vascular clinic had their ankle, brachial blood pressure and ABPIs calculated using each of these 3 methods. (1) Conventional aneuroid BP cuff with hand held doppler. (2) OMRON HEM 705CP portable automated BP monitor. (3) The hand held doppler to determine systolic BP measured by the OMRON.Conventional doppler readings for brachial and ankle pressures were generally higher than those obtained digitally by less than 3 mmHg but this was not statistically significant. This did not translate into a significant difference in ABPIs obtained using all 3 techniques; the correlation coefficient of conventional ABPI with automated ABPI (method 2) was 0.746, this was improved to 0.899 using method 3. The OMRON failed to detect a signal in 16 of the 72 legs, 11 of these legs had ABPIs<0.66. - Deep neural networks based user interface detection for mobile applications using symbol marker - source_sentence: 'Central mesenteric lymph node BER-Ep4+ cells in colorectal cancer: challenge to sentinel node concept?' sentences: - The Lovely Bones (film) the film had many positive messages about life." The Lovely Bones (film) The Lovely Bones is a 2009 supernatural drama film directed by Peter Jackson, and starring Mark Wahlberg, Rachel Weisz, Susan Sarandon, Stanley Tucci, Michael Imperioli, and Saoirse Ronan. The screenplay by Fran Walsh, Philippa Boyens, and Jackson was based on Alice Sebold’s award-winning and bestselling 2002 novel of the same name. It follows a girl who is murdered and watches over her family from the in-between, and is torn between seeking vengeance on her killer and allowing her family to heal. An international co-production between the United States, - Postoperative intracranial hematoma (POIH) is a frequent sequela secondary to cranial surgery. The role of routine early postoperative computed tomography (CT) scanning in the detection of POIH remains controversial. The study was aimed at analyzing the effect of routine early CT scanning after craniotomy for the early detection of POIH.Routine early postoperative CT scanning was performed at our institute, and a retrospective study was conducted to analyze the data. POIH was defined as an intracranial hematoma requiring surgical management.A total of 1,148 patients undergoing craniotomy were included in this study; 28 of these patients developed POIH. The majority of POIH cases (15/28, 54 %) were detected during the first 6 h following craniotomy. A routine CT scan was performed on all included patients but two; however, CT scans detected only 16 POIH cases. During the first 6 h, the rate at which CT scans detected POIH was 1.9 % (15/786); subsequently, the rate decreased to only 0.3 % (1/360; p < 0.05, compared with the rate during the first 6 h). Among patients without clinical manifestations, the rate at which the routine post-craniotomy CT scan detected POIH was only 0.7 % (5/721) (p < 0.05, compared with the incidence of POIH). Finally, among high-risk POIH patients, the POIH-positive rate of routine CT scanning was elevated. - The role of sentinel lymph nodes in colorectal cancer remains unclear.Cryosections from central para-aortic mesenterial lymph nodes were stained using mAb BER-Ep4. Overall survival and distant recurrence were calculated using Kaplan-Meier plots.All patients (n = 48) were free of distant metastases and curatively resected (R0). 23 pN0, 13 pN1 and 12 pN2 stages were found. 21/48 patients (44%) showed BER-Ep4+ cells in their central lymph nodes (7/23 pN0, 8/13 pN1, 6/12 pN2). In 6/23 pN0 patients, BER-Ep4+ cells were also found in locoregional nodes (p = 0.03, Fisher's exact test). pN status predicted overall survival (p = 0.006, Kaplan-Meier curve, log-rank test). An impact was exerted by central mesenteric BER-Ep4+ cells on overall survival (p = 0.009 in pN0 patients, p = 0.07 for all pN) and distant recurrence-free survival (p = 0.001 in pN0 patients, p = 0.007 for all pN). Multivariate analysis showed an independent prognostic effect on overall survival in pN0 patients (p = 0.022). - source_sentence: when did the samsung galaxy s8 come out sentences: - Samsung Galaxy S8 support for Daydream. The Galaxy S8 was one of the first Android phones to support ARCore, Google's augmented reality engine. In February 2018, the official Android 8.0 Oreo update began rolling out to the Samsung Galaxy S8, Samsung Galaxy S8+, and Samsung Galaxy S8 Active. Besides the phone's protective case reportedly cracking and peeling away in under 2 months of use, Dan Seifert of "The Verge" praised the design of the Galaxy S8, describing it as a "stunning device to look at and hold" that was "refined and polished to a literal shine", and adding that it "truly doesn't look - British Raj British Raj The British Raj (; from "rāj", literally, "rule" in Hindustani) was the rule by the British Crown in the Indian subcontinent between 1858 and 1947. The rule is also called Crown rule in India, or direct rule in India. The region under British control was commonly called British India or simply India in contemporaneous usage, and included areas directly administered by the United Kingdom, which were collectively called British India, and those ruled by indigenous rulers, but under British tutelage or paramountcy, and called the princely states. The whole was also informally called the Indian Empire. As India, - Samsung Galaxy S8 Samsung Galaxy S8 The Samsung Galaxy S8, Samsung Galaxy S8+ (shortened to S8 and S8+, respectively) and Samsung Galaxy S8 Active are Android smartphones (with the S8+ being the phablet smartphone) produced by Samsung Electronics as the eighth generation of the Samsung Galaxy S series. The S8 and S8+ were unveiled on 29 March 2017 and directly succeeded the Samsung Galaxy S7 and S7 edge, with a North American release on 21 April 2017 and international rollout throughout April and May. The S8 Active was announced on 8 August 2017 and is exclusive to certain U.S. cellular carriers. The S8 - source_sentence: Can Carrier-Mediated Delivery System Promote the Development of Antisense Imaging? sentences: - 8-track tape month of the vinyl release. The eight-track format became by far the most popular and offered the largest music library of all the tape systems. Eight-track players were fitted as standard equipment in most Rolls-Royce and Bentley cars of the period for sale in Great Britain and worldwide. Optional 8-track players were available in many cars and trucks through the early 1980s. Ampex, based in Elk Grove Village, Illinois, set up a European operation (Ampex Stereo Tapes) in London, England, in 1970 under general manager Gerry Hall, with manufacturing in Nivelles, Belgium, to promote 8-track product (as well as musicassettes) - Heterotopic heart transplantation (HHTx) is a therapeutic option in heart failure patients with fixed elevated pulmonary hypertension. However, survival is poorer in HHTx recipients, and with improving results in continuous flow ventricular assist devices (VADs), many patients can be bridged to allow normalization of pulmonary artery pressures, making them orthotopic heart transplant (OHTx) candidates. Thus, the aim of this study was to analyse the survival of our HHTx cohort and compare them with our VAD bridge patients.A retrospective review of 342 heart transplant patients (315 OHTx and 27 HHTx) performed at our institution over 15 years was compared with 124 bridge-to-transplant VAD patients over the same time period, of whom 69 received an OHTx. Pulmonary artery pressures before and after VAD implant were analysed. Survival was analysed using both univariate and multivariate analyses.HHTx recipients were significantly older, and the donor allografts were older, smaller and had longer ischaemic times than the OHTx cohort. Comparison of the VAD types implanted (pulsatile vs continuous) showed significantly longer time supported on the continuous devices with significantly fewer deaths than the pulsatile devices. The continuous devices were successful in reducing pulmonary artery pressures pretransplant. The HHTx cohort had a significantly poorer survival than the OHTx cohort (P=0.002). Survival on a continuous device and then OHTx was significantly better than either HHTx or pulsatile device support. - We aimed to explore the feasibility of transfection methods for antisense imaging.Antisense oligonucleotides (ASON) targeted to the mRNA of hTERT gene were synthesized and labeled with Technetium-99m and fluorescein isothiocyanate (FITC), respectively. Then, ASON was combined with transfection reagent Lipofectamine 2000 and Xfect(TM), named Lipo-ASON and Xfect-ASON, respectively. After transfection, the labeled ASON was characterized in hNPCs-G3 and hRPE cells. Reverse transcription polymerase chain reaction (RT-PCR) and Western blotting were performed to assay the hTERT mRNA and protein levels after hNPCs-G3 cells were incubated with Lipo-ASON, Xfect-ASON, and naked ASON. In addition, Lipo-ASON, Xfect-ASON, and naked ASON were injected into tumor-bearing mice, and the biodistribution in vivo was performed.The presence of two transfection reagents significantly increased intracellular uptake of radiolabeled ASON in both cell lines compared with naked ASON (p < 0.05). However, there was no significant difference in cellular uptake rates of Lipo-ASON and Xfect-ASON between hNPCs-G3 and hRPE cells. In comparison with naked ASON, the fluorescence intensity was strongly enhanced after binding to transfection reagents. Furthermore, the levels of hTERT mRNA and protein were significantly reduced in cells treated with Lipo-ASON and Xfect-ASON (p < 0.05), but naked ASON had no significant effect on hTERT expression level. The biodistribution study indicated that tumor radioactivity uptake of radiolabeled ASON for naked ASON, Lipo-ASON, and Xfect-ASON group was low and shown no significant difference in vivo. - source_sentence: Does early second-trimester sonography predict adverse perinatal outcomes in monochorionic diamniotic twin pregnancies? sentences: - Calcium and vitamin D are essential nutrients for bone metabolism Vitamin D can either be obtained from dietary sources or cutaneous synthesis. The study was conducted in subtropic weather; therefore, some might believe that the levels of solar radiation would be sufficient in this area.To evaluate calcium and vitamin D supplementation in postmenopausal women with osteoporosis living in a sunny country.A 3-month controlled clinical trial with 64 postmenopausal women with osteoporosis, mean age 62 + or - 8 years. They were randomly assigned to either the supplement group, who received 1,200 mg of calcium carbonate and 400 IU (10 microg) of vitamin D(3,) or the control group. Dietary intake assessment was performed, bone mineral density and body composition were measured, and biochemical markers of bone metabolism were analyzed.Considering all participants at baseline, serum vitamin D was under 75 nmol/l in 91.4% of the participants. The concentration of serum 25(OH)D increased significantly (p = 0.023) after 3 months of supplementation from 46.67 + or - 13.97 to 59.47 + or - 17.50 nmol/l. However, the dose given was limited in effect, and 86.2% of the supplement group did not reach optimal levels of 25(OH)D. Parathyroid hormone was elevated in 22.4% of the study group. After the intervention period, mean parathyroid hormone tended to decrease in the supplement group (p = 0.063). - 'To determine whether intertwin discordant abdominal circumference, femur length, head circumference, and estimated fetal weight sonographic measurements in early second-trimester monochorionic diamniotic twins predict adverse obstetric and neonatal outcomes.We conducted a multicenter retrospective cohort study involving 9 regional perinatal centers in the United States. We examined the records of all monochorionic diamniotic twin pregnancies with two live fetuses at the 16- to 18-week sonographic examination who had serial follow-up sonography until delivery. The intertwin discordance in abdominal circumference, femur length, head circumference, and estimated fetal weight was calculated as the difference between the two fetuses, expressed as a percentage of the larger using the 16- to 18-week sonographic measurements. An adverse composite obstetric outcome was defined as the occurrence of 1 or more of the following in either fetus: intrauterine growth restriction, twin-twin transfusion syndrome, intrauterine fetal death, abnormal growth discordance (≥20% difference), and very preterm birth at or before 28 weeks. An adverse composite neonatal outcome was defined as the occurrence of 1 or more of the following: respiratory distress syndrome, any stage of intraventricular hemorrhage, 5-minute Apgar score less than 7, necrotizing enterocolitis, culture-proven early-onset sepsis, and neonatal death. Receiver operating characteristic and logistic regression-with-generalized estimating equation analyses were constructed.Among the 177 monochorionic diamniotic twin pregnancies analyzed, intertwin abdominal circumference and estimated fetal weight discordances were only predictive of adverse composite obstetric outcomes (areas under the curve, 79% and 80%, respectively). Receiver operating characteristic curves showed that intertwin discordances in abdominal circumference, femur length, head circumference, and estimated fetal weight were not acceptable predictors of twin-twin transfusion syndrome or adverse neonatal outcomes.' - We aimed to investigate our results of carotid endarterectomy operations in symptomatic patients operated by using an intraluminal shunt and without use of an intraluminal shunt in patients with contralateral carotid artery stenosis.We reviewed the results of 144 carotid endarterectomy operations in patients with contralateral carotid artery stenosis from January 2007 to December 2012. These patients were allocated in 2 groups. Group 1 (n = 70) consisted of the patients operated by using an intraluminal shunt and Group 2 (n = 74) consisted of the patients operated without use of an intraluminal shunt. Postoperative neurologic complications were recorded.Temporary neurologic impairment developed in 3 (4.3%) patients postoperatively in group 1 and in 2 (2.7%) patients postoperatively in group 2. This difference was not statistically significant between groups (p = 0.675). None of the patients returned to operation theatre due to excessive bleeding postoperatively. The stroke/death rate was 0.7% in the study group. model-index: - name: all-MiniLM-L6-v2 trained on MEDI-MTEB triplets results: - task: type: triplet name: Triplet dataset: name: medi mteb dev type: medi-mteb-dev metrics: - type: cosine_accuracy value: 0.9152662981006076 name: Cosine Accuracy --- # all-MiniLM-L6-v2 trained on MEDI-MTEB triplets This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) on the NQ, pubmed, specter_train_triples, S2ORC_citations_abstracts, fever, gooaq_pairs, codesearchnet, wikihow, WikiAnswers, eli5_question_answer, amazon-qa, medmcqa, zeroshot, TriviaQA_pairs, PAQ_pairs, stackexchange_duplicate_questions_title-body_title-body, trex, flickr30k_captions, hotpotqa, task671_ambigqa_text_generation, task061_ropes_answer_generation, task285_imdb_answer_generation, task905_hate_speech_offensive_classification, task566_circa_classification, task184_snli_entailment_to_neutral_text_modification, task280_stereoset_classification_stereotype_type, task1599_smcalflow_classification, task1384_deal_or_no_dialog_classification, task591_sciq_answer_generation, task823_peixian-rtgender_sentiment_analysis, task023_cosmosqa_question_generation, task900_freebase_qa_category_classification, task924_event2mind_word_generation, task152_tomqa_find_location_easy_noise, task1368_healthfact_sentence_generation, task1661_super_glue_classification, task1187_politifact_classification, task1728_web_nlg_data_to_text, task112_asset_simple_sentence_identification, task1340_msr_text_compression_compression, task072_abductivenli_answer_generation, task1504_hatexplain_answer_generation, task684_online_privacy_policy_text_information_type_generation, task1290_xsum_summarization, task075_squad1.1_answer_generation, task1587_scifact_classification, task384_socialiqa_question_classification, task1555_scitail_answer_generation, task1532_daily_dialog_emotion_classification, task239_tweetqa_answer_generation, task596_mocha_question_generation, task1411_dart_subject_identification, task1359_numer_sense_answer_generation, task329_gap_classification, task220_rocstories_title_classification, task316_crows-pairs_classification_stereotype, task495_semeval_headline_classification, task1168_brown_coarse_pos_tagging, task348_squad2.0_unanswerable_question_generation, task049_multirc_questions_needed_to_answer, task1534_daily_dialog_question_classification, task322_jigsaw_classification_threat, task295_semeval_2020_task4_commonsense_reasoning, task186_snli_contradiction_to_entailment_text_modification, task034_winogrande_question_modification_object, task160_replace_letter_in_a_sentence, task469_mrqa_answer_generation, task105_story_cloze-rocstories_sentence_generation, task649_race_blank_question_generation, task1536_daily_dialog_happiness_classification, task683_online_privacy_policy_text_purpose_answer_generation, task024_cosmosqa_answer_generation, task584_udeps_eng_fine_pos_tagging, task066_timetravel_binary_consistency_classification, task413_mickey_en_sentence_perturbation_generation, task182_duorc_question_generation, task028_drop_answer_generation, task1601_webquestions_answer_generation, task1295_adversarial_qa_question_answering, task201_mnli_neutral_classification, task038_qasc_combined_fact, task293_storycommonsense_emotion_text_generation, task572_recipe_nlg_text_generation, task517_emo_classify_emotion_of_dialogue, task382_hybridqa_answer_generation, task176_break_decompose_questions, task1291_multi_news_summarization, task155_count_nouns_verbs, task031_winogrande_question_generation_object, task279_stereoset_classification_stereotype, task1336_peixian_equity_evaluation_corpus_gender_classifier, task508_scruples_dilemmas_more_ethical_isidentifiable, task518_emo_different_dialogue_emotions, task077_splash_explanation_to_sql, task923_event2mind_classifier, task470_mrqa_question_generation, task638_multi_woz_classification, task1412_web_questions_question_answering, task847_pubmedqa_question_generation, task678_ollie_actual_relationship_answer_generation, task290_tellmewhy_question_answerability, task575_air_dialogue_classification, task189_snli_neutral_to_contradiction_text_modification, task026_drop_question_generation, task162_count_words_starting_with_letter, task079_conala_concat_strings, task610_conllpp_ner, task046_miscellaneous_question_typing, task197_mnli_domain_answer_generation, task1325_qa_zre_question_generation_on_subject_relation, task430_senteval_subject_count, task672_nummersense, task402_grailqa_paraphrase_generation, task904_hate_speech_offensive_classification, task192_hotpotqa_sentence_generation, task069_abductivenli_classification, task574_air_dialogue_sentence_generation, task187_snli_entailment_to_contradiction_text_modification, task749_glucose_reverse_cause_emotion_detection, task1552_scitail_question_generation, task750_aqua_multiple_choice_answering, task327_jigsaw_classification_toxic, task1502_hatexplain_classification, task328_jigsaw_classification_insult, task304_numeric_fused_head_resolution, task1293_kilt_tasks_hotpotqa_question_answering, task216_rocstories_correct_answer_generation, task1326_qa_zre_question_generation_from_answer, task1338_peixian_equity_evaluation_corpus_sentiment_classifier, task1729_personachat_generate_next, task1202_atomic_classification_xneed, task400_paws_paraphrase_classification, task502_scruples_anecdotes_whoiswrong_verification, task088_identify_typo_verification, task221_rocstories_two_choice_classification, task200_mnli_entailment_classification, task074_squad1.1_question_generation, task581_socialiqa_question_generation, task1186_nne_hrngo_classification, task898_freebase_qa_answer_generation, task1408_dart_similarity_classification, task168_strategyqa_question_decomposition, task1357_xlsum_summary_generation, task390_torque_text_span_selection, task165_mcscript_question_answering_commonsense, task1533_daily_dialog_formal_classification, task002_quoref_answer_generation, task1297_qasc_question_answering, task305_jeopardy_answer_generation_normal, task029_winogrande_full_object, task1327_qa_zre_answer_generation_from_question, task326_jigsaw_classification_obscene, task1542_every_ith_element_from_starting, task570_recipe_nlg_ner_generation, task1409_dart_text_generation, task401_numeric_fused_head_reference, task846_pubmedqa_classification, task1712_poki_classification, task344_hybridqa_answer_generation, task875_emotion_classification, task1214_atomic_classification_xwant, task106_scruples_ethical_judgment, task238_iirc_answer_from_passage_answer_generation, task1391_winogrande_easy_answer_generation, task195_sentiment140_classification, task163_count_words_ending_with_letter, task579_socialiqa_classification, task569_recipe_nlg_text_generation, task1602_webquestion_question_genreation, task747_glucose_cause_emotion_detection, task219_rocstories_title_answer_generation, task178_quartz_question_answering, task103_facts2story_long_text_generation, task301_record_question_generation, task1369_healthfact_sentence_generation, task515_senteval_odd_word_out, task496_semeval_answer_generation, task1658_billsum_summarization, task1204_atomic_classification_hinderedby, task1392_superglue_multirc_answer_verification, task306_jeopardy_answer_generation_double, task1286_openbookqa_question_answering, task159_check_frequency_of_words_in_sentence_pair, task151_tomqa_find_location_easy_clean, task323_jigsaw_classification_sexually_explicit, task037_qasc_generate_related_fact, task027_drop_answer_type_generation, task1596_event2mind_text_generation_2, task141_odd-man-out_classification_category, task194_duorc_answer_generation, task679_hope_edi_english_text_classification, task246_dream_question_generation, task1195_disflqa_disfluent_to_fluent_conversion, task065_timetravel_consistent_sentence_classification, task351_winomt_classification_gender_identifiability_anti, task580_socialiqa_answer_generation, task583_udeps_eng_coarse_pos_tagging, task202_mnli_contradiction_classification, task222_rocstories_two_chioce_slotting_classification, task498_scruples_anecdotes_whoiswrong_classification, task067_abductivenli_answer_generation, task616_cola_classification, task286_olid_offense_judgment, task188_snli_neutral_to_entailment_text_modification, task223_quartz_explanation_generation, task820_protoqa_answer_generation, task196_sentiment140_answer_generation, task1678_mathqa_answer_selection, task349_squad2.0_answerable_unanswerable_question_classification, task154_tomqa_find_location_hard_noise, task333_hateeval_classification_hate_en, task235_iirc_question_from_subtext_answer_generation, task1554_scitail_classification, task210_logic2text_structured_text_generation, task035_winogrande_question_modification_person, task230_iirc_passage_classification, task1356_xlsum_title_generation, task1726_mathqa_correct_answer_generation, task302_record_classification, task380_boolq_yes_no_question, task212_logic2text_classification, task748_glucose_reverse_cause_event_detection, task834_mathdataset_classification, task350_winomt_classification_gender_identifiability_pro, task191_hotpotqa_question_generation, task236_iirc_question_from_passage_answer_generation, task217_rocstories_ordering_answer_generation, task568_circa_question_generation, task614_glucose_cause_event_detection, task361_spolin_yesand_prompt_response_classification, task421_persent_sentence_sentiment_classification, task203_mnli_sentence_generation, task420_persent_document_sentiment_classification, task153_tomqa_find_location_hard_clean, task346_hybridqa_classification, task1211_atomic_classification_hassubevent, task360_spolin_yesand_response_generation, task510_reddit_tifu_title_summarization, task511_reddit_tifu_long_text_summarization, task345_hybridqa_answer_generation, task270_csrg_counterfactual_context_generation, task307_jeopardy_answer_generation_final, task001_quoref_question_generation, task089_swap_words_verification, task1196_atomic_classification_oeffect, task080_piqa_answer_generation, task1598_nyc_long_text_generation, task240_tweetqa_question_generation, task615_moviesqa_answer_generation, task1347_glue_sts-b_similarity_classification, task114_is_the_given_word_longest, task292_storycommonsense_character_text_generation, task115_help_advice_classification, task431_senteval_object_count, task1360_numer_sense_multiple_choice_qa_generation, task177_para-nmt_paraphrasing, task132_dais_text_modification, task269_csrg_counterfactual_story_generation, task233_iirc_link_exists_classification, task161_count_words_containing_letter, task1205_atomic_classification_isafter, task571_recipe_nlg_ner_generation, task1292_yelp_review_full_text_categorization, task428_senteval_inversion, task311_race_question_generation, task429_senteval_tense, task403_creak_commonsense_inference, task929_products_reviews_classification, task582_naturalquestion_answer_generation, task237_iirc_answer_from_subtext_answer_generation, task050_multirc_answerability, task184_break_generate_question, task669_ambigqa_answer_generation, task169_strategyqa_sentence_generation, task500_scruples_anecdotes_title_generation, task241_tweetqa_classification, task1345_glue_qqp_question_paraprashing, task218_rocstories_swap_order_answer_generation, task613_politifact_text_generation, task1167_penn_treebank_coarse_pos_tagging, task1422_mathqa_physics, task247_dream_answer_generation, task199_mnli_classification, task164_mcscript_question_answering_text, task1541_agnews_classification, task516_senteval_conjoints_inversion, task294_storycommonsense_motiv_text_generation, task501_scruples_anecdotes_post_type_verification, task213_rocstories_correct_ending_classification, task821_protoqa_question_generation, task493_review_polarity_classification, task308_jeopardy_answer_generation_all, task1595_event2mind_text_generation_1, task040_qasc_question_generation, task231_iirc_link_classification, task1727_wiqa_what_is_the_effect, task578_curiosity_dialogs_answer_generation, task310_race_classification, task309_race_answer_generation, task379_agnews_topic_classification, task030_winogrande_full_person, task1540_parsed_pdfs_summarization, task039_qasc_find_overlapping_words, task1206_atomic_classification_isbefore, task157_count_vowels_and_consonants, task339_record_answer_generation, task453_swag_answer_generation, task848_pubmedqa_classification, task673_google_wellformed_query_classification, task676_ollie_relationship_answer_generation, task268_casehold_legal_answer_generation, task844_financial_phrasebank_classification, task330_gap_answer_generation, task595_mocha_answer_generation, task1285_kpa_keypoint_matching, task234_iirc_passage_line_answer_generation, task494_review_polarity_answer_generation, task670_ambigqa_question_generation, task289_gigaword_summarization, npr, nli, SimpleWiki, amazon_review_2018, ccnews_title_text, agnews, xsum, msmarco, yahoo_answers_title_answer, squad_pairs, wow, mteb-amazon_counterfactual-avs_triplets, mteb-amazon_massive_intent-avs_triplets, mteb-amazon_massive_scenario-avs_triplets, mteb-amazon_reviews_multi-avs_triplets, mteb-banking77-avs_triplets, mteb-emotion-avs_triplets, mteb-imdb-avs_triplets, mteb-mtop_domain-avs_triplets, mteb-mtop_intent-avs_triplets, mteb-toxic_conversations_50k-avs_triplets, mteb-tweet_sentiment_extraction-avs_triplets and covid-bing-query-gpt4-avs_triplets datasets. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision fa97f6e7cb1a59073dff9e6b13e2715cf7475ac9 --> - **Maximum Sequence Length:** 256 tokens - **Output Dimensionality:** 768 dimensions - **Similarity Function:** Cosine Similarity - **Training Datasets:** - NQ - pubmed - specter_train_triples - S2ORC_citations_abstracts - fever - gooaq_pairs - codesearchnet - wikihow - WikiAnswers - eli5_question_answer - amazon-qa - medmcqa - zeroshot - TriviaQA_pairs - PAQ_pairs - stackexchange_duplicate_questions_title-body_title-body - trex - flickr30k_captions - hotpotqa - task671_ambigqa_text_generation - task061_ropes_answer_generation - task285_imdb_answer_generation - task905_hate_speech_offensive_classification - task566_circa_classification - task184_snli_entailment_to_neutral_text_modification - task280_stereoset_classification_stereotype_type - task1599_smcalflow_classification - task1384_deal_or_no_dialog_classification - task591_sciq_answer_generation - task823_peixian-rtgender_sentiment_analysis - task023_cosmosqa_question_generation - task900_freebase_qa_category_classification - task924_event2mind_word_generation - task152_tomqa_find_location_easy_noise - task1368_healthfact_sentence_generation - task1661_super_glue_classification - task1187_politifact_classification - task1728_web_nlg_data_to_text - task112_asset_simple_sentence_identification - task1340_msr_text_compression_compression - task072_abductivenli_answer_generation - task1504_hatexplain_answer_generation - task684_online_privacy_policy_text_information_type_generation - task1290_xsum_summarization - task075_squad1.1_answer_generation - task1587_scifact_classification - task384_socialiqa_question_classification - task1555_scitail_answer_generation - task1532_daily_dialog_emotion_classification - task239_tweetqa_answer_generation - task596_mocha_question_generation - task1411_dart_subject_identification - task1359_numer_sense_answer_generation - task329_gap_classification - task220_rocstories_title_classification - task316_crows-pairs_classification_stereotype - task495_semeval_headline_classification - task1168_brown_coarse_pos_tagging - task348_squad2.0_unanswerable_question_generation - task049_multirc_questions_needed_to_answer - task1534_daily_dialog_question_classification - task322_jigsaw_classification_threat - task295_semeval_2020_task4_commonsense_reasoning - task186_snli_contradiction_to_entailment_text_modification - task034_winogrande_question_modification_object - task160_replace_letter_in_a_sentence - task469_mrqa_answer_generation - task105_story_cloze-rocstories_sentence_generation - task649_race_blank_question_generation - task1536_daily_dialog_happiness_classification - task683_online_privacy_policy_text_purpose_answer_generation - task024_cosmosqa_answer_generation - task584_udeps_eng_fine_pos_tagging - task066_timetravel_binary_consistency_classification - task413_mickey_en_sentence_perturbation_generation - task182_duorc_question_generation - task028_drop_answer_generation - task1601_webquestions_answer_generation - task1295_adversarial_qa_question_answering - task201_mnli_neutral_classification - task038_qasc_combined_fact - task293_storycommonsense_emotion_text_generation - task572_recipe_nlg_text_generation - task517_emo_classify_emotion_of_dialogue - task382_hybridqa_answer_generation - task176_break_decompose_questions - task1291_multi_news_summarization - task155_count_nouns_verbs - task031_winogrande_question_generation_object - task279_stereoset_classification_stereotype - task1336_peixian_equity_evaluation_corpus_gender_classifier - task508_scruples_dilemmas_more_ethical_isidentifiable - task518_emo_different_dialogue_emotions - task077_splash_explanation_to_sql - task923_event2mind_classifier - task470_mrqa_question_generation - task638_multi_woz_classification - task1412_web_questions_question_answering - task847_pubmedqa_question_generation - task678_ollie_actual_relationship_answer_generation - task290_tellmewhy_question_answerability - task575_air_dialogue_classification - task189_snli_neutral_to_contradiction_text_modification - task026_drop_question_generation - task162_count_words_starting_with_letter - task079_conala_concat_strings - task610_conllpp_ner - task046_miscellaneous_question_typing - task197_mnli_domain_answer_generation - task1325_qa_zre_question_generation_on_subject_relation - task430_senteval_subject_count - task672_nummersense - task402_grailqa_paraphrase_generation - task904_hate_speech_offensive_classification - task192_hotpotqa_sentence_generation - task069_abductivenli_classification - task574_air_dialogue_sentence_generation - task187_snli_entailment_to_contradiction_text_modification - task749_glucose_reverse_cause_emotion_detection - task1552_scitail_question_generation - task750_aqua_multiple_choice_answering - task327_jigsaw_classification_toxic - task1502_hatexplain_classification - task328_jigsaw_classification_insult - task304_numeric_fused_head_resolution - task1293_kilt_tasks_hotpotqa_question_answering - task216_rocstories_correct_answer_generation - task1326_qa_zre_question_generation_from_answer - task1338_peixian_equity_evaluation_corpus_sentiment_classifier - task1729_personachat_generate_next - task1202_atomic_classification_xneed - task400_paws_paraphrase_classification - task502_scruples_anecdotes_whoiswrong_verification - task088_identify_typo_verification - task221_rocstories_two_choice_classification - task200_mnli_entailment_classification - task074_squad1.1_question_generation - task581_socialiqa_question_generation - task1186_nne_hrngo_classification - task898_freebase_qa_answer_generation - task1408_dart_similarity_classification - task168_strategyqa_question_decomposition - task1357_xlsum_summary_generation - task390_torque_text_span_selection - task165_mcscript_question_answering_commonsense - task1533_daily_dialog_formal_classification - task002_quoref_answer_generation - task1297_qasc_question_answering - task305_jeopardy_answer_generation_normal - task029_winogrande_full_object - task1327_qa_zre_answer_generation_from_question - task326_jigsaw_classification_obscene - task1542_every_ith_element_from_starting - task570_recipe_nlg_ner_generation - task1409_dart_text_generation - task401_numeric_fused_head_reference - task846_pubmedqa_classification - task1712_poki_classification - task344_hybridqa_answer_generation - task875_emotion_classification - task1214_atomic_classification_xwant - task106_scruples_ethical_judgment - task238_iirc_answer_from_passage_answer_generation - task1391_winogrande_easy_answer_generation - task195_sentiment140_classification - task163_count_words_ending_with_letter - task579_socialiqa_classification - task569_recipe_nlg_text_generation - task1602_webquestion_question_genreation - task747_glucose_cause_emotion_detection - task219_rocstories_title_answer_generation - task178_quartz_question_answering - task103_facts2story_long_text_generation - task301_record_question_generation - task1369_healthfact_sentence_generation - task515_senteval_odd_word_out - task496_semeval_answer_generation - task1658_billsum_summarization - task1204_atomic_classification_hinderedby - task1392_superglue_multirc_answer_verification - task306_jeopardy_answer_generation_double - task1286_openbookqa_question_answering - task159_check_frequency_of_words_in_sentence_pair - task151_tomqa_find_location_easy_clean - task323_jigsaw_classification_sexually_explicit - task037_qasc_generate_related_fact - task027_drop_answer_type_generation - task1596_event2mind_text_generation_2 - task141_odd-man-out_classification_category - task194_duorc_answer_generation - task679_hope_edi_english_text_classification - task246_dream_question_generation - task1195_disflqa_disfluent_to_fluent_conversion - task065_timetravel_consistent_sentence_classification - task351_winomt_classification_gender_identifiability_anti - task580_socialiqa_answer_generation - task583_udeps_eng_coarse_pos_tagging - task202_mnli_contradiction_classification - task222_rocstories_two_chioce_slotting_classification - task498_scruples_anecdotes_whoiswrong_classification - task067_abductivenli_answer_generation - task616_cola_classification - task286_olid_offense_judgment - task188_snli_neutral_to_entailment_text_modification - task223_quartz_explanation_generation - task820_protoqa_answer_generation - task196_sentiment140_answer_generation - task1678_mathqa_answer_selection - task349_squad2.0_answerable_unanswerable_question_classification - task154_tomqa_find_location_hard_noise - task333_hateeval_classification_hate_en - task235_iirc_question_from_subtext_answer_generation - task1554_scitail_classification - task210_logic2text_structured_text_generation - task035_winogrande_question_modification_person - task230_iirc_passage_classification - task1356_xlsum_title_generation - task1726_mathqa_correct_answer_generation - task302_record_classification - task380_boolq_yes_no_question - task212_logic2text_classification - task748_glucose_reverse_cause_event_detection - task834_mathdataset_classification - task350_winomt_classification_gender_identifiability_pro - task191_hotpotqa_question_generation - task236_iirc_question_from_passage_answer_generation - task217_rocstories_ordering_answer_generation - task568_circa_question_generation - task614_glucose_cause_event_detection - task361_spolin_yesand_prompt_response_classification - task421_persent_sentence_sentiment_classification - task203_mnli_sentence_generation - task420_persent_document_sentiment_classification - task153_tomqa_find_location_hard_clean - task346_hybridqa_classification - task1211_atomic_classification_hassubevent - task360_spolin_yesand_response_generation - task510_reddit_tifu_title_summarization - task511_reddit_tifu_long_text_summarization - task345_hybridqa_answer_generation - task270_csrg_counterfactual_context_generation - task307_jeopardy_answer_generation_final - task001_quoref_question_generation - task089_swap_words_verification - task1196_atomic_classification_oeffect - task080_piqa_answer_generation - task1598_nyc_long_text_generation - task240_tweetqa_question_generation - task615_moviesqa_answer_generation - task1347_glue_sts-b_similarity_classification - task114_is_the_given_word_longest - task292_storycommonsense_character_text_generation - task115_help_advice_classification - task431_senteval_object_count - task1360_numer_sense_multiple_choice_qa_generation - task177_para-nmt_paraphrasing - task132_dais_text_modification - task269_csrg_counterfactual_story_generation - task233_iirc_link_exists_classification - task161_count_words_containing_letter - task1205_atomic_classification_isafter - task571_recipe_nlg_ner_generation - task1292_yelp_review_full_text_categorization - task428_senteval_inversion - task311_race_question_generation - task429_senteval_tense - task403_creak_commonsense_inference - task929_products_reviews_classification - task582_naturalquestion_answer_generation - task237_iirc_answer_from_subtext_answer_generation - task050_multirc_answerability - task184_break_generate_question - task669_ambigqa_answer_generation - task169_strategyqa_sentence_generation - task500_scruples_anecdotes_title_generation - task241_tweetqa_classification - task1345_glue_qqp_question_paraprashing - task218_rocstories_swap_order_answer_generation - task613_politifact_text_generation - task1167_penn_treebank_coarse_pos_tagging - task1422_mathqa_physics - task247_dream_answer_generation - task199_mnli_classification - task164_mcscript_question_answering_text - task1541_agnews_classification - task516_senteval_conjoints_inversion - task294_storycommonsense_motiv_text_generation - task501_scruples_anecdotes_post_type_verification - task213_rocstories_correct_ending_classification - task821_protoqa_question_generation - task493_review_polarity_classification - task308_jeopardy_answer_generation_all - task1595_event2mind_text_generation_1 - task040_qasc_question_generation - task231_iirc_link_classification - task1727_wiqa_what_is_the_effect - task578_curiosity_dialogs_answer_generation - task310_race_classification - task309_race_answer_generation - task379_agnews_topic_classification - task030_winogrande_full_person - task1540_parsed_pdfs_summarization - task039_qasc_find_overlapping_words - task1206_atomic_classification_isbefore - task157_count_vowels_and_consonants - task339_record_answer_generation - task453_swag_answer_generation - task848_pubmedqa_classification - task673_google_wellformed_query_classification - task676_ollie_relationship_answer_generation - task268_casehold_legal_answer_generation - task844_financial_phrasebank_classification - task330_gap_answer_generation - task595_mocha_answer_generation - task1285_kpa_keypoint_matching - task234_iirc_passage_line_answer_generation - task494_review_polarity_answer_generation - task670_ambigqa_question_generation - task289_gigaword_summarization - npr - nli - SimpleWiki - amazon_review_2018 - ccnews_title_text - agnews - xsum - msmarco - yahoo_answers_title_answer - squad_pairs - wow - mteb-amazon_counterfactual-avs_triplets - mteb-amazon_massive_intent-avs_triplets - mteb-amazon_massive_scenario-avs_triplets - mteb-amazon_reviews_multi-avs_triplets - mteb-banking77-avs_triplets - mteb-emotion-avs_triplets - mteb-imdb-avs_triplets - mteb-mtop_domain-avs_triplets - mteb-mtop_intent-avs_triplets - mteb-toxic_conversations_50k-avs_triplets - mteb-tweet_sentiment_extraction-avs_triplets - covid-bing-query-gpt4-avs_triplets - **Language:** en - **License:** apache-2.0 ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel (1): RandomProjection({'in_features': 384, 'out_features': 768, 'seed': 42}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("avsolatorio/all-MiniLM-L6-v2-MEDI-MTEB-triplet-randproj-64-final") # Run inference sentences = [ 'Does early second-trimester sonography predict adverse perinatal outcomes in monochorionic diamniotic twin pregnancies?', 'To determine whether intertwin discordant abdominal circumference, femur length, head circumference, and estimated fetal weight sonographic measurements in early second-trimester monochorionic diamniotic twins predict adverse obstetric and neonatal outcomes.We conducted a multicenter retrospective cohort study involving 9 regional perinatal centers in the United States. We examined the records of all monochorionic diamniotic twin pregnancies with two live fetuses at the 16- to 18-week sonographic examination who had serial follow-up sonography until delivery. The intertwin discordance in abdominal circumference, femur length, head circumference, and estimated fetal weight was calculated as the difference between the two fetuses, expressed as a percentage of the larger using the 16- to 18-week sonographic measurements. An adverse composite obstetric outcome was defined as the occurrence of 1 or more of the following in either fetus: intrauterine growth restriction, twin-twin transfusion syndrome, intrauterine fetal death, abnormal growth discordance (≥20% difference), and very preterm birth at or before 28 weeks. An adverse composite neonatal outcome was defined as the occurrence of 1 or more of the following: respiratory distress syndrome, any stage of intraventricular hemorrhage, 5-minute Apgar score less than 7, necrotizing enterocolitis, culture-proven early-onset sepsis, and neonatal death. Receiver operating characteristic and logistic regression-with-generalized estimating equation analyses were constructed.Among the 177 monochorionic diamniotic twin pregnancies analyzed, intertwin abdominal circumference and estimated fetal weight discordances were only predictive of adverse composite obstetric outcomes (areas under the curve, 79% and 80%, respectively). Receiver operating characteristic curves showed that intertwin discordances in abdominal circumference, femur length, head circumference, and estimated fetal weight were not acceptable predictors of twin-twin transfusion syndrome or adverse neonatal outcomes.', 'Calcium and vitamin D are essential nutrients for bone metabolism Vitamin D can either be obtained from dietary sources or cutaneous synthesis. The study was conducted in subtropic weather; therefore, some might believe that the levels of solar radiation would be sufficient in this area.To evaluate calcium and vitamin D supplementation in postmenopausal women with osteoporosis living in a sunny country.A 3-month controlled clinical trial with 64 postmenopausal women with osteoporosis, mean age 62 + or - 8 years. They were randomly assigned to either the supplement group, who received 1,200 mg of calcium carbonate and 400 IU (10 microg) of vitamin D(3,) or the control group. Dietary intake assessment was performed, bone mineral density and body composition were measured, and biochemical markers of bone metabolism were analyzed.Considering all participants at baseline, serum vitamin D was under 75 nmol/l in 91.4% of the participants. The concentration of serum 25(OH)D increased significantly (p = 0.023) after 3 months of supplementation from 46.67 + or - 13.97 to 59.47 + or - 17.50 nmol/l. However, the dose given was limited in effect, and 86.2% of the supplement group did not reach optimal levels of 25(OH)D. Parathyroid hormone was elevated in 22.4% of the study group. After the intervention period, mean parathyroid hormone tended to decrease in the supplement group (p = 0.063).', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Triplet * Dataset: `medi-mteb-dev` * Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator) | Metric | Value | |:--------------------|:-----------| | **cosine_accuracy** | **0.9153** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Datasets #### NQ * Dataset: NQ * Size: 49,548 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 10 tokens</li><li>mean: 11.77 tokens</li><li>max: 22 tokens</li></ul> | <ul><li>min: 113 tokens</li><li>mean: 137.23 tokens</li><li>max: 220 tokens</li></ul> | <ul><li>min: 110 tokens</li><li>mean: 138.25 tokens</li><li>max: 239 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### pubmed * Dataset: pubmed * Size: 29,716 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 6 tokens</li><li>mean: 22.99 tokens</li><li>max: 62 tokens</li></ul> | <ul><li>min: 78 tokens</li><li>mean: 240.63 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 50 tokens</li><li>mean: 239.04 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### specter_train_triples * Dataset: specter_train_triples * Size: 49,548 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 5 tokens</li><li>mean: 15.21 tokens</li><li>max: 42 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 13.87 tokens</li><li>max: 45 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 16.01 tokens</li><li>max: 70 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### S2ORC_citations_abstracts * Dataset: S2ORC_citations_abstracts * Size: 99,032 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 22 tokens</li><li>mean: 198.64 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 30 tokens</li><li>mean: 203.8 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 24 tokens</li><li>mean: 203.03 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### fever * Dataset: fever * Size: 74,258 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 6 tokens</li><li>mean: 12.23 tokens</li><li>max: 43 tokens</li></ul> | <ul><li>min: 37 tokens</li><li>mean: 111.79 tokens</li><li>max: 150 tokens</li></ul> | <ul><li>min: 42 tokens</li><li>mean: 113.24 tokens</li><li>max: 179 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### gooaq_pairs * Dataset: gooaq_pairs * Size: 24,774 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 8 tokens</li><li>mean: 11.86 tokens</li><li>max: 26 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 59.94 tokens</li><li>max: 138 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 63.35 tokens</li><li>max: 149 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### codesearchnet * Dataset: codesearchnet * Size: 14,890 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 4 tokens</li><li>mean: 29.54 tokens</li><li>max: 124 tokens</li></ul> | <ul><li>min: 28 tokens</li><li>mean: 132.91 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 27 tokens</li><li>mean: 163.79 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### wikihow * Dataset: wikihow * Size: 5,006 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:---------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 3 tokens</li><li>mean: 8.16 tokens</li><li>max: 21 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 44.62 tokens</li><li>max: 117 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 36.33 tokens</li><li>max: 100 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### WikiAnswers * Dataset: WikiAnswers * Size: 24,774 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 6 tokens</li><li>mean: 12.83 tokens</li><li>max: 34 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 12.7 tokens</li><li>max: 36 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 13.12 tokens</li><li>max: 42 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### eli5_question_answer * Dataset: eli5_question_answer * Size: 24,774 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 4 tokens</li><li>mean: 20.98 tokens</li><li>max: 75 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 103.88 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 111.38 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### amazon-qa * Dataset: amazon-qa * Size: 99,032 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 6 tokens</li><li>mean: 23.07 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 54.48 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 61.35 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### medmcqa * Dataset: medmcqa * Size: 29,716 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 5 tokens</li><li>mean: 19.86 tokens</li><li>max: 176 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 113.43 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 108.04 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### zeroshot * Dataset: zeroshot * Size: 14,890 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:---------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 5 tokens</li><li>mean: 8.65 tokens</li><li>max: 21 tokens</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 112.61 tokens</li><li>max: 163 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 117.07 tokens</li><li>max: 214 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### TriviaQA_pairs * Dataset: TriviaQA_pairs * Size: 49,548 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 7 tokens</li><li>mean: 19.79 tokens</li><li>max: 83 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 245.73 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 26 tokens</li><li>mean: 231.5 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### PAQ_pairs * Dataset: PAQ_pairs * Size: 24,774 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 7 tokens</li><li>mean: 12.67 tokens</li><li>max: 21 tokens</li></ul> | <ul><li>min: 110 tokens</li><li>mean: 135.61 tokens</li><li>max: 223 tokens</li></ul> | <ul><li>min: 111 tokens</li><li>mean: 135.86 tokens</li><li>max: 254 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### stackexchange_duplicate_questions_title-body_title-body * Dataset: stackexchange_duplicate_questions_title-body_title-body * Size: 24,774 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 21 tokens</li><li>mean: 146.64 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 141.12 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 200.51 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### trex * Dataset: trex * Size: 29,716 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:---------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 5 tokens</li><li>mean: 9.43 tokens</li><li>max: 20 tokens</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 102.9 tokens</li><li>max: 166 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 118.59 tokens</li><li>max: 236 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### flickr30k_captions * Dataset: flickr30k_captions * Size: 24,774 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 7 tokens</li><li>mean: 15.87 tokens</li><li>max: 61 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 15.83 tokens</li><li>max: 48 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 17.13 tokens</li><li>max: 61 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### hotpotqa * Dataset: hotpotqa * Size: 39,600 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 9 tokens</li><li>mean: 24.46 tokens</li><li>max: 97 tokens</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 113.58 tokens</li><li>max: 176 tokens</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 114.85 tokens</li><li>max: 167 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task671_ambigqa_text_generation * Dataset: task671_ambigqa_text_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 11 tokens</li><li>mean: 12.64 tokens</li><li>max: 26 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 12.44 tokens</li><li>max: 23 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 12.2 tokens</li><li>max: 19 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task061_ropes_answer_generation * Dataset: task061_ropes_answer_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 117 tokens</li><li>mean: 209.31 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 117 tokens</li><li>mean: 208.62 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 119 tokens</li><li>mean: 211.39 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task285_imdb_answer_generation * Dataset: task285_imdb_answer_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 46 tokens</li><li>mean: 209.96 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 49 tokens</li><li>mean: 205.18 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 46 tokens</li><li>mean: 209.96 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task905_hate_speech_offensive_classification * Dataset: task905_hate_speech_offensive_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 15 tokens</li><li>mean: 41.48 tokens</li><li>max: 164 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 40.59 tokens</li><li>max: 198 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 32.37 tokens</li><li>max: 135 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task566_circa_classification * Dataset: task566_circa_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 20 tokens</li><li>mean: 27.85 tokens</li><li>max: 48 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 27.3 tokens</li><li>max: 44 tokens</li></ul> | <ul><li>min: 20 tokens</li><li>mean: 27.5 tokens</li><li>max: 47 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task184_snli_entailment_to_neutral_text_modification * Dataset: task184_snli_entailment_to_neutral_text_modification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 17 tokens</li><li>mean: 29.79 tokens</li><li>max: 72 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 28.88 tokens</li><li>max: 60 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 30.16 tokens</li><li>max: 100 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task280_stereoset_classification_stereotype_type * Dataset: task280_stereoset_classification_stereotype_type * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 8 tokens</li><li>mean: 18.4 tokens</li><li>max: 53 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 16.82 tokens</li><li>max: 53 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 16.81 tokens</li><li>max: 51 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1599_smcalflow_classification * Dataset: task1599_smcalflow_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 3 tokens</li><li>mean: 11.32 tokens</li><li>max: 37 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 10.48 tokens</li><li>max: 38 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 16.23 tokens</li><li>max: 45 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1384_deal_or_no_dialog_classification * Dataset: task1384_deal_or_no_dialog_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 14 tokens</li><li>mean: 59.18 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 58.75 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 58.81 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task591_sciq_answer_generation * Dataset: task591_sciq_answer_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 8 tokens</li><li>mean: 17.64 tokens</li><li>max: 70 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 17.17 tokens</li><li>max: 43 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 16.76 tokens</li><li>max: 75 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task823_peixian-rtgender_sentiment_analysis * Dataset: task823_peixian-rtgender_sentiment_analysis * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 16 tokens</li><li>mean: 57.03 tokens</li><li>max: 129 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 59.85 tokens</li><li>max: 153 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 60.39 tokens</li><li>max: 169 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task023_cosmosqa_question_generation * Dataset: task023_cosmosqa_question_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 35 tokens</li><li>mean: 79.22 tokens</li><li>max: 159 tokens</li></ul> | <ul><li>min: 34 tokens</li><li>mean: 80.25 tokens</li><li>max: 165 tokens</li></ul> | <ul><li>min: 35 tokens</li><li>mean: 79.05 tokens</li><li>max: 161 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task900_freebase_qa_category_classification * Dataset: task900_freebase_qa_category_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 8 tokens</li><li>mean: 20.33 tokens</li><li>max: 88 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 18.3 tokens</li><li>max: 62 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 19.08 tokens</li><li>max: 69 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task924_event2mind_word_generation * Dataset: task924_event2mind_word_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 17 tokens</li><li>mean: 32.19 tokens</li><li>max: 64 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 32.09 tokens</li><li>max: 70 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 31.45 tokens</li><li>max: 68 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task152_tomqa_find_location_easy_noise * Dataset: task152_tomqa_find_location_easy_noise * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 37 tokens</li><li>mean: 52.67 tokens</li><li>max: 79 tokens</li></ul> | <ul><li>min: 37 tokens</li><li>mean: 52.21 tokens</li><li>max: 78 tokens</li></ul> | <ul><li>min: 37 tokens</li><li>mean: 52.78 tokens</li><li>max: 82 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1368_healthfact_sentence_generation * Dataset: task1368_healthfact_sentence_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 91 tokens</li><li>mean: 240.92 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 84 tokens</li><li>mean: 239.86 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 97 tokens</li><li>mean: 245.16 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1661_super_glue_classification * Dataset: task1661_super_glue_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 35 tokens</li><li>mean: 140.96 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 31 tokens</li><li>mean: 144.29 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 31 tokens</li><li>mean: 143.59 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1187_politifact_classification * Dataset: task1187_politifact_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 14 tokens</li><li>mean: 33.19 tokens</li><li>max: 79 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 31.7 tokens</li><li>max: 75 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 31.87 tokens</li><li>max: 71 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1728_web_nlg_data_to_text * Dataset: task1728_web_nlg_data_to_text * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 7 tokens</li><li>mean: 42.96 tokens</li><li>max: 152 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 46.52 tokens</li><li>max: 152 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 42.39 tokens</li><li>max: 152 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task112_asset_simple_sentence_identification * Dataset: task112_asset_simple_sentence_identification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 18 tokens</li><li>mean: 51.98 tokens</li><li>max: 136 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 51.84 tokens</li><li>max: 144 tokens</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 51.97 tokens</li><li>max: 114 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1340_msr_text_compression_compression * Dataset: task1340_msr_text_compression_compression * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 14 tokens</li><li>mean: 42.15 tokens</li><li>max: 116 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 44.46 tokens</li><li>max: 133 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 40.14 tokens</li><li>max: 141 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task072_abductivenli_answer_generation * Dataset: task072_abductivenli_answer_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 17 tokens</li><li>mean: 26.9 tokens</li><li>max: 56 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 26.28 tokens</li><li>max: 47 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 26.46 tokens</li><li>max: 55 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1504_hatexplain_answer_generation * Dataset: task1504_hatexplain_answer_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 7 tokens</li><li>mean: 29.09 tokens</li><li>max: 72 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 24.67 tokens</li><li>max: 86 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 27.96 tokens</li><li>max: 67 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task684_online_privacy_policy_text_information_type_generation * Dataset: task684_online_privacy_policy_text_information_type_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 10 tokens</li><li>mean: 30.02 tokens</li><li>max: 68 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 30.19 tokens</li><li>max: 61 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 30.18 tokens</li><li>max: 68 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1290_xsum_summarization * Dataset: task1290_xsum_summarization * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 39 tokens</li><li>mean: 226.27 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 50 tokens</li><li>mean: 228.93 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 34 tokens</li><li>mean: 229.41 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task075_squad1.1_answer_generation * Dataset: task075_squad1.1_answer_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 48 tokens</li><li>mean: 168.58 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 45 tokens</li><li>mean: 172.1 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 46 tokens</li><li>mean: 181.15 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1587_scifact_classification * Dataset: task1587_scifact_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 88 tokens</li><li>mean: 242.35 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 90 tokens</li><li>mean: 246.75 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 86 tokens</li><li>mean: 244.87 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task384_socialiqa_question_classification * Dataset: task384_socialiqa_question_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 24 tokens</li><li>mean: 35.44 tokens</li><li>max: 78 tokens</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 34.35 tokens</li><li>max: 59 tokens</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 34.51 tokens</li><li>max: 57 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1555_scitail_answer_generation * Dataset: task1555_scitail_answer_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 18 tokens</li><li>mean: 36.72 tokens</li><li>max: 90 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 36.31 tokens</li><li>max: 80 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 36.73 tokens</li><li>max: 92 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1532_daily_dialog_emotion_classification * Dataset: task1532_daily_dialog_emotion_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 16 tokens</li><li>mean: 137.07 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 140.81 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 132.89 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task239_tweetqa_answer_generation * Dataset: task239_tweetqa_answer_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 28 tokens</li><li>mean: 55.78 tokens</li><li>max: 85 tokens</li></ul> | <ul><li>min: 29 tokens</li><li>mean: 56.32 tokens</li><li>max: 92 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 55.92 tokens</li><li>max: 81 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task596_mocha_question_generation * Dataset: task596_mocha_question_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 34 tokens</li><li>mean: 80.49 tokens</li><li>max: 163 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 95.93 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 44.93 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1411_dart_subject_identification * Dataset: task1411_dart_subject_identification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 7 tokens</li><li>mean: 14.86 tokens</li><li>max: 74 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 14.02 tokens</li><li>max: 37 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 14.25 tokens</li><li>max: 38 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1359_numer_sense_answer_generation * Dataset: task1359_numer_sense_answer_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 10 tokens</li><li>mean: 18.67 tokens</li><li>max: 30 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 18.43 tokens</li><li>max: 33 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 18.34 tokens</li><li>max: 30 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task329_gap_classification * Dataset: task329_gap_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 40 tokens</li><li>mean: 122.88 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 62 tokens</li><li>mean: 127.47 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 58 tokens</li><li>mean: 127.71 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task220_rocstories_title_classification * Dataset: task220_rocstories_title_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 53 tokens</li><li>mean: 80.81 tokens</li><li>max: 116 tokens</li></ul> | <ul><li>min: 51 tokens</li><li>mean: 81.08 tokens</li><li>max: 108 tokens</li></ul> | <ul><li>min: 55 tokens</li><li>mean: 79.99 tokens</li><li>max: 115 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task316_crows-pairs_classification_stereotype * Dataset: task316_crows-pairs_classification_stereotype * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 8 tokens</li><li>mean: 19.78 tokens</li><li>max: 51 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 18.31 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 19.87 tokens</li><li>max: 52 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task495_semeval_headline_classification * Dataset: task495_semeval_headline_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 17 tokens</li><li>mean: 24.57 tokens</li><li>max: 42 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 24.29 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 24.14 tokens</li><li>max: 38 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1168_brown_coarse_pos_tagging * Dataset: task1168_brown_coarse_pos_tagging * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 13 tokens</li><li>mean: 43.61 tokens</li><li>max: 142 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 42.6 tokens</li><li>max: 197 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 44.23 tokens</li><li>max: 197 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task348_squad2.0_unanswerable_question_generation * Dataset: task348_squad2.0_unanswerable_question_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 30 tokens</li><li>mean: 153.88 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 38 tokens</li><li>mean: 161.26 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 33 tokens</li><li>mean: 166.13 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task049_multirc_questions_needed_to_answer * Dataset: task049_multirc_questions_needed_to_answer * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 174 tokens</li><li>mean: 252.7 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 169 tokens</li><li>mean: 252.85 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 178 tokens</li><li>mean: 252.93 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1534_daily_dialog_question_classification * Dataset: task1534_daily_dialog_question_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 17 tokens</li><li>mean: 124.7 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 130.68 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 135.16 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task322_jigsaw_classification_threat * Dataset: task322_jigsaw_classification_threat * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 7 tokens</li><li>mean: 54.9 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 62.74 tokens</li><li>max: 249 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 61.92 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task295_semeval_2020_task4_commonsense_reasoning * Dataset: task295_semeval_2020_task4_commonsense_reasoning * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 25 tokens</li><li>mean: 45.35 tokens</li><li>max: 92 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 44.74 tokens</li><li>max: 95 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 44.53 tokens</li><li>max: 88 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task186_snli_contradiction_to_entailment_text_modification * Dataset: task186_snli_contradiction_to_entailment_text_modification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 18 tokens</li><li>mean: 31.09 tokens</li><li>max: 102 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 30.26 tokens</li><li>max: 65 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 32.22 tokens</li><li>max: 67 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task034_winogrande_question_modification_object * Dataset: task034_winogrande_question_modification_object * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 29 tokens</li><li>mean: 36.26 tokens</li><li>max: 53 tokens</li></ul> | <ul><li>min: 29 tokens</li><li>mean: 35.64 tokens</li><li>max: 54 tokens</li></ul> | <ul><li>min: 29 tokens</li><li>mean: 34.85 tokens</li><li>max: 55 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task160_replace_letter_in_a_sentence * Dataset: task160_replace_letter_in_a_sentence * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 29 tokens</li><li>mean: 32.03 tokens</li><li>max: 49 tokens</li></ul> | <ul><li>min: 28 tokens</li><li>mean: 31.76 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 29 tokens</li><li>mean: 31.77 tokens</li><li>max: 48 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task469_mrqa_answer_generation * Dataset: task469_mrqa_answer_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 27 tokens</li><li>mean: 182.13 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 180.78 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 27 tokens</li><li>mean: 183.72 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task105_story_cloze-rocstories_sentence_generation * Dataset: task105_story_cloze-rocstories_sentence_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 36 tokens</li><li>mean: 55.65 tokens</li><li>max: 75 tokens</li></ul> | <ul><li>min: 35 tokens</li><li>mean: 55.02 tokens</li><li>max: 76 tokens</li></ul> | <ul><li>min: 36 tokens</li><li>mean: 55.88 tokens</li><li>max: 76 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task649_race_blank_question_generation * Dataset: task649_race_blank_question_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 36 tokens</li><li>mean: 252.95 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 36 tokens</li><li>mean: 252.78 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 157 tokens</li><li>mean: 253.91 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1536_daily_dialog_happiness_classification * Dataset: task1536_daily_dialog_happiness_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 13 tokens</li><li>mean: 127.91 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 134.02 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 143.7 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task683_online_privacy_policy_text_purpose_answer_generation * Dataset: task683_online_privacy_policy_text_purpose_answer_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 10 tokens</li><li>mean: 30.09 tokens</li><li>max: 68 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 30.5 tokens</li><li>max: 64 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 30.07 tokens</li><li>max: 68 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task024_cosmosqa_answer_generation * Dataset: task024_cosmosqa_answer_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 45 tokens</li><li>mean: 92.62 tokens</li><li>max: 176 tokens</li></ul> | <ul><li>min: 47 tokens</li><li>mean: 93.35 tokens</li><li>max: 174 tokens</li></ul> | <ul><li>min: 42 tokens</li><li>mean: 94.9 tokens</li><li>max: 183 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task584_udeps_eng_fine_pos_tagging * Dataset: task584_udeps_eng_fine_pos_tagging * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 12 tokens</li><li>mean: 40.09 tokens</li><li>max: 120 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 39.35 tokens</li><li>max: 186 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 40.38 tokens</li><li>max: 148 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task066_timetravel_binary_consistency_classification * Dataset: task066_timetravel_binary_consistency_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 42 tokens</li><li>mean: 66.69 tokens</li><li>max: 93 tokens</li></ul> | <ul><li>min: 43 tokens</li><li>mean: 67.34 tokens</li><li>max: 94 tokens</li></ul> | <ul><li>min: 45 tokens</li><li>mean: 67.19 tokens</li><li>max: 92 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task413_mickey_en_sentence_perturbation_generation * Dataset: task413_mickey_en_sentence_perturbation_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 7 tokens</li><li>mean: 13.71 tokens</li><li>max: 21 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 13.75 tokens</li><li>max: 21 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 13.29 tokens</li><li>max: 20 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task182_duorc_question_generation * Dataset: task182_duorc_question_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 99 tokens</li><li>mean: 242.77 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 120 tokens</li><li>mean: 246.47 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 99 tokens</li><li>mean: 246.38 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task028_drop_answer_generation * Dataset: task028_drop_answer_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 76 tokens</li><li>mean: 230.94 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 86 tokens</li><li>mean: 234.89 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 81 tokens</li><li>mean: 235.48 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1601_webquestions_answer_generation * Dataset: task1601_webquestions_answer_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 9 tokens</li><li>mean: 16.49 tokens</li><li>max: 28 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 16.71 tokens</li><li>max: 28 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 16.76 tokens</li><li>max: 27 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1295_adversarial_qa_question_answering * Dataset: task1295_adversarial_qa_question_answering * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 45 tokens</li><li>mean: 163.69 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 54 tokens</li><li>mean: 166.23 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 48 tokens</li><li>mean: 166.52 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task201_mnli_neutral_classification * Dataset: task201_mnli_neutral_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 24 tokens</li><li>mean: 72.97 tokens</li><li>max: 218 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 73.29 tokens</li><li>max: 170 tokens</li></ul> | <ul><li>min: 27 tokens</li><li>mean: 72.24 tokens</li><li>max: 205 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task038_qasc_combined_fact * Dataset: task038_qasc_combined_fact * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 18 tokens</li><li>mean: 31.25 tokens</li><li>max: 57 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 30.61 tokens</li><li>max: 53 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 30.86 tokens</li><li>max: 53 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task293_storycommonsense_emotion_text_generation * Dataset: task293_storycommonsense_emotion_text_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 14 tokens</li><li>mean: 40.25 tokens</li><li>max: 86 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 40.27 tokens</li><li>max: 86 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 38.11 tokens</li><li>max: 86 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task572_recipe_nlg_text_generation * Dataset: task572_recipe_nlg_text_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 24 tokens</li><li>mean: 115.66 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 24 tokens</li><li>mean: 122.27 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 24 tokens</li><li>mean: 124.11 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task517_emo_classify_emotion_of_dialogue * Dataset: task517_emo_classify_emotion_of_dialogue * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 7 tokens</li><li>mean: 18.13 tokens</li><li>max: 78 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 17.07 tokens</li><li>max: 59 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 18.5 tokens</li><li>max: 67 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task382_hybridqa_answer_generation * Dataset: task382_hybridqa_answer_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 29 tokens</li><li>mean: 42.28 tokens</li><li>max: 70 tokens</li></ul> | <ul><li>min: 29 tokens</li><li>mean: 41.56 tokens</li><li>max: 74 tokens</li></ul> | <ul><li>min: 28 tokens</li><li>mean: 41.74 tokens</li><li>max: 75 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task176_break_decompose_questions * Dataset: task176_break_decompose_questions * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 9 tokens</li><li>mean: 17.48 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 17.2 tokens</li><li>max: 39 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 15.6 tokens</li><li>max: 38 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1291_multi_news_summarization * Dataset: task1291_multi_news_summarization * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 116 tokens</li><li>mean: 255.49 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 146 tokens</li><li>mean: 255.55 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 68 tokens</li><li>mean: 251.87 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task155_count_nouns_verbs * Dataset: task155_count_nouns_verbs * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 23 tokens</li><li>mean: 27.05 tokens</li><li>max: 56 tokens</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 26.81 tokens</li><li>max: 43 tokens</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 26.98 tokens</li><li>max: 46 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task031_winogrande_question_generation_object * Dataset: task031_winogrande_question_generation_object * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:---------------------------------------------------------------------------------|:--------------------------------------------------------------------------------|:---------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 7 tokens</li><li>mean: 7.42 tokens</li><li>max: 11 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 7.3 tokens</li><li>max: 11 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 7.25 tokens</li><li>max: 11 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task279_stereoset_classification_stereotype * Dataset: task279_stereoset_classification_stereotype * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 8 tokens</li><li>mean: 17.85 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 15.47 tokens</li><li>max: 43 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 17.28 tokens</li><li>max: 50 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1336_peixian_equity_evaluation_corpus_gender_classifier * Dataset: task1336_peixian_equity_evaluation_corpus_gender_classifier * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 6 tokens</li><li>mean: 9.66 tokens</li><li>max: 17 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 9.61 tokens</li><li>max: 16 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 9.72 tokens</li><li>max: 16 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task508_scruples_dilemmas_more_ethical_isidentifiable * Dataset: task508_scruples_dilemmas_more_ethical_isidentifiable * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 12 tokens</li><li>mean: 29.84 tokens</li><li>max: 94 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 28.5 tokens</li><li>max: 94 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 28.66 tokens</li><li>max: 86 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task518_emo_different_dialogue_emotions * Dataset: task518_emo_different_dialogue_emotions * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 28 tokens</li><li>mean: 47.9 tokens</li><li>max: 106 tokens</li></ul> | <ul><li>min: 28 tokens</li><li>mean: 45.44 tokens</li><li>max: 116 tokens</li></ul> | <ul><li>min: 26 tokens</li><li>mean: 46.17 tokens</li><li>max: 123 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task077_splash_explanation_to_sql * Dataset: task077_splash_explanation_to_sql * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 8 tokens</li><li>mean: 39.24 tokens</li><li>max: 126 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 39.15 tokens</li><li>max: 126 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 35.65 tokens</li><li>max: 111 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task923_event2mind_classifier * Dataset: task923_event2mind_classifier * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 10 tokens</li><li>mean: 20.63 tokens</li><li>max: 46 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 18.75 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 19.63 tokens</li><li>max: 46 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task470_mrqa_question_generation * Dataset: task470_mrqa_question_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 13 tokens</li><li>mean: 173.13 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 175.67 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 181.16 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task638_multi_woz_classification * Dataset: task638_multi_woz_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 78 tokens</li><li>mean: 223.5 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 76 tokens</li><li>mean: 220.15 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 64 tokens</li><li>mean: 220.29 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1412_web_questions_question_answering * Dataset: task1412_web_questions_question_answering * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 6 tokens</li><li>mean: 10.32 tokens</li><li>max: 17 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 10.23 tokens</li><li>max: 17 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 10.06 tokens</li><li>max: 16 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task847_pubmedqa_question_generation * Dataset: task847_pubmedqa_question_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 21 tokens</li><li>mean: 249.15 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 21 tokens</li><li>mean: 248.61 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 43 tokens</li><li>mean: 248.86 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task678_ollie_actual_relationship_answer_generation * Dataset: task678_ollie_actual_relationship_answer_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 20 tokens</li><li>mean: 40.63 tokens</li><li>max: 95 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 38.38 tokens</li><li>max: 102 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 40.99 tokens</li><li>max: 104 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task290_tellmewhy_question_answerability * Dataset: task290_tellmewhy_question_answerability * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 37 tokens</li><li>mean: 62.58 tokens</li><li>max: 95 tokens</li></ul> | <ul><li>min: 36 tokens</li><li>mean: 62.21 tokens</li><li>max: 94 tokens</li></ul> | <ul><li>min: 37 tokens</li><li>mean: 62.91 tokens</li><li>max: 95 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task575_air_dialogue_classification * Dataset: task575_air_dialogue_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 4 tokens</li><li>mean: 14.18 tokens</li><li>max: 45 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 13.6 tokens</li><li>max: 43 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 12.3 tokens</li><li>max: 42 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task189_snli_neutral_to_contradiction_text_modification * Dataset: task189_snli_neutral_to_contradiction_text_modification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 18 tokens</li><li>mean: 31.89 tokens</li><li>max: 60 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 30.66 tokens</li><li>max: 57 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 33.29 tokens</li><li>max: 105 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task026_drop_question_generation * Dataset: task026_drop_question_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 82 tokens</li><li>mean: 219.82 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 57 tokens</li><li>mean: 222.71 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 96 tokens</li><li>mean: 232.56 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task162_count_words_starting_with_letter * Dataset: task162_count_words_starting_with_letter * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 28 tokens</li><li>mean: 32.16 tokens</li><li>max: 56 tokens</li></ul> | <ul><li>min: 28 tokens</li><li>mean: 31.77 tokens</li><li>max: 45 tokens</li></ul> | <ul><li>min: 28 tokens</li><li>mean: 31.65 tokens</li><li>max: 46 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task079_conala_concat_strings * Dataset: task079_conala_concat_strings * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 11 tokens</li><li>mean: 39.94 tokens</li><li>max: 76 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 34.24 tokens</li><li>max: 80 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 33.86 tokens</li><li>max: 76 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task610_conllpp_ner * Dataset: task610_conllpp_ner * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 4 tokens</li><li>mean: 19.74 tokens</li><li>max: 62 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 20.71 tokens</li><li>max: 62 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 14.24 tokens</li><li>max: 54 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task046_miscellaneous_question_typing * Dataset: task046_miscellaneous_question_typing * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 16 tokens</li><li>mean: 25.26 tokens</li><li>max: 70 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 24.84 tokens</li><li>max: 70 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 25.2 tokens</li><li>max: 57 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task197_mnli_domain_answer_generation * Dataset: task197_mnli_domain_answer_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 15 tokens</li><li>mean: 44.08 tokens</li><li>max: 197 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 44.95 tokens</li><li>max: 211 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 39.27 tokens</li><li>max: 115 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1325_qa_zre_question_generation_on_subject_relation * Dataset: task1325_qa_zre_question_generation_on_subject_relation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 18 tokens</li><li>mean: 50.63 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 20 tokens</li><li>mean: 49.26 tokens</li><li>max: 180 tokens</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 54.42 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task430_senteval_subject_count * Dataset: task430_senteval_subject_count * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 7 tokens</li><li>mean: 17.26 tokens</li><li>max: 35 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 15.37 tokens</li><li>max: 34 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 16.07 tokens</li><li>max: 34 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task672_nummersense * Dataset: task672_nummersense * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 7 tokens</li><li>mean: 15.66 tokens</li><li>max: 30 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 15.43 tokens</li><li>max: 27 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 15.25 tokens</li><li>max: 30 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task402_grailqa_paraphrase_generation * Dataset: task402_grailqa_paraphrase_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 23 tokens</li><li>mean: 129.84 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 24 tokens</li><li>mean: 139.54 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 136.75 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task904_hate_speech_offensive_classification * Dataset: task904_hate_speech_offensive_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 8 tokens</li><li>mean: 34.35 tokens</li><li>max: 157 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 34.38 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 27.8 tokens</li><li>max: 148 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task192_hotpotqa_sentence_generation * Dataset: task192_hotpotqa_sentence_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 37 tokens</li><li>mean: 124.56 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 35 tokens</li><li>mean: 123.35 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 33 tokens</li><li>mean: 132.67 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task069_abductivenli_classification * Dataset: task069_abductivenli_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 33 tokens</li><li>mean: 52.03 tokens</li><li>max: 86 tokens</li></ul> | <ul><li>min: 33 tokens</li><li>mean: 51.87 tokens</li><li>max: 95 tokens</li></ul> | <ul><li>min: 33 tokens</li><li>mean: 52.01 tokens</li><li>max: 95 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task574_air_dialogue_sentence_generation * Dataset: task574_air_dialogue_sentence_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 54 tokens</li><li>mean: 144.28 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 57 tokens</li><li>mean: 144.0 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 66 tokens</li><li>mean: 148.22 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task187_snli_entailment_to_contradiction_text_modification * Dataset: task187_snli_entailment_to_contradiction_text_modification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 16 tokens</li><li>mean: 30.35 tokens</li><li>max: 69 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 29.87 tokens</li><li>max: 104 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 29.47 tokens</li><li>max: 71 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task749_glucose_reverse_cause_emotion_detection * Dataset: task749_glucose_reverse_cause_emotion_detection * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 38 tokens</li><li>mean: 67.51 tokens</li><li>max: 106 tokens</li></ul> | <ul><li>min: 37 tokens</li><li>mean: 67.07 tokens</li><li>max: 104 tokens</li></ul> | <ul><li>min: 39 tokens</li><li>mean: 68.56 tokens</li><li>max: 107 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1552_scitail_question_generation * Dataset: task1552_scitail_question_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 7 tokens</li><li>mean: 18.34 tokens</li><li>max: 53 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 17.5 tokens</li><li>max: 46 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 15.81 tokens</li><li>max: 54 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task750_aqua_multiple_choice_answering * Dataset: task750_aqua_multiple_choice_answering * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 33 tokens</li><li>mean: 69.8 tokens</li><li>max: 194 tokens</li></ul> | <ul><li>min: 32 tokens</li><li>mean: 68.34 tokens</li><li>max: 194 tokens</li></ul> | <ul><li>min: 28 tokens</li><li>mean: 68.21 tokens</li><li>max: 165 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task327_jigsaw_classification_toxic * Dataset: task327_jigsaw_classification_toxic * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 5 tokens</li><li>mean: 36.99 tokens</li><li>max: 234 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 41.72 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 44.88 tokens</li><li>max: 244 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1502_hatexplain_classification * Dataset: task1502_hatexplain_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:---------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 5 tokens</li><li>mean: 28.7 tokens</li><li>max: 73 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 26.89 tokens</li><li>max: 110 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 26.9 tokens</li><li>max: 90 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task328_jigsaw_classification_insult * Dataset: task328_jigsaw_classification_insult * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 5 tokens</li><li>mean: 50.28 tokens</li><li>max: 247 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 60.6 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 64.07 tokens</li><li>max: 249 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task304_numeric_fused_head_resolution * Dataset: task304_numeric_fused_head_resolution * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 15 tokens</li><li>mean: 116.82 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 118.84 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 131.78 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1293_kilt_tasks_hotpotqa_question_answering * Dataset: task1293_kilt_tasks_hotpotqa_question_answering * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 10 tokens</li><li>mean: 24.8 tokens</li><li>max: 114 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 24.33 tokens</li><li>max: 114 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 23.79 tokens</li><li>max: 84 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task216_rocstories_correct_answer_generation * Dataset: task216_rocstories_correct_answer_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 39 tokens</li><li>mean: 59.37 tokens</li><li>max: 83 tokens</li></ul> | <ul><li>min: 36 tokens</li><li>mean: 58.11 tokens</li><li>max: 92 tokens</li></ul> | <ul><li>min: 39 tokens</li><li>mean: 58.26 tokens</li><li>max: 95 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1326_qa_zre_question_generation_from_answer * Dataset: task1326_qa_zre_question_generation_from_answer * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 17 tokens</li><li>mean: 46.71 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 45.51 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 49.23 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1338_peixian_equity_evaluation_corpus_sentiment_classifier * Dataset: task1338_peixian_equity_evaluation_corpus_sentiment_classifier * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 6 tokens</li><li>mean: 9.72 tokens</li><li>max: 16 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 9.73 tokens</li><li>max: 16 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 9.61 tokens</li><li>max: 17 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1729_personachat_generate_next * Dataset: task1729_personachat_generate_next * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 44 tokens</li><li>mean: 147.13 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 43 tokens</li><li>mean: 142.78 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 50 tokens</li><li>mean: 144.33 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1202_atomic_classification_xneed * Dataset: task1202_atomic_classification_xneed * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 14 tokens</li><li>mean: 19.54 tokens</li><li>max: 32 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 19.41 tokens</li><li>max: 31 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 19.22 tokens</li><li>max: 28 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task400_paws_paraphrase_classification * Dataset: task400_paws_paraphrase_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 19 tokens</li><li>mean: 52.25 tokens</li><li>max: 97 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 51.75 tokens</li><li>max: 98 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 52.95 tokens</li><li>max: 97 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task502_scruples_anecdotes_whoiswrong_verification * Dataset: task502_scruples_anecdotes_whoiswrong_verification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 12 tokens</li><li>mean: 230.24 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 236.91 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 235.21 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task088_identify_typo_verification * Dataset: task088_identify_typo_verification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 11 tokens</li><li>mean: 15.12 tokens</li><li>max: 48 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 15.06 tokens</li><li>max: 47 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 15.45 tokens</li><li>max: 47 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task221_rocstories_two_choice_classification * Dataset: task221_rocstories_two_choice_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 47 tokens</li><li>mean: 72.36 tokens</li><li>max: 108 tokens</li></ul> | <ul><li>min: 48 tokens</li><li>mean: 72.48 tokens</li><li>max: 109 tokens</li></ul> | <ul><li>min: 46 tokens</li><li>mean: 73.1 tokens</li><li>max: 108 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task200_mnli_entailment_classification * Dataset: task200_mnli_entailment_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 24 tokens</li><li>mean: 72.71 tokens</li><li>max: 198 tokens</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 73.01 tokens</li><li>max: 224 tokens</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 73.39 tokens</li><li>max: 226 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task074_squad1.1_question_generation * Dataset: task074_squad1.1_question_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 30 tokens</li><li>mean: 150.13 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 33 tokens</li><li>mean: 160.24 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 38 tokens</li><li>mean: 164.44 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task581_socialiqa_question_generation * Dataset: task581_socialiqa_question_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 12 tokens</li><li>mean: 26.5 tokens</li><li>max: 69 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 25.65 tokens</li><li>max: 48 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 25.77 tokens</li><li>max: 48 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1186_nne_hrngo_classification * Dataset: task1186_nne_hrngo_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 19 tokens</li><li>mean: 33.8 tokens</li><li>max: 79 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 33.54 tokens</li><li>max: 74 tokens</li></ul> | <ul><li>min: 20 tokens</li><li>mean: 33.65 tokens</li><li>max: 77 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task898_freebase_qa_answer_generation * Dataset: task898_freebase_qa_answer_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 8 tokens</li><li>mean: 19.39 tokens</li><li>max: 125 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 17.69 tokens</li><li>max: 49 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 17.38 tokens</li><li>max: 79 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1408_dart_similarity_classification * Dataset: task1408_dart_similarity_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 22 tokens</li><li>mean: 59.5 tokens</li><li>max: 147 tokens</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 61.89 tokens</li><li>max: 154 tokens</li></ul> | <ul><li>min: 20 tokens</li><li>mean: 48.9 tokens</li><li>max: 124 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task168_strategyqa_question_decomposition * Dataset: task168_strategyqa_question_decomposition * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 42 tokens</li><li>mean: 79.99 tokens</li><li>max: 181 tokens</li></ul> | <ul><li>min: 42 tokens</li><li>mean: 79.63 tokens</li><li>max: 179 tokens</li></ul> | <ul><li>min: 42 tokens</li><li>mean: 76.6 tokens</li><li>max: 166 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1357_xlsum_summary_generation * Dataset: task1357_xlsum_summary_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 67 tokens</li><li>mean: 241.38 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 69 tokens</li><li>mean: 243.16 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 67 tokens</li><li>mean: 246.78 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task390_torque_text_span_selection * Dataset: task390_torque_text_span_selection * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 47 tokens</li><li>mean: 110.58 tokens</li><li>max: 196 tokens</li></ul> | <ul><li>min: 42 tokens</li><li>mean: 110.41 tokens</li><li>max: 195 tokens</li></ul> | <ul><li>min: 48 tokens</li><li>mean: 111.15 tokens</li><li>max: 196 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task165_mcscript_question_answering_commonsense * Dataset: task165_mcscript_question_answering_commonsense * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 147 tokens</li><li>mean: 199.7 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 145 tokens</li><li>mean: 198.04 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 147 tokens</li><li>mean: 200.11 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1533_daily_dialog_formal_classification * Dataset: task1533_daily_dialog_formal_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 13 tokens</li><li>mean: 130.14 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 136.4 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 137.09 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task002_quoref_answer_generation * Dataset: task002_quoref_answer_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:--------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 214 tokens</li><li>mean: 255.53 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 214 tokens</li><li>mean: 255.5 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 224 tokens</li><li>mean: 255.58 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1297_qasc_question_answering * Dataset: task1297_qasc_question_answering * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 61 tokens</li><li>mean: 84.44 tokens</li><li>max: 134 tokens</li></ul> | <ul><li>min: 59 tokens</li><li>mean: 85.31 tokens</li><li>max: 130 tokens</li></ul> | <ul><li>min: 58 tokens</li><li>mean: 84.94 tokens</li><li>max: 125 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task305_jeopardy_answer_generation_normal * Dataset: task305_jeopardy_answer_generation_normal * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 9 tokens</li><li>mean: 27.68 tokens</li><li>max: 59 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 27.48 tokens</li><li>max: 45 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 27.42 tokens</li><li>max: 46 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task029_winogrande_full_object * Dataset: task029_winogrande_full_object * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 7 tokens</li><li>mean: 7.38 tokens</li><li>max: 12 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 7.34 tokens</li><li>max: 11 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 7.24 tokens</li><li>max: 10 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1327_qa_zre_answer_generation_from_question * Dataset: task1327_qa_zre_answer_generation_from_question * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 24 tokens</li><li>mean: 54.88 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 52.02 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 27 tokens</li><li>mean: 56.19 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task326_jigsaw_classification_obscene * Dataset: task326_jigsaw_classification_obscene * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 5 tokens</li><li>mean: 63.85 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 76.17 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 72.28 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1542_every_ith_element_from_starting * Dataset: task1542_every_ith_element_from_starting * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 13 tokens</li><li>mean: 125.18 tokens</li><li>max: 245 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 123.56 tokens</li><li>max: 244 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 121.24 tokens</li><li>max: 238 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task570_recipe_nlg_ner_generation * Dataset: task570_recipe_nlg_ner_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 9 tokens</li><li>mean: 74.84 tokens</li><li>max: 250 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 73.97 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 76.51 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1409_dart_text_generation * Dataset: task1409_dart_text_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 18 tokens</li><li>mean: 67.5 tokens</li><li>max: 174 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 72.28 tokens</li><li>max: 170 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 67.22 tokens</li><li>max: 164 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task401_numeric_fused_head_reference * Dataset: task401_numeric_fused_head_reference * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 16 tokens</li><li>mean: 109.31 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 114.71 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 120.55 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task846_pubmedqa_classification * Dataset: task846_pubmedqa_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 32 tokens</li><li>mean: 86.22 tokens</li><li>max: 246 tokens</li></ul> | <ul><li>min: 33 tokens</li><li>mean: 85.64 tokens</li><li>max: 225 tokens</li></ul> | <ul><li>min: 28 tokens</li><li>mean: 94.03 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1712_poki_classification * Dataset: task1712_poki_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 6 tokens</li><li>mean: 53.16 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 56.97 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 63.57 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task344_hybridqa_answer_generation * Dataset: task344_hybridqa_answer_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 9 tokens</li><li>mean: 22.21 tokens</li><li>max: 50 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 21.92 tokens</li><li>max: 58 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 22.19 tokens</li><li>max: 55 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task875_emotion_classification * Dataset: task875_emotion_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 4 tokens</li><li>mean: 23.18 tokens</li><li>max: 75 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 18.52 tokens</li><li>max: 63 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 20.35 tokens</li><li>max: 68 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1214_atomic_classification_xwant * Dataset: task1214_atomic_classification_xwant * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 14 tokens</li><li>mean: 19.64 tokens</li><li>max: 32 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 19.36 tokens</li><li>max: 29 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 19.54 tokens</li><li>max: 31 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task106_scruples_ethical_judgment * Dataset: task106_scruples_ethical_judgment * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 12 tokens</li><li>mean: 30.0 tokens</li><li>max: 70 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 28.89 tokens</li><li>max: 86 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 28.73 tokens</li><li>max: 58 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task238_iirc_answer_from_passage_answer_generation * Dataset: task238_iirc_answer_from_passage_answer_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 138 tokens</li><li>mean: 242.78 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 165 tokens</li><li>mean: 242.57 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 173 tokens</li><li>mean: 243.0 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1391_winogrande_easy_answer_generation * Dataset: task1391_winogrande_easy_answer_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 26 tokens</li><li>mean: 31.63 tokens</li><li>max: 54 tokens</li></ul> | <ul><li>min: 26 tokens</li><li>mean: 31.36 tokens</li><li>max: 48 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 31.3 tokens</li><li>max: 49 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task195_sentiment140_classification * Dataset: task195_sentiment140_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 4 tokens</li><li>mean: 22.47 tokens</li><li>max: 118 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 18.84 tokens</li><li>max: 79 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 21.25 tokens</li><li>max: 51 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task163_count_words_ending_with_letter * Dataset: task163_count_words_ending_with_letter * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 28 tokens</li><li>mean: 32.05 tokens</li><li>max: 54 tokens</li></ul> | <ul><li>min: 28 tokens</li><li>mean: 31.69 tokens</li><li>max: 57 tokens</li></ul> | <ul><li>min: 28 tokens</li><li>mean: 31.58 tokens</li><li>max: 43 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task579_socialiqa_classification * Dataset: task579_socialiqa_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 39 tokens</li><li>mean: 54.11 tokens</li><li>max: 132 tokens</li></ul> | <ul><li>min: 36 tokens</li><li>mean: 53.52 tokens</li><li>max: 103 tokens</li></ul> | <ul><li>min: 40 tokens</li><li>mean: 54.12 tokens</li><li>max: 84 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task569_recipe_nlg_text_generation * Dataset: task569_recipe_nlg_text_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 25 tokens</li><li>mean: 192.16 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 55 tokens</li><li>mean: 193.74 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 37 tokens</li><li>mean: 199.11 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1602_webquestion_question_genreation * Dataset: task1602_webquestion_question_genreation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 12 tokens</li><li>mean: 23.95 tokens</li><li>max: 112 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 24.6 tokens</li><li>max: 112 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 22.6 tokens</li><li>max: 120 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task747_glucose_cause_emotion_detection * Dataset: task747_glucose_cause_emotion_detection * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 35 tokens</li><li>mean: 68.23 tokens</li><li>max: 112 tokens</li></ul> | <ul><li>min: 36 tokens</li><li>mean: 68.25 tokens</li><li>max: 108 tokens</li></ul> | <ul><li>min: 36 tokens</li><li>mean: 68.75 tokens</li><li>max: 99 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task219_rocstories_title_answer_generation * Dataset: task219_rocstories_title_answer_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 42 tokens</li><li>mean: 67.62 tokens</li><li>max: 97 tokens</li></ul> | <ul><li>min: 45 tokens</li><li>mean: 66.65 tokens</li><li>max: 97 tokens</li></ul> | <ul><li>min: 41 tokens</li><li>mean: 66.89 tokens</li><li>max: 96 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task178_quartz_question_answering * Dataset: task178_quartz_question_answering * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 28 tokens</li><li>mean: 57.96 tokens</li><li>max: 110 tokens</li></ul> | <ul><li>min: 28 tokens</li><li>mean: 57.18 tokens</li><li>max: 111 tokens</li></ul> | <ul><li>min: 28 tokens</li><li>mean: 56.74 tokens</li><li>max: 102 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task103_facts2story_long_text_generation * Dataset: task103_facts2story_long_text_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 52 tokens</li><li>mean: 80.34 tokens</li><li>max: 143 tokens</li></ul> | <ul><li>min: 51 tokens</li><li>mean: 82.24 tokens</li><li>max: 157 tokens</li></ul> | <ul><li>min: 49 tokens</li><li>mean: 78.57 tokens</li><li>max: 136 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task301_record_question_generation * Dataset: task301_record_question_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 140 tokens</li><li>mean: 210.76 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 139 tokens</li><li>mean: 209.62 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 143 tokens</li><li>mean: 209.06 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1369_healthfact_sentence_generation * Dataset: task1369_healthfact_sentence_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 110 tokens</li><li>mean: 243.14 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 101 tokens</li><li>mean: 242.95 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 113 tokens</li><li>mean: 251.89 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task515_senteval_odd_word_out * Dataset: task515_senteval_odd_word_out * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 7 tokens</li><li>mean: 19.75 tokens</li><li>max: 36 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 19.02 tokens</li><li>max: 38 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 18.93 tokens</li><li>max: 35 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task496_semeval_answer_generation * Dataset: task496_semeval_answer_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 4 tokens</li><li>mean: 28.06 tokens</li><li>max: 46 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 27.74 tokens</li><li>max: 45 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 27.69 tokens</li><li>max: 45 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1658_billsum_summarization * Dataset: task1658_billsum_summarization * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 256 tokens</li><li>mean: 256.0 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 256 tokens</li><li>mean: 256.0 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 256 tokens</li><li>mean: 256.0 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1204_atomic_classification_hinderedby * Dataset: task1204_atomic_classification_hinderedby * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 14 tokens</li><li>mean: 21.98 tokens</li><li>max: 35 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 22.01 tokens</li><li>max: 34 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 21.48 tokens</li><li>max: 38 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1392_superglue_multirc_answer_verification * Dataset: task1392_superglue_multirc_answer_verification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 128 tokens</li><li>mean: 241.47 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 127 tokens</li><li>mean: 241.68 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 136 tokens</li><li>mean: 241.8 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task306_jeopardy_answer_generation_double * Dataset: task306_jeopardy_answer_generation_double * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 10 tokens</li><li>mean: 27.73 tokens</li><li>max: 47 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 27.13 tokens</li><li>max: 46 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 27.69 tokens</li><li>max: 47 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1286_openbookqa_question_answering * Dataset: task1286_openbookqa_question_answering * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 22 tokens</li><li>mean: 39.38 tokens</li><li>max: 85 tokens</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 38.71 tokens</li><li>max: 96 tokens</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 38.22 tokens</li><li>max: 89 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task159_check_frequency_of_words_in_sentence_pair * Dataset: task159_check_frequency_of_words_in_sentence_pair * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 44 tokens</li><li>mean: 50.34 tokens</li><li>max: 67 tokens</li></ul> | <ul><li>min: 44 tokens</li><li>mean: 50.29 tokens</li><li>max: 67 tokens</li></ul> | <ul><li>min: 44 tokens</li><li>mean: 50.51 tokens</li><li>max: 66 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task151_tomqa_find_location_easy_clean * Dataset: task151_tomqa_find_location_easy_clean * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 37 tokens</li><li>mean: 50.63 tokens</li><li>max: 79 tokens</li></ul> | <ul><li>min: 37 tokens</li><li>mean: 50.35 tokens</li><li>max: 74 tokens</li></ul> | <ul><li>min: 37 tokens</li><li>mean: 50.53 tokens</li><li>max: 74 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task323_jigsaw_classification_sexually_explicit * Dataset: task323_jigsaw_classification_sexually_explicit * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 6 tokens</li><li>mean: 66.74 tokens</li><li>max: 248 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 77.15 tokens</li><li>max: 248 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 75.88 tokens</li><li>max: 251 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task037_qasc_generate_related_fact * Dataset: task037_qasc_generate_related_fact * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 13 tokens</li><li>mean: 22.02 tokens</li><li>max: 50 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 21.97 tokens</li><li>max: 42 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 21.87 tokens</li><li>max: 40 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task027_drop_answer_type_generation * Dataset: task027_drop_answer_type_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 87 tokens</li><li>mean: 229.25 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 74 tokens</li><li>mean: 230.99 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 71 tokens</li><li>mean: 232.46 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1596_event2mind_text_generation_2 * Dataset: task1596_event2mind_text_generation_2 * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 6 tokens</li><li>mean: 9.92 tokens</li><li>max: 18 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 10.0 tokens</li><li>max: 19 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 10.05 tokens</li><li>max: 18 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task141_odd-man-out_classification_category * Dataset: task141_odd-man-out_classification_category * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 16 tokens</li><li>mean: 18.45 tokens</li><li>max: 28 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 18.39 tokens</li><li>max: 26 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 18.46 tokens</li><li>max: 25 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task194_duorc_answer_generation * Dataset: task194_duorc_answer_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 149 tokens</li><li>mean: 251.91 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 147 tokens</li><li>mean: 252.15 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 148 tokens</li><li>mean: 251.93 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task679_hope_edi_english_text_classification * Dataset: task679_hope_edi_english_text_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 5 tokens</li><li>mean: 27.42 tokens</li><li>max: 199 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 26.83 tokens</li><li>max: 205 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 29.66 tokens</li><li>max: 194 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task246_dream_question_generation * Dataset: task246_dream_question_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 17 tokens</li><li>mean: 80.19 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 80.98 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 86.73 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1195_disflqa_disfluent_to_fluent_conversion * Dataset: task1195_disflqa_disfluent_to_fluent_conversion * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 9 tokens</li><li>mean: 19.8 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 19.78 tokens</li><li>max: 40 tokens</li></ul> | <ul><li>min: 2 tokens</li><li>mean: 20.34 tokens</li><li>max: 44 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task065_timetravel_consistent_sentence_classification * Dataset: task065_timetravel_consistent_sentence_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 55 tokens</li><li>mean: 79.64 tokens</li><li>max: 117 tokens</li></ul> | <ul><li>min: 51 tokens</li><li>mean: 79.21 tokens</li><li>max: 110 tokens</li></ul> | <ul><li>min: 53 tokens</li><li>mean: 79.78 tokens</li><li>max: 110 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task351_winomt_classification_gender_identifiability_anti * Dataset: task351_winomt_classification_gender_identifiability_anti * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 16 tokens</li><li>mean: 21.77 tokens</li><li>max: 30 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 21.69 tokens</li><li>max: 31 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 21.8 tokens</li><li>max: 30 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task580_socialiqa_answer_generation * Dataset: task580_socialiqa_answer_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 35 tokens</li><li>mean: 52.45 tokens</li><li>max: 107 tokens</li></ul> | <ul><li>min: 35 tokens</li><li>mean: 51.1 tokens</li><li>max: 86 tokens</li></ul> | <ul><li>min: 35 tokens</li><li>mean: 50.97 tokens</li><li>max: 87 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task583_udeps_eng_coarse_pos_tagging * Dataset: task583_udeps_eng_coarse_pos_tagging * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 12 tokens</li><li>mean: 40.78 tokens</li><li>max: 185 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 40.09 tokens</li><li>max: 185 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 40.55 tokens</li><li>max: 185 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task202_mnli_contradiction_classification * Dataset: task202_mnli_contradiction_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 24 tokens</li><li>mean: 74.1 tokens</li><li>max: 190 tokens</li></ul> | <ul><li>min: 28 tokens</li><li>mean: 76.44 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 75.12 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task222_rocstories_two_chioce_slotting_classification * Dataset: task222_rocstories_two_chioce_slotting_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 48 tokens</li><li>mean: 73.15 tokens</li><li>max: 105 tokens</li></ul> | <ul><li>min: 48 tokens</li><li>mean: 73.22 tokens</li><li>max: 100 tokens</li></ul> | <ul><li>min: 49 tokens</li><li>mean: 72.05 tokens</li><li>max: 102 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task498_scruples_anecdotes_whoiswrong_classification * Dataset: task498_scruples_anecdotes_whoiswrong_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 24 tokens</li><li>mean: 225.53 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 47 tokens</li><li>mean: 231.91 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 47 tokens</li><li>mean: 230.65 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task067_abductivenli_answer_generation * Dataset: task067_abductivenli_answer_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 14 tokens</li><li>mean: 26.79 tokens</li><li>max: 40 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 26.12 tokens</li><li>max: 42 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 26.33 tokens</li><li>max: 38 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task616_cola_classification * Dataset: task616_cola_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 5 tokens</li><li>mean: 12.79 tokens</li><li>max: 33 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 12.55 tokens</li><li>max: 33 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 12.25 tokens</li><li>max: 29 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task286_olid_offense_judgment * Dataset: task286_olid_offense_judgment * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 5 tokens</li><li>mean: 33.05 tokens</li><li>max: 145 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 31.09 tokens</li><li>max: 171 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 30.89 tokens</li><li>max: 169 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task188_snli_neutral_to_entailment_text_modification * Dataset: task188_snli_neutral_to_entailment_text_modification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 18 tokens</li><li>mean: 31.81 tokens</li><li>max: 79 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 31.16 tokens</li><li>max: 84 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 33.04 tokens</li><li>max: 84 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task223_quartz_explanation_generation * Dataset: task223_quartz_explanation_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 12 tokens</li><li>mean: 31.45 tokens</li><li>max: 68 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 31.82 tokens</li><li>max: 68 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 29.1 tokens</li><li>max: 96 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task820_protoqa_answer_generation * Dataset: task820_protoqa_answer_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 6 tokens</li><li>mean: 14.84 tokens</li><li>max: 29 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 14.52 tokens</li><li>max: 27 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 14.23 tokens</li><li>max: 29 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task196_sentiment140_answer_generation * Dataset: task196_sentiment140_answer_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 17 tokens</li><li>mean: 36.15 tokens</li><li>max: 72 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 32.89 tokens</li><li>max: 61 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 36.14 tokens</li><li>max: 72 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1678_mathqa_answer_selection * Dataset: task1678_mathqa_answer_selection * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 33 tokens</li><li>mean: 69.95 tokens</li><li>max: 177 tokens</li></ul> | <ul><li>min: 30 tokens</li><li>mean: 68.73 tokens</li><li>max: 146 tokens</li></ul> | <ul><li>min: 33 tokens</li><li>mean: 69.24 tokens</li><li>max: 160 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task349_squad2.0_answerable_unanswerable_question_classification * Dataset: task349_squad2.0_answerable_unanswerable_question_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 53 tokens</li><li>mean: 175.57 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 57 tokens</li><li>mean: 175.84 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 53 tokens</li><li>mean: 175.49 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task154_tomqa_find_location_hard_noise * Dataset: task154_tomqa_find_location_hard_noise * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 129 tokens</li><li>mean: 175.63 tokens</li><li>max: 253 tokens</li></ul> | <ul><li>min: 126 tokens</li><li>mean: 175.85 tokens</li><li>max: 249 tokens</li></ul> | <ul><li>min: 128 tokens</li><li>mean: 177.2 tokens</li><li>max: 254 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task333_hateeval_classification_hate_en * Dataset: task333_hateeval_classification_hate_en * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 8 tokens</li><li>mean: 38.62 tokens</li><li>max: 117 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 37.48 tokens</li><li>max: 109 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 36.83 tokens</li><li>max: 113 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task235_iirc_question_from_subtext_answer_generation * Dataset: task235_iirc_question_from_subtext_answer_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 14 tokens</li><li>mean: 52.54 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 50.77 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 55.44 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1554_scitail_classification * Dataset: task1554_scitail_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 7 tokens</li><li>mean: 16.72 tokens</li><li>max: 38 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 25.6 tokens</li><li>max: 68 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 24.39 tokens</li><li>max: 59 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task210_logic2text_structured_text_generation * Dataset: task210_logic2text_structured_text_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 13 tokens</li><li>mean: 31.83 tokens</li><li>max: 101 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 30.89 tokens</li><li>max: 94 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 32.76 tokens</li><li>max: 89 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task035_winogrande_question_modification_person * Dataset: task035_winogrande_question_modification_person * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 31 tokens</li><li>mean: 36.24 tokens</li><li>max: 50 tokens</li></ul> | <ul><li>min: 31 tokens</li><li>mean: 35.8 tokens</li><li>max: 55 tokens</li></ul> | <ul><li>min: 31 tokens</li><li>mean: 35.46 tokens</li><li>max: 48 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task230_iirc_passage_classification * Dataset: task230_iirc_passage_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 256 tokens</li><li>mean: 256.0 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 256 tokens</li><li>mean: 256.0 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 256 tokens</li><li>mean: 256.0 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1356_xlsum_title_generation * Dataset: task1356_xlsum_title_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 59 tokens</li><li>mean: 239.39 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 58 tokens</li><li>mean: 241.03 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 64 tokens</li><li>mean: 248.12 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1726_mathqa_correct_answer_generation * Dataset: task1726_mathqa_correct_answer_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 10 tokens</li><li>mean: 43.95 tokens</li><li>max: 156 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 42.44 tokens</li><li>max: 129 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 42.8 tokens</li><li>max: 133 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task302_record_classification * Dataset: task302_record_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 194 tokens</li><li>mean: 253.52 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 198 tokens</li><li>mean: 252.98 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 195 tokens</li><li>mean: 252.9 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task380_boolq_yes_no_question * Dataset: task380_boolq_yes_no_question * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 26 tokens</li><li>mean: 133.18 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 26 tokens</li><li>mean: 138.06 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 27 tokens</li><li>mean: 137.06 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task212_logic2text_classification * Dataset: task212_logic2text_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 14 tokens</li><li>mean: 33.56 tokens</li><li>max: 146 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 32.24 tokens</li><li>max: 146 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 33.17 tokens</li><li>max: 127 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task748_glucose_reverse_cause_event_detection * Dataset: task748_glucose_reverse_cause_event_detection * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 35 tokens</li><li>mean: 68.0 tokens</li><li>max: 105 tokens</li></ul> | <ul><li>min: 38 tokens</li><li>mean: 67.24 tokens</li><li>max: 106 tokens</li></ul> | <ul><li>min: 39 tokens</li><li>mean: 68.82 tokens</li><li>max: 105 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task834_mathdataset_classification * Dataset: task834_mathdataset_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 6 tokens</li><li>mean: 27.89 tokens</li><li>max: 83 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 28.2 tokens</li><li>max: 83 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 27.11 tokens</li><li>max: 93 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task350_winomt_classification_gender_identifiability_pro * Dataset: task350_winomt_classification_gender_identifiability_pro * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 16 tokens</li><li>mean: 21.8 tokens</li><li>max: 30 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 21.62 tokens</li><li>max: 30 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 21.81 tokens</li><li>max: 30 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task191_hotpotqa_question_generation * Dataset: task191_hotpotqa_question_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 198 tokens</li><li>mean: 255.91 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 238 tokens</li><li>mean: 255.94 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 255 tokens</li><li>mean: 256.0 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task236_iirc_question_from_passage_answer_generation * Dataset: task236_iirc_question_from_passage_answer_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:--------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 135 tokens</li><li>mean: 238.16 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 155 tokens</li><li>mean: 237.5 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 154 tokens</li><li>mean: 239.56 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task217_rocstories_ordering_answer_generation * Dataset: task217_rocstories_ordering_answer_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 45 tokens</li><li>mean: 72.48 tokens</li><li>max: 107 tokens</li></ul> | <ul><li>min: 48 tokens</li><li>mean: 72.44 tokens</li><li>max: 107 tokens</li></ul> | <ul><li>min: 48 tokens</li><li>mean: 71.11 tokens</li><li>max: 105 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task568_circa_question_generation * Dataset: task568_circa_question_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 4 tokens</li><li>mean: 9.65 tokens</li><li>max: 25 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 9.52 tokens</li><li>max: 20 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 8.98 tokens</li><li>max: 20 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task614_glucose_cause_event_detection * Dataset: task614_glucose_cause_event_detection * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 39 tokens</li><li>mean: 67.94 tokens</li><li>max: 102 tokens</li></ul> | <ul><li>min: 39 tokens</li><li>mean: 67.3 tokens</li><li>max: 106 tokens</li></ul> | <ul><li>min: 38 tokens</li><li>mean: 68.61 tokens</li><li>max: 103 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task361_spolin_yesand_prompt_response_classification * Dataset: task361_spolin_yesand_prompt_response_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 18 tokens</li><li>mean: 46.89 tokens</li><li>max: 137 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 46.11 tokens</li><li>max: 119 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 47.3 tokens</li><li>max: 128 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task421_persent_sentence_sentiment_classification * Dataset: task421_persent_sentence_sentiment_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 22 tokens</li><li>mean: 67.26 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 70.21 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 72.11 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task203_mnli_sentence_generation * Dataset: task203_mnli_sentence_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 14 tokens</li><li>mean: 38.83 tokens</li><li>max: 175 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 35.68 tokens</li><li>max: 175 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 33.77 tokens</li><li>max: 170 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task420_persent_document_sentiment_classification * Dataset: task420_persent_document_sentiment_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 22 tokens</li><li>mean: 222.98 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 233.17 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 228.48 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task153_tomqa_find_location_hard_clean * Dataset: task153_tomqa_find_location_hard_clean * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 39 tokens</li><li>mean: 161.63 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 39 tokens</li><li>mean: 160.81 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 39 tokens</li><li>mean: 164.26 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task346_hybridqa_classification * Dataset: task346_hybridqa_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 18 tokens</li><li>mean: 32.85 tokens</li><li>max: 68 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 32.03 tokens</li><li>max: 63 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 31.88 tokens</li><li>max: 75 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1211_atomic_classification_hassubevent * Dataset: task1211_atomic_classification_hassubevent * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 11 tokens</li><li>mean: 16.25 tokens</li><li>max: 31 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 16.07 tokens</li><li>max: 29 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 16.8 tokens</li><li>max: 29 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task360_spolin_yesand_response_generation * Dataset: task360_spolin_yesand_response_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 7 tokens</li><li>mean: 22.68 tokens</li><li>max: 89 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 21.02 tokens</li><li>max: 92 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 20.67 tokens</li><li>max: 67 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task510_reddit_tifu_title_summarization * Dataset: task510_reddit_tifu_title_summarization * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 9 tokens</li><li>mean: 216.21 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 20 tokens</li><li>mean: 218.0 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 221.49 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task511_reddit_tifu_long_text_summarization * Dataset: task511_reddit_tifu_long_text_summarization * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 29 tokens</li><li>mean: 239.99 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 76 tokens</li><li>mean: 239.55 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 43 tokens</li><li>mean: 244.85 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task345_hybridqa_answer_generation * Dataset: task345_hybridqa_answer_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 9 tokens</li><li>mean: 22.24 tokens</li><li>max: 50 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 21.66 tokens</li><li>max: 70 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 20.97 tokens</li><li>max: 47 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task270_csrg_counterfactual_context_generation * Dataset: task270_csrg_counterfactual_context_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 63 tokens</li><li>mean: 100.12 tokens</li><li>max: 158 tokens</li></ul> | <ul><li>min: 63 tokens</li><li>mean: 98.52 tokens</li><li>max: 142 tokens</li></ul> | <ul><li>min: 62 tokens</li><li>mean: 100.4 tokens</li><li>max: 141 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task307_jeopardy_answer_generation_final * Dataset: task307_jeopardy_answer_generation_final * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 15 tokens</li><li>mean: 29.63 tokens</li><li>max: 46 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 29.27 tokens</li><li>max: 53 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 29.25 tokens</li><li>max: 43 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task001_quoref_question_generation * Dataset: task001_quoref_question_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 201 tokens</li><li>mean: 255.1 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 99 tokens</li><li>mean: 254.46 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 173 tokens</li><li>mean: 255.11 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task089_swap_words_verification * Dataset: task089_swap_words_verification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 9 tokens</li><li>mean: 12.91 tokens</li><li>max: 28 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 12.67 tokens</li><li>max: 24 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 12.26 tokens</li><li>max: 22 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1196_atomic_classification_oeffect * Dataset: task1196_atomic_classification_oeffect * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 14 tokens</li><li>mean: 18.77 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 18.57 tokens</li><li>max: 30 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 18.5 tokens</li><li>max: 29 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task080_piqa_answer_generation * Dataset: task080_piqa_answer_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 3 tokens</li><li>mean: 10.89 tokens</li><li>max: 33 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 10.71 tokens</li><li>max: 24 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 10.16 tokens</li><li>max: 26 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1598_nyc_long_text_generation * Dataset: task1598_nyc_long_text_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 17 tokens</li><li>mean: 35.48 tokens</li><li>max: 56 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 35.6 tokens</li><li>max: 56 tokens</li></ul> | <ul><li>min: 20 tokens</li><li>mean: 36.56 tokens</li><li>max: 55 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task240_tweetqa_question_generation * Dataset: task240_tweetqa_question_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 27 tokens</li><li>mean: 51.19 tokens</li><li>max: 94 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 50.8 tokens</li><li>max: 92 tokens</li></ul> | <ul><li>min: 20 tokens</li><li>mean: 51.63 tokens</li><li>max: 95 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task615_moviesqa_answer_generation * Dataset: task615_moviesqa_answer_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 6 tokens</li><li>mean: 11.44 tokens</li><li>max: 23 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 11.45 tokens</li><li>max: 19 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 11.41 tokens</li><li>max: 22 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1347_glue_sts-b_similarity_classification * Dataset: task1347_glue_sts-b_similarity_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 17 tokens</li><li>mean: 31.16 tokens</li><li>max: 88 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 31.12 tokens</li><li>max: 92 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 31.04 tokens</li><li>max: 92 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task114_is_the_given_word_longest * Dataset: task114_is_the_given_word_longest * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 25 tokens</li><li>mean: 28.95 tokens</li><li>max: 68 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 28.46 tokens</li><li>max: 48 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 28.75 tokens</li><li>max: 47 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task292_storycommonsense_character_text_generation * Dataset: task292_storycommonsense_character_text_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 43 tokens</li><li>mean: 68.1 tokens</li><li>max: 98 tokens</li></ul> | <ul><li>min: 46 tokens</li><li>mean: 67.4 tokens</li><li>max: 104 tokens</li></ul> | <ul><li>min: 43 tokens</li><li>mean: 69.04 tokens</li><li>max: 96 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task115_help_advice_classification * Dataset: task115_help_advice_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 2 tokens</li><li>mean: 19.9 tokens</li><li>max: 91 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 18.14 tokens</li><li>max: 92 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 19.28 tokens</li><li>max: 137 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task431_senteval_object_count * Dataset: task431_senteval_object_count * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 7 tokens</li><li>mean: 16.75 tokens</li><li>max: 37 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 15.14 tokens</li><li>max: 36 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 15.78 tokens</li><li>max: 35 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1360_numer_sense_multiple_choice_qa_generation * Dataset: task1360_numer_sense_multiple_choice_qa_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 32 tokens</li><li>mean: 40.58 tokens</li><li>max: 54 tokens</li></ul> | <ul><li>min: 32 tokens</li><li>mean: 40.28 tokens</li><li>max: 53 tokens</li></ul> | <ul><li>min: 32 tokens</li><li>mean: 40.2 tokens</li><li>max: 60 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task177_para-nmt_paraphrasing * Dataset: task177_para-nmt_paraphrasing * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 8 tokens</li><li>mean: 19.73 tokens</li><li>max: 59 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 18.88 tokens</li><li>max: 58 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 18.29 tokens</li><li>max: 36 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task132_dais_text_modification * Dataset: task132_dais_text_modification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:--------------------------------------------------------------------------------|:--------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 6 tokens</li><li>mean: 9.3 tokens</li><li>max: 15 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 9.1 tokens</li><li>max: 15 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 10.14 tokens</li><li>max: 15 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task269_csrg_counterfactual_story_generation * Dataset: task269_csrg_counterfactual_story_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 49 tokens</li><li>mean: 79.75 tokens</li><li>max: 111 tokens</li></ul> | <ul><li>min: 53 tokens</li><li>mean: 79.41 tokens</li><li>max: 116 tokens</li></ul> | <ul><li>min: 48 tokens</li><li>mean: 79.46 tokens</li><li>max: 114 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task233_iirc_link_exists_classification * Dataset: task233_iirc_link_exists_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 145 tokens</li><li>mean: 235.19 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 142 tokens</li><li>mean: 233.32 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 151 tokens</li><li>mean: 234.78 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task161_count_words_containing_letter * Dataset: task161_count_words_containing_letter * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 27 tokens</li><li>mean: 31.0 tokens</li><li>max: 53 tokens</li></ul> | <ul><li>min: 27 tokens</li><li>mean: 30.83 tokens</li><li>max: 61 tokens</li></ul> | <ul><li>min: 27 tokens</li><li>mean: 30.52 tokens</li><li>max: 42 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1205_atomic_classification_isafter * Dataset: task1205_atomic_classification_isafter * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 14 tokens</li><li>mean: 20.94 tokens</li><li>max: 37 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 20.64 tokens</li><li>max: 35 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 21.51 tokens</li><li>max: 37 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task571_recipe_nlg_ner_generation * Dataset: task571_recipe_nlg_ner_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 5 tokens</li><li>mean: 117.62 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 117.51 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 109.25 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1292_yelp_review_full_text_categorization * Dataset: task1292_yelp_review_full_text_categorization * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 4 tokens</li><li>mean: 135.37 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 144.75 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 145.27 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task428_senteval_inversion * Dataset: task428_senteval_inversion * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 7 tokens</li><li>mean: 16.59 tokens</li><li>max: 32 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 14.63 tokens</li><li>max: 31 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 15.31 tokens</li><li>max: 34 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task311_race_question_generation * Dataset: task311_race_question_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 115 tokens</li><li>mean: 254.55 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 137 tokens</li><li>mean: 254.56 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 171 tokens</li><li>mean: 255.54 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task429_senteval_tense * Dataset: task429_senteval_tense * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 7 tokens</li><li>mean: 15.9 tokens</li><li>max: 37 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 14.12 tokens</li><li>max: 33 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 15.33 tokens</li><li>max: 36 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task403_creak_commonsense_inference * Dataset: task403_creak_commonsense_inference * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 13 tokens</li><li>mean: 30.04 tokens</li><li>max: 104 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 29.3 tokens</li><li>max: 108 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 29.47 tokens</li><li>max: 122 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task929_products_reviews_classification * Dataset: task929_products_reviews_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 5 tokens</li><li>mean: 69.18 tokens</li><li>max: 126 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 70.54 tokens</li><li>max: 123 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 70.28 tokens</li><li>max: 123 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task582_naturalquestion_answer_generation * Dataset: task582_naturalquestion_answer_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 10 tokens</li><li>mean: 11.69 tokens</li><li>max: 25 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 11.64 tokens</li><li>max: 24 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 11.72 tokens</li><li>max: 25 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task237_iirc_answer_from_subtext_answer_generation * Dataset: task237_iirc_answer_from_subtext_answer_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 22 tokens</li><li>mean: 66.47 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 64.67 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 61.4 tokens</li><li>max: 161 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task050_multirc_answerability * Dataset: task050_multirc_answerability * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 15 tokens</li><li>mean: 32.35 tokens</li><li>max: 112 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 31.51 tokens</li><li>max: 83 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 32.03 tokens</li><li>max: 159 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task184_break_generate_question * Dataset: task184_break_generate_question * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 13 tokens</li><li>mean: 39.76 tokens</li><li>max: 147 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 38.97 tokens</li><li>max: 149 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 39.62 tokens</li><li>max: 148 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task669_ambigqa_answer_generation * Dataset: task669_ambigqa_answer_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 10 tokens</li><li>mean: 12.91 tokens</li><li>max: 23 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 12.88 tokens</li><li>max: 27 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 12.72 tokens</li><li>max: 22 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task169_strategyqa_sentence_generation * Dataset: task169_strategyqa_sentence_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 19 tokens</li><li>mean: 35.3 tokens</li><li>max: 65 tokens</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 34.36 tokens</li><li>max: 60 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 33.36 tokens</li><li>max: 65 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task500_scruples_anecdotes_title_generation * Dataset: task500_scruples_anecdotes_title_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 14 tokens</li><li>mean: 224.51 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 31 tokens</li><li>mean: 232.39 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 27 tokens</li><li>mean: 234.4 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task241_tweetqa_classification * Dataset: task241_tweetqa_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 31 tokens</li><li>mean: 61.75 tokens</li><li>max: 92 tokens</li></ul> | <ul><li>min: 36 tokens</li><li>mean: 61.98 tokens</li><li>max: 106 tokens</li></ul> | <ul><li>min: 31 tokens</li><li>mean: 61.67 tokens</li><li>max: 92 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1345_glue_qqp_question_paraprashing * Dataset: task1345_glue_qqp_question_paraprashing * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 6 tokens</li><li>mean: 16.62 tokens</li><li>max: 60 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 15.77 tokens</li><li>max: 69 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 16.61 tokens</li><li>max: 51 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task218_rocstories_swap_order_answer_generation * Dataset: task218_rocstories_swap_order_answer_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 48 tokens</li><li>mean: 72.42 tokens</li><li>max: 118 tokens</li></ul> | <ul><li>min: 48 tokens</li><li>mean: 72.62 tokens</li><li>max: 102 tokens</li></ul> | <ul><li>min: 47 tokens</li><li>mean: 72.14 tokens</li><li>max: 106 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task613_politifact_text_generation * Dataset: task613_politifact_text_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 4 tokens</li><li>mean: 24.71 tokens</li><li>max: 75 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 23.58 tokens</li><li>max: 56 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 22.87 tokens</li><li>max: 61 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1167_penn_treebank_coarse_pos_tagging * Dataset: task1167_penn_treebank_coarse_pos_tagging * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 16 tokens</li><li>mean: 53.81 tokens</li><li>max: 200 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 53.49 tokens</li><li>max: 220 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 54.95 tokens</li><li>max: 202 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1422_mathqa_physics * Dataset: task1422_mathqa_physics * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 34 tokens</li><li>mean: 72.14 tokens</li><li>max: 164 tokens</li></ul> | <ul><li>min: 38 tokens</li><li>mean: 71.53 tokens</li><li>max: 157 tokens</li></ul> | <ul><li>min: 39 tokens</li><li>mean: 72.08 tokens</li><li>max: 155 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task247_dream_answer_generation * Dataset: task247_dream_answer_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 38 tokens</li><li>mean: 159.4 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 39 tokens</li><li>mean: 157.79 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 41 tokens</li><li>mean: 167.32 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task199_mnli_classification * Dataset: task199_mnli_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 13 tokens</li><li>mean: 43.33 tokens</li><li>max: 127 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 44.68 tokens</li><li>max: 149 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 44.31 tokens</li><li>max: 113 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task164_mcscript_question_answering_text * Dataset: task164_mcscript_question_answering_text * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 150 tokens</li><li>mean: 200.67 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 150 tokens</li><li>mean: 200.46 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 142 tokens</li><li>mean: 200.89 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1541_agnews_classification * Dataset: task1541_agnews_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 21 tokens</li><li>mean: 53.39 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 52.89 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 53.84 tokens</li><li>max: 161 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task516_senteval_conjoints_inversion * Dataset: task516_senteval_conjoints_inversion * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 8 tokens</li><li>mean: 20.31 tokens</li><li>max: 34 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 18.97 tokens</li><li>max: 34 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 18.91 tokens</li><li>max: 34 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task294_storycommonsense_motiv_text_generation * Dataset: task294_storycommonsense_motiv_text_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 14 tokens</li><li>mean: 40.09 tokens</li><li>max: 86 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 40.44 tokens</li><li>max: 86 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 39.58 tokens</li><li>max: 86 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task501_scruples_anecdotes_post_type_verification * Dataset: task501_scruples_anecdotes_post_type_verification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 18 tokens</li><li>mean: 231.44 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 235.23 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 234.84 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task213_rocstories_correct_ending_classification * Dataset: task213_rocstories_correct_ending_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 62 tokens</li><li>mean: 86.03 tokens</li><li>max: 125 tokens</li></ul> | <ul><li>min: 60 tokens</li><li>mean: 85.66 tokens</li><li>max: 131 tokens</li></ul> | <ul><li>min: 59 tokens</li><li>mean: 86.01 tokens</li><li>max: 131 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task821_protoqa_question_generation * Dataset: task821_protoqa_question_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 5 tokens</li><li>mean: 14.61 tokens</li><li>max: 61 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 14.97 tokens</li><li>max: 35 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 13.79 tokens</li><li>max: 93 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task493_review_polarity_classification * Dataset: task493_review_polarity_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 18 tokens</li><li>mean: 99.85 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 104.97 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 112.97 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task308_jeopardy_answer_generation_all * Dataset: task308_jeopardy_answer_generation_all * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 12 tokens</li><li>mean: 27.97 tokens</li><li>max: 50 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 27.0 tokens</li><li>max: 44 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 27.52 tokens</li><li>max: 48 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1595_event2mind_text_generation_1 * Dataset: task1595_event2mind_text_generation_1 * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:--------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 6 tokens</li><li>mean: 9.9 tokens</li><li>max: 18 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 9.96 tokens</li><li>max: 20 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 10.03 tokens</li><li>max: 20 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task040_qasc_question_generation * Dataset: task040_qasc_question_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 8 tokens</li><li>mean: 15.03 tokens</li><li>max: 29 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 15.04 tokens</li><li>max: 30 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 13.79 tokens</li><li>max: 32 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task231_iirc_link_classification * Dataset: task231_iirc_link_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 179 tokens</li><li>mean: 246.14 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 170 tokens</li><li>mean: 246.33 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 161 tokens</li><li>mean: 246.99 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1727_wiqa_what_is_the_effect * Dataset: task1727_wiqa_what_is_the_effect * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 44 tokens</li><li>mean: 95.04 tokens</li><li>max: 183 tokens</li></ul> | <ul><li>min: 44 tokens</li><li>mean: 95.1 tokens</li><li>max: 185 tokens</li></ul> | <ul><li>min: 43 tokens</li><li>mean: 95.37 tokens</li><li>max: 183 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task578_curiosity_dialogs_answer_generation * Dataset: task578_curiosity_dialogs_answer_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 10 tokens</li><li>mean: 230.36 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 118 tokens</li><li>mean: 235.58 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 229.92 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task310_race_classification * Dataset: task310_race_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 101 tokens</li><li>mean: 254.92 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 218 tokens</li><li>mean: 255.81 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 101 tokens</li><li>mean: 254.92 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task309_race_answer_generation * Dataset: task309_race_answer_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 75 tokens</li><li>mean: 254.76 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 204 tokens</li><li>mean: 255.48 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 75 tokens</li><li>mean: 255.23 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task379_agnews_topic_classification * Dataset: task379_agnews_topic_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 20 tokens</li><li>mean: 54.44 tokens</li><li>max: 193 tokens</li></ul> | <ul><li>min: 20 tokens</li><li>mean: 54.58 tokens</li><li>max: 175 tokens</li></ul> | <ul><li>min: 21 tokens</li><li>mean: 55.12 tokens</li><li>max: 187 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task030_winogrande_full_person * Dataset: task030_winogrande_full_person * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 7 tokens</li><li>mean: 7.63 tokens</li><li>max: 12 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 7.52 tokens</li><li>max: 12 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 7.39 tokens</li><li>max: 11 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1540_parsed_pdfs_summarization * Dataset: task1540_parsed_pdfs_summarization * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 3 tokens</li><li>mean: 188.0 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 46 tokens</li><li>mean: 189.34 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 192.03 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task039_qasc_find_overlapping_words * Dataset: task039_qasc_find_overlapping_words * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 16 tokens</li><li>mean: 30.57 tokens</li><li>max: 55 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 30.03 tokens</li><li>max: 57 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 30.68 tokens</li><li>max: 60 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1206_atomic_classification_isbefore * Dataset: task1206_atomic_classification_isbefore * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 14 tokens</li><li>mean: 21.27 tokens</li><li>max: 40 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 20.85 tokens</li><li>max: 31 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 21.37 tokens</li><li>max: 31 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task157_count_vowels_and_consonants * Dataset: task157_count_vowels_and_consonants * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 24 tokens</li><li>mean: 27.98 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 24 tokens</li><li>mean: 27.87 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 24 tokens</li><li>mean: 28.32 tokens</li><li>max: 39 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task339_record_answer_generation * Dataset: task339_record_answer_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 171 tokens</li><li>mean: 234.55 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 171 tokens</li><li>mean: 233.87 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 171 tokens</li><li>mean: 232.63 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task453_swag_answer_generation * Dataset: task453_swag_answer_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 9 tokens</li><li>mean: 18.38 tokens</li><li>max: 60 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 18.13 tokens</li><li>max: 63 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 17.47 tokens</li><li>max: 55 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task848_pubmedqa_classification * Dataset: task848_pubmedqa_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 21 tokens</li><li>mean: 249.24 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 21 tokens</li><li>mean: 249.85 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 84 tokens</li><li>mean: 251.72 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task673_google_wellformed_query_classification * Dataset: task673_google_wellformed_query_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 6 tokens</li><li>mean: 11.6 tokens</li><li>max: 27 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 11.2 tokens</li><li>max: 24 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 11.37 tokens</li><li>max: 22 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task676_ollie_relationship_answer_generation * Dataset: task676_ollie_relationship_answer_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 29 tokens</li><li>mean: 50.98 tokens</li><li>max: 113 tokens</li></ul> | <ul><li>min: 29 tokens</li><li>mean: 48.82 tokens</li><li>max: 134 tokens</li></ul> | <ul><li>min: 30 tokens</li><li>mean: 51.69 tokens</li><li>max: 113 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task268_casehold_legal_answer_generation * Dataset: task268_casehold_legal_answer_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:--------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 235 tokens</li><li>mean: 255.94 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 156 tokens</li><li>mean: 255.5 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 226 tokens</li><li>mean: 255.95 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task844_financial_phrasebank_classification * Dataset: task844_financial_phrasebank_classification * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 14 tokens</li><li>mean: 40.06 tokens</li><li>max: 86 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 38.31 tokens</li><li>max: 78 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 38.91 tokens</li><li>max: 86 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task330_gap_answer_generation * Dataset: task330_gap_answer_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 26 tokens</li><li>mean: 107.15 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 44 tokens</li><li>mean: 108.5 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 45 tokens</li><li>mean: 111.29 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task595_mocha_answer_generation * Dataset: task595_mocha_answer_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 44 tokens</li><li>mean: 94.29 tokens</li><li>max: 178 tokens</li></ul> | <ul><li>min: 21 tokens</li><li>mean: 95.79 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 117.82 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task1285_kpa_keypoint_matching * Dataset: task1285_kpa_keypoint_matching * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 30 tokens</li><li>mean: 52.19 tokens</li><li>max: 92 tokens</li></ul> | <ul><li>min: 29 tokens</li><li>mean: 50.09 tokens</li><li>max: 84 tokens</li></ul> | <ul><li>min: 31 tokens</li><li>mean: 53.0 tokens</li><li>max: 88 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task234_iirc_passage_line_answer_generation * Dataset: task234_iirc_passage_line_answer_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 143 tokens</li><li>mean: 234.48 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 155 tokens</li><li>mean: 235.32 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 146 tokens</li><li>mean: 236.21 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task494_review_polarity_answer_generation * Dataset: task494_review_polarity_answer_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 3 tokens</li><li>mean: 107.59 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 114.18 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 20 tokens</li><li>mean: 114.95 tokens</li><li>max: 249 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task670_ambigqa_question_generation * Dataset: task670_ambigqa_question_generation * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 11 tokens</li><li>mean: 12.7 tokens</li><li>max: 26 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 12.46 tokens</li><li>max: 23 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 12.24 tokens</li><li>max: 18 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### task289_gigaword_summarization * Dataset: task289_gigaword_summarization * Size: 634 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 634 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 25 tokens</li><li>mean: 51.28 tokens</li><li>max: 87 tokens</li></ul> | <ul><li>min: 27 tokens</li><li>mean: 51.71 tokens</li><li>max: 87 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 51.14 tokens</li><li>max: 87 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### npr * Dataset: npr * Size: 24,774 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 4 tokens</li><li>mean: 12.18 tokens</li><li>max: 34 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 146.68 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 109.65 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### nli * Dataset: nli * Size: 49,548 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 5 tokens</li><li>mean: 21.0 tokens</li><li>max: 229 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 11.74 tokens</li><li>max: 38 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 11.98 tokens</li><li>max: 45 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### SimpleWiki * Dataset: SimpleWiki * Size: 5,006 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 8 tokens</li><li>mean: 29.27 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 33.55 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 55.34 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### amazon_review_2018 * Dataset: amazon_review_2018 * Size: 99,032 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 4 tokens</li><li>mean: 11.29 tokens</li><li>max: 31 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 87.93 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 69.37 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### ccnews_title_text * Dataset: ccnews_title_text * Size: 24,774 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 6 tokens</li><li>mean: 15.71 tokens</li><li>max: 57 tokens</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 209.36 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 21 tokens</li><li>mean: 197.52 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### agnews * Dataset: agnews * Size: 44,606 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 5 tokens</li><li>mean: 11.84 tokens</li><li>max: 46 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 40.9 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 44.47 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### xsum * Dataset: xsum * Size: 9,948 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 8 tokens</li><li>mean: 27.96 tokens</li><li>max: 86 tokens</li></ul> | <ul><li>min: 36 tokens</li><li>mean: 227.43 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 229.78 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### msmarco * Dataset: msmarco * Size: 173,290 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:---------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 4 tokens</li><li>mean: 9.09 tokens</li><li>max: 39 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 82.25 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 79.69 tokens</li><li>max: 220 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### yahoo_answers_title_answer * Dataset: yahoo_answers_title_answer * Size: 24,774 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:---------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 6 tokens</li><li>mean: 16.8 tokens</li><li>max: 69 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 78.53 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 87.35 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### squad_pairs * Dataset: squad_pairs * Size: 24,774 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 4 tokens</li><li>mean: 14.48 tokens</li><li>max: 33 tokens</li></ul> | <ul><li>min: 28 tokens</li><li>mean: 152.39 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 28 tokens</li><li>mean: 160.54 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### wow * Dataset: wow * Size: 29,716 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 5 tokens</li><li>mean: 90.07 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 29 tokens</li><li>mean: 111.81 tokens</li><li>max: 150 tokens</li></ul> | <ul><li>min: 92 tokens</li><li>mean: 113.15 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### mteb-amazon_counterfactual-avs_triplets * Dataset: mteb-amazon_counterfactual-avs_triplets * Size: 3,991 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 12 tokens</li><li>mean: 27.26 tokens</li><li>max: 137 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 26.57 tokens</li><li>max: 96 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 26.88 tokens</li><li>max: 96 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### mteb-amazon_massive_intent-avs_triplets * Dataset: mteb-amazon_massive_intent-avs_triplets * Size: 11,405 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 3 tokens</li><li>mean: 9.49 tokens</li><li>max: 30 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 9.19 tokens</li><li>max: 32 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 9.49 tokens</li><li>max: 25 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### mteb-amazon_massive_scenario-avs_triplets * Dataset: mteb-amazon_massive_scenario-avs_triplets * Size: 11,405 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 3 tokens</li><li>mean: 9.59 tokens</li><li>max: 30 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 8.97 tokens</li><li>max: 30 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 9.69 tokens</li><li>max: 29 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### mteb-amazon_reviews_multi-avs_triplets * Dataset: mteb-amazon_reviews_multi-avs_triplets * Size: 198,000 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 6 tokens</li><li>mean: 49.83 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 51.32 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 49.66 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### mteb-banking77-avs_triplets * Dataset: mteb-banking77-avs_triplets * Size: 9,947 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 4 tokens</li><li>mean: 16.19 tokens</li><li>max: 68 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 15.76 tokens</li><li>max: 79 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 15.78 tokens</li><li>max: 87 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### mteb-emotion-avs_triplets * Dataset: mteb-emotion-avs_triplets * Size: 15,840 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 4 tokens</li><li>mean: 21.76 tokens</li><li>max: 65 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 17.13 tokens</li><li>max: 62 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 21.95 tokens</li><li>max: 65 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### mteb-imdb-avs_triplets * Dataset: mteb-imdb-avs_triplets * Size: 24,647 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 34 tokens</li><li>mean: 207.65 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 57 tokens</li><li>mean: 222.57 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 207.98 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### mteb-mtop_domain-avs_triplets * Dataset: mteb-mtop_domain-avs_triplets * Size: 15,523 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 4 tokens</li><li>mean: 10.29 tokens</li><li>max: 31 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 9.7 tokens</li><li>max: 28 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 10.01 tokens</li><li>max: 28 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### mteb-mtop_intent-avs_triplets * Dataset: mteb-mtop_intent-avs_triplets * Size: 15,523 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 4 tokens</li><li>mean: 10.11 tokens</li><li>max: 33 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 9.64 tokens</li><li>max: 34 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 10.13 tokens</li><li>max: 33 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### mteb-toxic_conversations_50k-avs_triplets * Dataset: mteb-toxic_conversations_50k-avs_triplets * Size: 49,421 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 3 tokens</li><li>mean: 68.39 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 91.3 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 70.1 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### mteb-tweet_sentiment_extraction-avs_triplets * Dataset: mteb-tweet_sentiment_extraction-avs_triplets * Size: 27,245 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 3 tokens</li><li>mean: 20.32 tokens</li><li>max: 49 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 20.24 tokens</li><li>max: 51 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 20.98 tokens</li><li>max: 51 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` #### covid-bing-query-gpt4-avs_triplets * Dataset: covid-bing-query-gpt4-avs_triplets * Size: 4,942 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 6 tokens</li><li>mean: 15.22 tokens</li><li>max: 49 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 37.46 tokens</li><li>max: 167 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 37.77 tokens</li><li>max: 128 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Evaluation Dataset #### Unnamed Dataset * Size: 18,269 evaluation samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:---------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 5 tokens</li><li>mean: 15.5 tokens</li><li>max: 56 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 143.45 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 145.01 tokens</li><li>max: 256 tokens</li></ul> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 64 - `per_device_eval_batch_size`: 64 - `learning_rate`: 2e-05 - `num_train_epochs`: 10 - `warmup_ratio`: 0.1 - `fp16`: True - `gradient_checkpointing`: True - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 64 - `per_device_eval_batch_size`: 64 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 10 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: True - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs <details><summary>Click to expand</summary> | Epoch | Step | Training Loss | Validation Loss | medi-mteb-dev_cosine_accuracy | |:------:|:------:|:-------------:|:---------------:|:-----------------------------:| | 0 | 0 | - | - | 0.8503 | | 0.0175 | 500 | 1.9411 | 1.9039 | 0.8588 | | 0.0351 | 1000 | 1.5495 | 0.9698 | 0.8698 | | 0.0526 | 1500 | 1.3527 | 0.7684 | 0.8753 | | 0.0701 | 2000 | 1.1995 | 0.7102 | 0.8777 | | 0.0877 | 2500 | 1.1782 | 0.6829 | 0.8793 | | 0.1052 | 3000 | 1.1662 | 0.6633 | 0.8830 | | 0.1227 | 3500 | 1.139 | 0.6510 | 0.8844 | | 0.1403 | 4000 | 1.1389 | 0.6429 | 0.8851 | | 0.1578 | 4500 | 1.1381 | 0.6273 | 0.8863 | | 0.1753 | 5000 | 1.0616 | 0.6225 | 0.8869 | | 0.1929 | 5500 | 1.114 | 0.6169 | 0.8872 | | 0.2104 | 6000 | 0.9854 | 0.6108 | 0.8886 | | 0.2279 | 6500 | 1.081 | 0.6047 | 0.8900 | | 0.2455 | 7000 | 0.9899 | 0.5983 | 0.8912 | | 0.2630 | 7500 | 1.0551 | 0.5931 | 0.8921 | | 0.2805 | 8000 | 1.0515 | 0.5882 | 0.8930 | | 0.2981 | 8500 | 1.0384 | 0.5768 | 0.8946 | | 0.3156 | 9000 | 1.0545 | 0.5716 | 0.8945 | | 0.3331 | 9500 | 1.006 | 0.5744 | 0.8959 | | 0.3507 | 10000 | 0.9629 | 0.5719 | 0.8960 | | 0.3682 | 10500 | 1.0877 | 0.5600 | 0.8958 | | 0.3857 | 11000 | 1.0594 | 0.5639 | 0.8975 | | 0.4033 | 11500 | 1.0708 | 0.5672 | 0.8975 | | 0.4208 | 12000 | 1.0275 | 0.5481 | 0.8986 | | 0.4383 | 12500 | 0.9467 | 0.5552 | 0.9007 | | 0.4559 | 13000 | 1.0048 | 0.5524 | 0.9008 | | 0.4734 | 13500 | 1.0135 | 0.5482 | 0.9002 | | 0.4909 | 14000 | 0.9579 | 0.5428 | 0.9002 | | 0.5085 | 14500 | 0.9534 | 0.5373 | 0.9015 | | 0.5260 | 15000 | 0.9225 | 0.5347 | 0.9025 | | 0.5435 | 15500 | 0.9936 | 0.5384 | 0.9011 | | 0.5611 | 16000 | 0.926 | 0.5298 | 0.9028 | | 0.5786 | 16500 | 0.9904 | 0.5338 | 0.9034 | | 0.5961 | 17000 | 0.9302 | 0.5281 | 0.9033 | | 0.6137 | 17500 | 0.908 | 0.5332 | 0.9025 | | 0.6312 | 18000 | 0.8936 | 0.5322 | 0.9046 | | 0.6487 | 18500 | 0.9549 | 0.5312 | 0.9039 | | 0.6663 | 19000 | 0.9498 | 0.5319 | 0.9030 | | 0.6838 | 19500 | 0.9291 | 0.5279 | 0.9038 | | 0.7013 | 20000 | 0.9573 | 0.5165 | 0.9017 | | 0.7189 | 20500 | 0.9395 | 0.5223 | 0.9039 | | 0.7364 | 21000 | 0.8753 | 0.5335 | 0.9009 | | 0.7539 | 21500 | 0.95 | 0.5173 | 0.9040 | | 0.7715 | 22000 | 0.9656 | 0.5451 | 0.9043 | | 0.7890 | 22500 | 0.9145 | 0.5305 | 0.9033 | | 0.8065 | 23000 | 0.9768 | 0.5135 | 0.9041 | | 0.8241 | 23500 | 0.8779 | 0.5185 | 0.9037 | | 0.8416 | 24000 | 0.9603 | 0.5338 | 0.9036 | | 0.8591 | 24500 | 0.9045 | 0.5090 | 0.9056 | | 0.8767 | 25000 | 0.9536 | 0.5254 | 0.9043 | | 0.8942 | 25500 | 0.8499 | 0.5388 | 0.9023 | | 0.9117 | 26000 | 0.88 | 0.5676 | 0.9011 | | 0.9293 | 26500 | 0.8884 | 0.5127 | 0.9046 | | 0.9468 | 27000 | 0.8556 | 0.5227 | 0.9065 | | 0.9643 | 27500 | 0.8641 | 0.5901 | 0.9027 | | 0.9819 | 28000 | 0.8884 | 0.4982 | 0.9054 | | 0.9994 | 28500 | 0.8404 | 0.5078 | 0.9064 | | 1.0169 | 29000 | 0.8613 | 0.5211 | 0.9052 | | 1.0345 | 29500 | 0.8971 | 0.5061 | 0.9065 | | 1.0520 | 30000 | 0.9426 | 0.5118 | 0.9062 | | 1.0695 | 30500 | 0.8791 | 0.5062 | 0.9062 | | 1.0871 | 31000 | 0.8953 | 0.5056 | 0.9044 | | 1.1046 | 31500 | 0.9229 | 0.5002 | 0.9065 | | 1.1221 | 32000 | 0.8914 | 0.4912 | 0.9088 | | 1.1397 | 32500 | 0.9105 | 0.4973 | 0.9086 | | 1.1572 | 33000 | 0.9168 | 0.4954 | 0.9074 | | 1.1747 | 33500 | 0.845 | 0.5073 | 0.9088 | | 1.1923 | 34000 | 0.9209 | 0.4890 | 0.9088 | | 1.2098 | 34500 | 0.8014 | 0.5063 | 0.9063 | | 1.2273 | 35000 | 0.8888 | 0.5270 | 0.9070 | | 1.2449 | 35500 | 0.8269 | 0.5062 | 0.9059 | | 1.2624 | 36000 | 0.8637 | 0.4951 | 0.9054 | | 1.2799 | 36500 | 0.8796 | 0.4922 | 0.9083 | | 1.2975 | 37000 | 0.8644 | 0.4851 | 0.9068 | | 1.3150 | 37500 | 0.8907 | 0.5396 | 0.9069 | | 1.3325 | 38000 | 0.8477 | 0.4944 | 0.9082 | | 1.3501 | 38500 | 0.8237 | 0.4915 | 0.9081 | | 1.3676 | 39000 | 0.9217 | 0.4918 | 0.9083 | | 1.3851 | 39500 | 0.887 | 0.4955 | 0.9064 | | 1.4027 | 40000 | 0.9172 | 0.5259 | 0.9077 | | 1.4202 | 40500 | 0.8693 | 0.5002 | 0.9092 | | 1.4377 | 41000 | 0.8223 | 0.5109 | 0.9084 | | 1.4553 | 41500 | 0.8554 | 0.4859 | 0.9079 | | 1.4728 | 42000 | 0.8772 | 0.4850 | 0.9079 | | 1.4903 | 42500 | 0.8232 | 0.4860 | 0.9088 | | 1.5079 | 43000 | 0.8218 | 0.4917 | 0.9083 | | 1.5254 | 43500 | 0.7905 | 0.4839 | 0.9094 | | 1.5429 | 44000 | 0.847 | 0.5150 | 0.9081 | | 1.5605 | 44500 | 0.7929 | 0.5234 | 0.9082 | | 1.5780 | 45000 | 0.8621 | 0.5084 | 0.9094 | | 1.5955 | 45500 | 0.7908 | 0.4980 | 0.9092 | | 1.6131 | 46000 | 0.792 | 0.5385 | 0.9071 | | 1.6306 | 46500 | 0.7569 | 0.5405 | 0.9088 | | 1.6481 | 47000 | 0.8178 | 0.5172 | 0.9078 | | 1.6657 | 47500 | 0.8101 | 0.5379 | 0.9082 | | 1.6832 | 48000 | 0.8013 | 0.5627 | 0.9068 | | 1.7007 | 48500 | 0.8298 | 0.5947 | 0.9072 | | 1.7183 | 49000 | 0.8028 | 0.5302 | 0.9076 | | 1.7358 | 49500 | 0.7663 | 0.5523 | 0.9066 | | 1.7533 | 50000 | 0.8255 | 0.5361 | 0.9080 | | 1.7709 | 50500 | 0.8354 | 0.5373 | 0.9080 | | 1.7884 | 51000 | 0.7917 | 0.5546 | 0.9079 | | 1.8059 | 51500 | 0.837 | 0.5113 | 0.9085 | | 1.8235 | 52000 | 0.7488 | 0.5037 | 0.9082 | | 1.8410 | 52500 | 0.8439 | 0.5349 | 0.9084 | | 1.8585 | 53000 | 0.7688 | 0.5279 | 0.9083 | | 1.8761 | 53500 | 0.8205 | 0.5496 | 0.9071 | | 1.8936 | 54000 | 0.7256 | 0.5454 | 0.9075 | | 1.9111 | 54500 | 0.7536 | 0.5582 | 0.9060 | | 1.9287 | 55000 | 0.7544 | 0.5331 | 0.9075 | | 1.9462 | 55500 | 0.7332 | 0.5139 | 0.9091 | | 1.9637 | 56000 | 0.7244 | 0.5767 | 0.9078 | | 1.9813 | 56500 | 0.7574 | 0.4962 | 0.9084 | | 1.9988 | 57000 | 0.7116 | 0.5210 | 0.9090 | | 2.0163 | 57500 | 0.7376 | 0.5196 | 0.9088 | | 2.0339 | 58000 | 0.768 | 0.5609 | 0.9086 | | 2.0514 | 58500 | 0.8056 | 0.5230 | 0.9081 | | 2.0689 | 59000 | 0.7744 | 0.5527 | 0.9077 | | 2.0865 | 59500 | 0.7543 | 0.4949 | 0.9090 | | 2.1040 | 60000 | 0.8 | 0.4925 | 0.9095 | | 2.1215 | 60500 | 0.7664 | 0.4989 | 0.9093 | | 2.1391 | 61000 | 0.7849 | 0.4956 | 0.9106 | | 2.1566 | 61500 | 0.7955 | 0.5312 | 0.9099 | | 2.1741 | 62000 | 0.7326 | 0.5126 | 0.9112 | | 2.1917 | 62500 | 0.7975 | 0.4701 | 0.9114 | | 2.2092 | 63000 | 0.7001 | 0.5118 | 0.9093 | | 2.2267 | 63500 | 0.7477 | 0.5371 | 0.9102 | | 2.2443 | 64000 | 0.7227 | 0.5536 | 0.9083 | | 2.2618 | 64500 | 0.7687 | 0.5174 | 0.9102 | | 2.2793 | 65000 | 0.7633 | 0.4925 | 0.9102 | | 2.2969 | 65500 | 0.7572 | 0.5059 | 0.9093 | | 2.3144 | 66000 | 0.7846 | 0.5391 | 0.9088 | | 2.3319 | 66500 | 0.7434 | 0.4991 | 0.9111 | | 2.3495 | 67000 | 0.7124 | 0.5115 | 0.9107 | | 2.3670 | 67500 | 0.8085 | 0.4974 | 0.9086 | | 2.3845 | 68000 | 0.7879 | 0.5114 | 0.9089 | | 2.4021 | 68500 | 0.7977 | 0.5297 | 0.9086 | | 2.4196 | 69000 | 0.782 | 0.5251 | 0.9103 | | 2.4371 | 69500 | 0.7237 | 0.5568 | 0.9088 | | 2.4547 | 70000 | 0.7556 | 0.5008 | 0.9098 | | 2.4722 | 70500 | 0.777 | 0.4784 | 0.9097 | | 2.4897 | 71000 | 0.7205 | 0.4993 | 0.9097 | | 2.5073 | 71500 | 0.7237 | 0.5096 | 0.9102 | | 2.5248 | 72000 | 0.6976 | 0.4833 | 0.9107 | | 2.5423 | 72500 | 0.7572 | 0.5234 | 0.9092 | | 2.5599 | 73000 | 0.7012 | 0.5339 | 0.9096 | | 2.5774 | 73500 | 0.7799 | 0.5056 | 0.9107 | | 2.5949 | 74000 | 0.7036 | 0.4961 | 0.9101 | | 2.6125 | 74500 | 0.6932 | 0.5656 | 0.9088 | | 2.6300 | 75000 | 0.6676 | 0.5347 | 0.9097 | | 2.6475 | 75500 | 0.7246 | 0.5110 | 0.9101 | | 2.6651 | 76000 | 0.715 | 0.5551 | 0.9096 | | 2.6826 | 76500 | 0.7298 | 0.5658 | 0.9106 | | 2.7001 | 77000 | 0.7349 | 0.5571 | 0.9106 | | 2.7177 | 77500 | 0.721 | 0.5667 | 0.9100 | | 2.7352 | 78000 | 0.6863 | 0.5616 | 0.9066 | | 2.7527 | 78500 | 0.739 | 0.5419 | 0.9101 | | 2.7703 | 79000 | 0.7529 | 0.5343 | 0.9107 | | 2.7878 | 79500 | 0.7008 | 0.5601 | 0.9107 | | 2.8053 | 80000 | 0.7655 | 0.5189 | 0.9097 | | 2.8229 | 80500 | 0.6666 | 0.5073 | 0.9106 | | 2.8404 | 81000 | 0.7551 | 0.5381 | 0.9102 | | 2.8579 | 81500 | 0.6769 | 0.5650 | 0.9092 | | 2.8755 | 82000 | 0.7508 | 0.5189 | 0.9097 | | 2.8930 | 82500 | 0.6418 | 0.5521 | 0.9094 | | 2.9105 | 83000 | 0.6808 | 0.5490 | 0.9095 | | 2.9281 | 83500 | 0.6833 | 0.5524 | 0.9092 | | 2.9456 | 84000 | 0.6508 | 0.5229 | 0.9105 | | 2.9631 | 84500 | 0.6576 | 0.5789 | 0.9100 | | 2.9807 | 85000 | 0.6778 | 0.5075 | 0.9108 | | 2.9982 | 85500 | 0.642 | 0.5139 | 0.9107 | | 3.0157 | 86000 | 0.6596 | 0.5337 | 0.9104 | | 3.0333 | 86500 | 0.6769 | 0.5713 | 0.9106 | | 3.0508 | 87000 | 0.7349 | 0.5374 | 0.9103 | | 3.0683 | 87500 | 0.7034 | 0.5680 | 0.9094 | | 3.0859 | 88000 | 0.6853 | 0.5130 | 0.9106 | | 3.1034 | 88500 | 0.726 | 0.5093 | 0.9123 | | 3.1209 | 89000 | 0.6939 | 0.5078 | 0.9104 | | 3.1385 | 89500 | 0.7085 | 0.4847 | 0.9125 | | 3.1560 | 90000 | 0.7118 | 0.5154 | 0.9113 | | 3.1735 | 90500 | 0.6755 | 0.5066 | 0.9121 | | 3.1911 | 91000 | 0.718 | 0.4665 | 0.9129 | | 3.2086 | 91500 | 0.6277 | 0.5047 | 0.9111 | | 3.2261 | 92000 | 0.6907 | 0.5292 | 0.9123 | | 3.2437 | 92500 | 0.6624 | 0.5414 | 0.9103 | | 3.2612 | 93000 | 0.6943 | 0.5274 | 0.9101 | | 3.2787 | 93500 | 0.6979 | 0.4985 | 0.9110 | | 3.2963 | 94000 | 0.6858 | 0.5156 | 0.9099 | | 3.3138 | 94500 | 0.7221 | 0.5062 | 0.9114 | | 3.3313 | 95000 | 0.6647 | 0.5129 | 0.9108 | | 3.3489 | 95500 | 0.6572 | 0.5213 | 0.9127 | | 3.3664 | 96000 | 0.7417 | 0.4926 | 0.9119 | | 3.3839 | 96500 | 0.7237 | 0.5090 | 0.9104 | | 3.4015 | 97000 | 0.7218 | 0.5336 | 0.9111 | | 3.4190 | 97500 | 0.7091 | 0.5062 | 0.9128 | | 3.4365 | 98000 | 0.668 | 0.5727 | 0.9118 | | 3.4541 | 98500 | 0.6724 | 0.5106 | 0.9119 | | 3.4716 | 99000 | 0.7331 | 0.4740 | 0.9130 | | 3.4891 | 99500 | 0.6427 | 0.5021 | 0.9119 | | 3.5067 | 100000 | 0.6659 | 0.5037 | 0.9119 | | 3.5242 | 100500 | 0.6413 | 0.5024 | 0.9109 | | 3.5417 | 101000 | 0.6889 | 0.5277 | 0.9109 | | 3.5593 | 101500 | 0.6401 | 0.5389 | 0.9103 | | 3.5768 | 102000 | 0.7116 | 0.5114 | 0.9111 | | 3.5943 | 102500 | 0.6511 | 0.5124 | 0.9112 | | 3.6119 | 103000 | 0.6392 | 0.5505 | 0.9096 | | 3.6294 | 103500 | 0.6049 | 0.5306 | 0.9099 | | 3.6469 | 104000 | 0.675 | 0.5219 | 0.9098 | | 3.6645 | 104500 | 0.6498 | 0.5392 | 0.9100 | | 3.6820 | 105000 | 0.6774 | 0.5609 | 0.9097 | | 3.6995 | 105500 | 0.6655 | 0.5441 | 0.9107 | | 3.7171 | 106000 | 0.6664 | 0.5713 | 0.9113 | | 3.7346 | 106500 | 0.6343 | 0.5742 | 0.9086 | | 3.7521 | 107000 | 0.6686 | 0.5225 | 0.9113 | | 3.7697 | 107500 | 0.7018 | 0.5221 | 0.9111 | | 3.7872 | 108000 | 0.6479 | 0.5641 | 0.9113 | | 3.8047 | 108500 | 0.7005 | 0.5352 | 0.9123 | | 3.8223 | 109000 | 0.6068 | 0.5007 | 0.9107 | | 3.8398 | 109500 | 0.6846 | 0.5593 | 0.9102 | | 3.8573 | 110000 | 0.6272 | 0.5458 | 0.9107 | | 3.8749 | 110500 | 0.685 | 0.5178 | 0.9100 | | 3.8924 | 111000 | 0.5992 | 0.5200 | 0.9102 | | 3.9099 | 111500 | 0.6231 | 0.5488 | 0.9101 | | 3.9275 | 112000 | 0.6343 | 0.5496 | 0.9100 | | 3.9450 | 112500 | 0.593 | 0.5207 | 0.9115 | | 3.9625 | 113000 | 0.6017 | 0.5679 | 0.9108 | | 3.9801 | 113500 | 0.6218 | 0.5174 | 0.9113 | | 3.9976 | 114000 | 0.5916 | 0.5108 | 0.9118 | | 4.0151 | 114500 | 0.603 | 0.5259 | 0.9117 | | 4.0327 | 115000 | 0.6215 | 0.5362 | 0.9121 | | 4.0502 | 115500 | 0.6784 | 0.5343 | 0.9112 | | 4.0677 | 116000 | 0.65 | 0.5488 | 0.9114 | | 4.0853 | 116500 | 0.632 | 0.4905 | 0.9119 | | 4.1028 | 117000 | 0.6708 | 0.5091 | 0.9129 | | 4.1203 | 117500 | 0.6374 | 0.5228 | 0.9124 | | 4.1379 | 118000 | 0.6593 | 0.4976 | 0.9125 | | 4.1554 | 118500 | 0.649 | 0.5151 | 0.9109 | | 4.1729 | 119000 | 0.629 | 0.5303 | 0.9124 | | 4.1905 | 119500 | 0.6709 | 0.4868 | 0.9121 | | 4.2080 | 120000 | 0.5803 | 0.5177 | 0.9130 | | 4.2255 | 120500 | 0.6356 | 0.5329 | 0.9140 | | 4.2431 | 121000 | 0.6075 | 0.5057 | 0.9129 | | 4.2606 | 121500 | 0.6463 | 0.5084 | 0.9126 | | 4.2781 | 122000 | 0.6408 | 0.4859 | 0.9127 | | 4.2957 | 122500 | 0.6331 | 0.5210 | 0.9114 | | 4.3132 | 123000 | 0.6719 | 0.4893 | 0.9122 | | 4.3308 | 123500 | 0.6227 | 0.5126 | 0.9129 | | 4.3483 | 124000 | 0.6144 | 0.5293 | 0.9136 | | 4.3658 | 124500 | 0.6589 | 0.4978 | 0.9127 | | 4.3834 | 125000 | 0.6849 | 0.5195 | 0.9122 | | 4.4009 | 125500 | 0.6731 | 0.5150 | 0.9119 | | 4.4184 | 126000 | 0.658 | 0.4890 | 0.9136 | | 4.4360 | 126500 | 0.6256 | 0.5271 | 0.9134 | | 4.4535 | 127000 | 0.6295 | 0.5182 | 0.9129 | | 4.4710 | 127500 | 0.6804 | 0.4870 | 0.9133 | | 4.4886 | 128000 | 0.5868 | 0.4831 | 0.9129 | | 4.5061 | 128500 | 0.6316 | 0.4963 | 0.9135 | | 4.5236 | 129000 | 0.5873 | 0.5179 | 0.9149 | | 4.5412 | 129500 | 0.6383 | 0.5188 | 0.9126 | | 4.5587 | 130000 | 0.5936 | 0.5420 | 0.9117 | | 4.5762 | 130500 | 0.654 | 0.5248 | 0.9123 | | 4.5938 | 131000 | 0.6172 | 0.5067 | 0.9130 | | 4.6113 | 131500 | 0.5766 | 0.5335 | 0.9117 | | 4.6288 | 132000 | 0.5688 | 0.5345 | 0.9106 | | 4.6464 | 132500 | 0.6254 | 0.5352 | 0.9115 | | 4.6639 | 133000 | 0.5978 | 0.5244 | 0.9117 | | 4.6814 | 133500 | 0.6332 | 0.5511 | 0.9119 | | 4.6990 | 134000 | 0.6209 | 0.5356 | 0.9120 | | 4.7165 | 134500 | 0.6166 | 0.5532 | 0.9125 | | 4.7340 | 135000 | 0.5897 | 0.5888 | 0.9105 | | 4.7516 | 135500 | 0.624 | 0.5153 | 0.9123 | | 4.7691 | 136000 | 0.6563 | 0.5260 | 0.9134 | | 4.7866 | 136500 | 0.6098 | 0.5603 | 0.9122 | | 4.8042 | 137000 | 0.6313 | 0.5390 | 0.9124 | | 4.8217 | 137500 | 0.5737 | 0.5093 | 0.9129 | | 4.8392 | 138000 | 0.6475 | 0.5320 | 0.9114 | | 4.8568 | 138500 | 0.5752 | 0.5531 | 0.9120 | | 4.8743 | 139000 | 0.6378 | 0.4997 | 0.9114 | | 4.8918 | 139500 | 0.5641 | 0.5121 | 0.9120 | | 4.9094 | 140000 | 0.5771 | 0.5343 | 0.9114 | | 4.9269 | 140500 | 0.5869 | 0.5277 | 0.9124 | | 4.9444 | 141000 | 0.5417 | 0.5105 | 0.9143 | | 4.9620 | 141500 | 0.5517 | 0.5664 | 0.9133 | | 4.9795 | 142000 | 0.589 | 0.5326 | 0.9122 | | 4.9970 | 142500 | 0.5449 | 0.5236 | 0.9136 | | 5.0146 | 143000 | 0.5687 | 0.5217 | 0.9141 | | 5.0321 | 143500 | 0.5815 | 0.5520 | 0.9131 | | 5.0496 | 144000 | 0.6309 | 0.5290 | 0.9125 | | 5.0672 | 144500 | 0.6086 | 0.5305 | 0.9128 | | 5.0847 | 145000 | 0.5905 | 0.5044 | 0.9135 | | 5.1022 | 145500 | 0.6242 | 0.5113 | 0.9144 | | 5.1198 | 146000 | 0.603 | 0.5263 | 0.9137 | | 5.1373 | 146500 | 0.6187 | 0.5086 | 0.9131 | | 5.1548 | 147000 | 0.6007 | 0.5291 | 0.9136 | | 5.1724 | 147500 | 0.5934 | 0.5113 | 0.9131 | | 5.1899 | 148000 | 0.6208 | 0.4981 | 0.9142 | | 5.2074 | 148500 | 0.5524 | 0.5414 | 0.9146 | | 5.2250 | 149000 | 0.5941 | 0.5274 | 0.9146 | | 5.2425 | 149500 | 0.5694 | 0.5315 | 0.9140 | | 5.2600 | 150000 | 0.6045 | 0.5177 | 0.9138 | | 5.2776 | 150500 | 0.5928 | 0.4923 | 0.9146 | | 5.2951 | 151000 | 0.594 | 0.5209 | 0.9138 | | 5.3126 | 151500 | 0.6303 | 0.5014 | 0.9137 | | 5.3302 | 152000 | 0.5867 | 0.5151 | 0.9135 | | 5.3477 | 152500 | 0.5686 | 0.5244 | 0.9142 | | 5.3652 | 153000 | 0.6198 | 0.5063 | 0.9140 | | 5.3828 | 153500 | 0.6458 | 0.5403 | 0.9131 | | 5.4003 | 154000 | 0.6284 | 0.4988 | 0.9140 | | 5.4178 | 154500 | 0.6192 | 0.5008 | 0.9143 | | 5.4354 | 155000 | 0.5943 | 0.5334 | 0.9134 | | 5.4529 | 155500 | 0.5725 | 0.5270 | 0.9141 | | 5.4704 | 156000 | 0.656 | 0.4985 | 0.9146 | | 5.4880 | 156500 | 0.5562 | 0.4863 | 0.9137 | | 5.5055 | 157000 | 0.5888 | 0.5099 | 0.9141 | | 5.5230 | 157500 | 0.5329 | 0.5039 | 0.9149 | | 5.5406 | 158000 | 0.619 | 0.5232 | 0.9136 | | 5.5581 | 158500 | 0.5528 | 0.5471 | 0.9135 | | 5.5756 | 159000 | 0.6086 | 0.5226 | 0.9125 | | 5.5932 | 159500 | 0.5895 | 0.5072 | 0.9132 | | 5.6107 | 160000 | 0.5358 | 0.5419 | 0.9139 | | 5.6282 | 160500 | 0.5438 | 0.5334 | 0.9121 | | 5.6458 | 161000 | 0.579 | 0.5548 | 0.9118 | | 5.6633 | 161500 | 0.5636 | 0.5257 | 0.9127 | | 5.6808 | 162000 | 0.5984 | 0.5520 | 0.9136 | | 5.6984 | 162500 | 0.581 | 0.5314 | 0.9135 | | 5.7159 | 163000 | 0.5923 | 0.5665 | 0.9132 | | 5.7334 | 163500 | 0.5433 | 0.5717 | 0.9121 | | 5.7510 | 164000 | 0.583 | 0.5338 | 0.9137 | | 5.7685 | 164500 | 0.6272 | 0.5275 | 0.9137 | | 5.7860 | 165000 | 0.576 | 0.5657 | 0.9130 | | 5.8036 | 165500 | 0.5983 | 0.5457 | 0.9131 | | 5.8211 | 166000 | 0.5389 | 0.5252 | 0.9141 | | 5.8386 | 166500 | 0.6035 | 0.5478 | 0.9131 | | 5.8562 | 167000 | 0.5398 | 0.5334 | 0.9136 | | 5.8737 | 167500 | 0.5986 | 0.5021 | 0.9136 | | 5.8912 | 168000 | 0.5383 | 0.5261 | 0.9137 | | 5.9088 | 168500 | 0.5376 | 0.5374 | 0.9128 | | 5.9263 | 169000 | 0.5555 | 0.5375 | 0.9136 | | 5.9438 | 169500 | 0.5182 | 0.5230 | 0.9137 | | 5.9614 | 170000 | 0.5175 | 0.5653 | 0.9143 | | 5.9789 | 170500 | 0.5572 | 0.5433 | 0.9141 | | 5.9964 | 171000 | 0.5169 | 0.5035 | 0.9151 | | 6.0140 | 171500 | 0.5336 | 0.5178 | 0.9149 | | 6.0315 | 172000 | 0.5479 | 0.5427 | 0.9141 | | 6.0490 | 172500 | 0.5885 | 0.5417 | 0.9137 | | 6.0666 | 173000 | 0.5694 | 0.5232 | 0.9138 | | 6.0841 | 173500 | 0.5634 | 0.5074 | 0.9142 | | 6.1016 | 174000 | 0.5888 | 0.5102 | 0.9145 | | 6.1192 | 174500 | 0.576 | 0.5225 | 0.9148 | | 6.1367 | 175000 | 0.5843 | 0.5161 | 0.9144 | | 6.1542 | 175500 | 0.5635 | 0.5244 | 0.9141 | | 6.1718 | 176000 | 0.5666 | 0.5088 | 0.9149 | | 6.1893 | 176500 | 0.5868 | 0.5185 | 0.9150 | | 6.2068 | 177000 | 0.5211 | 0.5348 | 0.9154 | | 6.2244 | 177500 | 0.5672 | 0.5268 | 0.9150 | | 6.2419 | 178000 | 0.5286 | 0.5431 | 0.9141 | | 6.2594 | 178500 | 0.5723 | 0.5359 | 0.9154 | | 6.2770 | 179000 | 0.5648 | 0.5016 | 0.9154 | | 6.2945 | 179500 | 0.5566 | 0.5200 | 0.9145 | | 6.3120 | 180000 | 0.6074 | 0.5132 | 0.9145 | | 6.3296 | 180500 | 0.5473 | 0.5294 | 0.9145 | | 6.3471 | 181000 | 0.5325 | 0.5380 | 0.9150 | | 6.3646 | 181500 | 0.5868 | 0.5243 | 0.9149 | | 6.3822 | 182000 | 0.6155 | 0.5368 | 0.9143 | | 6.3997 | 182500 | 0.5944 | 0.4978 | 0.9149 | | 6.4172 | 183000 | 0.5838 | 0.5224 | 0.9146 | | 6.4348 | 183500 | 0.5644 | 0.5384 | 0.9146 | | 6.4523 | 184000 | 0.5471 | 0.5549 | 0.9152 | | 6.4698 | 184500 | 0.6198 | 0.5101 | 0.9147 | | 6.4874 | 185000 | 0.5304 | 0.5016 | 0.9152 | | 6.5049 | 185500 | 0.5621 | 0.5076 | 0.9155 | | 6.5224 | 186000 | 0.5027 | 0.5085 | 0.9148 | | 6.5400 | 186500 | 0.5882 | 0.5293 | 0.9147 | | 6.5575 | 187000 | 0.5228 | 0.5374 | 0.9152 | | 6.5750 | 187500 | 0.5717 | 0.5233 | 0.9140 | | 6.5926 | 188000 | 0.5651 | 0.5269 | 0.9136 | | 6.6101 | 188500 | 0.5182 | 0.5328 | 0.9140 | | 6.6276 | 189000 | 0.508 | 0.5250 | 0.9134 | | 6.6452 | 189500 | 0.5464 | 0.5427 | 0.9128 | | 6.6627 | 190000 | 0.5362 | 0.5137 | 0.9136 | | 6.6802 | 190500 | 0.5732 | 0.5161 | 0.9148 | | 6.6978 | 191000 | 0.5466 | 0.5416 | 0.9136 | | 6.7153 | 191500 | 0.5501 | 0.5736 | 0.9137 | | 6.7328 | 192000 | 0.5258 | 0.5528 | 0.9130 | | 6.7504 | 192500 | 0.5589 | 0.5380 | 0.9142 | | 6.7679 | 193000 | 0.5947 | 0.5297 | 0.9148 | | 6.7854 | 193500 | 0.5579 | 0.5590 | 0.9145 | | 6.8030 | 194000 | 0.5644 | 0.5412 | 0.9142 | | 6.8205 | 194500 | 0.5128 | 0.5181 | 0.9137 | | 6.8380 | 195000 | 0.5802 | 0.5451 | 0.9136 | | 6.8556 | 195500 | 0.5002 | 0.5293 | 0.9144 | | 6.8731 | 196000 | 0.5763 | 0.5153 | 0.9140 | | 6.8906 | 196500 | 0.5205 | 0.5261 | 0.9144 | | 6.9082 | 197000 | 0.5112 | 0.5342 | 0.9149 | | 6.9257 | 197500 | 0.523 | 0.5503 | 0.9140 | | 6.9432 | 198000 | 0.4875 | 0.5420 | 0.9148 | | 6.9608 | 198500 | 0.4963 | 0.5638 | 0.9142 | | 6.9783 | 199000 | 0.5327 | 0.5536 | 0.9149 | | 6.9958 | 199500 | 0.4822 | 0.5224 | 0.9141 | | 7.0134 | 200000 | 0.5078 | 0.5300 | 0.9140 | | 7.0309 | 200500 | 0.5208 | 0.5486 | 0.9149 | | 7.0484 | 201000 | 0.5641 | 0.5442 | 0.9148 | | 7.0660 | 201500 | 0.5484 | 0.5165 | 0.9143 | | 7.0835 | 202000 | 0.5289 | 0.5206 | 0.9142 | | 7.1010 | 202500 | 0.557 | 0.5178 | 0.9146 | | 7.1186 | 203000 | 0.556 | 0.5190 | 0.9147 | | 7.1361 | 203500 | 0.5567 | 0.5244 | 0.9143 | | 7.1536 | 204000 | 0.5376 | 0.5212 | 0.9148 | | 7.1712 | 204500 | 0.5448 | 0.5138 | 0.9150 | | 7.1887 | 205000 | 0.5541 | 0.5231 | 0.9155 | | 7.2062 | 205500 | 0.5006 | 0.5261 | 0.9155 | | 7.2238 | 206000 | 0.5366 | 0.5184 | 0.9159 | | 7.2413 | 206500 | 0.5127 | 0.5360 | 0.9148 | | 7.2588 | 207000 | 0.5469 | 0.5225 | 0.9148 | | 7.2764 | 207500 | 0.5414 | 0.5080 | 0.9152 | | 7.2939 | 208000 | 0.5361 | 0.5135 | 0.9151 | | 7.3114 | 208500 | 0.5833 | 0.5132 | 0.9147 | | 7.3290 | 209000 | 0.515 | 0.5282 | 0.9137 | | 7.3465 | 209500 | 0.5165 | 0.5362 | 0.9154 | | 7.3640 | 210000 | 0.5551 | 0.5327 | 0.9159 | | 7.3816 | 210500 | 0.5845 | 0.5409 | 0.9143 | | 7.3991 | 211000 | 0.5798 | 0.5057 | 0.9147 | | 7.4166 | 211500 | 0.5614 | 0.5275 | 0.9149 | | 7.4342 | 212000 | 0.5445 | 0.5175 | 0.9153 | | 7.4517 | 212500 | 0.5175 | 0.5424 | 0.9139 | | 7.4692 | 213000 | 0.6043 | 0.5075 | 0.9148 | | 7.4868 | 213500 | 0.5051 | 0.5067 | 0.9154 | | 7.5043 | 214000 | 0.5337 | 0.5143 | 0.9153 | | 7.5218 | 214500 | 0.4822 | 0.5049 | 0.9156 | | 7.5394 | 215000 | 0.5722 | 0.5359 | 0.9153 | | 7.5569 | 215500 | 0.5014 | 0.5306 | 0.9147 | | 7.5744 | 216000 | 0.5441 | 0.5222 | 0.9138 | | 7.5920 | 216500 | 0.5391 | 0.5261 | 0.9138 | | 7.6095 | 217000 | 0.494 | 0.5275 | 0.9144 | | 7.6270 | 217500 | 0.4881 | 0.5268 | 0.9141 | | 7.6446 | 218000 | 0.5263 | 0.5381 | 0.9138 | | 7.6621 | 218500 | 0.5017 | 0.5209 | 0.9134 | | 7.6796 | 219000 | 0.5566 | 0.5347 | 0.9138 | | 7.6972 | 219500 | 0.5201 | 0.5519 | 0.9135 | | 7.7147 | 220000 | 0.5269 | 0.5718 | 0.9143 | | 7.7322 | 220500 | 0.5125 | 0.5442 | 0.9135 | | 7.7498 | 221000 | 0.5307 | 0.5292 | 0.9142 | | 7.7673 | 221500 | 0.5718 | 0.5179 | 0.9140 | | 7.7848 | 222000 | 0.5345 | 0.5512 | 0.9147 | | 7.8024 | 222500 | 0.5456 | 0.5447 | 0.9143 | | 7.8199 | 223000 | 0.4889 | 0.5197 | 0.9144 | | 7.8374 | 223500 | 0.5532 | 0.5487 | 0.9146 | | 7.8550 | 224000 | 0.4902 | 0.5257 | 0.9137 | | 7.8725 | 224500 | 0.5535 | 0.5095 | 0.9135 | | 7.8900 | 225000 | 0.4988 | 0.5404 | 0.9141 | | 7.9076 | 225500 | 0.4883 | 0.5280 | 0.9143 | | 7.9251 | 226000 | 0.4975 | 0.5458 | 0.9133 | | 7.9426 | 226500 | 0.4698 | 0.5357 | 0.9147 | | 7.9602 | 227000 | 0.4831 | 0.5391 | 0.9143 | | 7.9777 | 227500 | 0.5073 | 0.5492 | 0.9148 | | 7.9952 | 228000 | 0.4637 | 0.5140 | 0.9148 | | 8.0128 | 228500 | 0.4817 | 0.5200 | 0.9137 | | 8.0303 | 229000 | 0.5078 | 0.5370 | 0.9146 | | 8.0478 | 229500 | 0.5342 | 0.5497 | 0.9149 | | 8.0654 | 230000 | 0.5317 | 0.5179 | 0.9156 | | 8.0829 | 230500 | 0.5074 | 0.5286 | 0.9151 | | 8.1004 | 231000 | 0.5302 | 0.5165 | 0.9162 | | 8.1180 | 231500 | 0.5481 | 0.5200 | 0.9163 | | 8.1355 | 232000 | 0.538 | 0.5216 | 0.9161 | | 8.1530 | 232500 | 0.5168 | 0.5189 | 0.9152 | | 8.1706 | 233000 | 0.5118 | 0.5195 | 0.9153 | | 8.1881 | 233500 | 0.5394 | 0.5192 | 0.9155 | | 8.2056 | 234000 | 0.488 | 0.5100 | 0.9153 | | 8.2232 | 234500 | 0.5214 | 0.5162 | 0.9161 | | 8.2407 | 235000 | 0.4944 | 0.5343 | 0.9149 | | 8.2582 | 235500 | 0.5226 | 0.5190 | 0.9152 | | 8.2758 | 236000 | 0.5234 | 0.5146 | 0.9159 | | 8.2933 | 236500 | 0.5165 | 0.5011 | 0.9153 | | 8.3108 | 237000 | 0.5599 | 0.5129 | 0.9152 | | 8.3284 | 237500 | 0.4991 | 0.5212 | 0.9154 | | 8.3459 | 238000 | 0.5007 | 0.5383 | 0.9148 | | 8.3634 | 238500 | 0.5406 | 0.5394 | 0.9154 | | 8.3810 | 239000 | 0.5606 | 0.5445 | 0.9147 | | 8.3985 | 239500 | 0.5626 | 0.5143 | 0.9149 | | 8.4160 | 240000 | 0.5353 | 0.5338 | 0.9156 | | 8.4336 | 240500 | 0.5168 | 0.5208 | 0.9158 | | 8.4511 | 241000 | 0.5058 | 0.5312 | 0.9146 | | 8.4686 | 241500 | 0.5919 | 0.5143 | 0.9149 | | 8.4862 | 242000 | 0.4883 | 0.5149 | 0.9159 | | 8.5037 | 242500 | 0.5072 | 0.5132 | 0.9156 | | 8.5212 | 243000 | 0.4655 | 0.5111 | 0.9148 | | 8.5388 | 243500 | 0.5592 | 0.5269 | 0.9155 | | 8.5563 | 244000 | 0.4836 | 0.5217 | 0.9152 | | 8.5738 | 244500 | 0.5299 | 0.5269 | 0.9143 | | 8.5914 | 245000 | 0.5081 | 0.5206 | 0.9136 | | 8.6089 | 245500 | 0.48 | 0.5159 | 0.9144 | | 8.6264 | 246000 | 0.4713 | 0.5272 | 0.9141 | | 8.6440 | 246500 | 0.5038 | 0.5287 | 0.9139 | | 8.6615 | 247000 | 0.4872 | 0.5199 | 0.9142 | | 8.6790 | 247500 | 0.5429 | 0.5227 | 0.9138 | | 8.6966 | 248000 | 0.5042 | 0.5402 | 0.9136 | | 8.7141 | 248500 | 0.511 | 0.5530 | 0.9141 | | 8.7316 | 249000 | 0.5097 | 0.5374 | 0.9131 | | 8.7492 | 249500 | 0.4974 | 0.5312 | 0.9138 | | 8.7667 | 250000 | 0.5617 | 0.5381 | 0.9148 | | 8.7842 | 250500 | 0.5234 | 0.5476 | 0.9150 | | 8.8018 | 251000 | 0.5133 | 0.5447 | 0.9147 | | 8.8193 | 251500 | 0.488 | 0.5270 | 0.9148 | | 8.8368 | 252000 | 0.5377 | 0.5325 | 0.9144 | | 8.8544 | 252500 | 0.479 | 0.5324 | 0.9145 | | 8.8719 | 253000 | 0.5329 | 0.5200 | 0.9140 | | 8.8894 | 253500 | 0.4744 | 0.5346 | 0.9140 | | 8.9070 | 254000 | 0.4827 | 0.5333 | 0.9145 | | 8.9245 | 254500 | 0.4757 | 0.5415 | 0.9139 | | 8.9420 | 255000 | 0.4504 | 0.5307 | 0.9147 | | 8.9596 | 255500 | 0.4657 | 0.5337 | 0.9146 | | 8.9771 | 256000 | 0.4976 | 0.5473 | 0.9150 | | 8.9946 | 256500 | 0.459 | 0.5214 | 0.9144 | | 9.0122 | 257000 | 0.4615 | 0.5296 | 0.9147 | | 9.0297 | 257500 | 0.5019 | 0.5312 | 0.9149 | | 9.0472 | 258000 | 0.5142 | 0.5379 | 0.9152 | | 9.0648 | 258500 | 0.5174 | 0.5197 | 0.9150 | | 9.0823 | 259000 | 0.4896 | 0.5277 | 0.9155 | | 9.0998 | 259500 | 0.5114 | 0.5240 | 0.9161 | | 9.1174 | 260000 | 0.529 | 0.5293 | 0.9155 | | 9.1349 | 260500 | 0.5305 | 0.5242 | 0.9157 | | 9.1524 | 261000 | 0.4941 | 0.5160 | 0.9155 | | 9.1700 | 261500 | 0.5025 | 0.5274 | 0.9153 | | 9.1875 | 262000 | 0.5148 | 0.5198 | 0.9155 | | 9.2050 | 262500 | 0.4882 | 0.5116 | 0.9160 | | 9.2226 | 263000 | 0.4964 | 0.5139 | 0.9155 | | 9.2401 | 263500 | 0.4792 | 0.5284 | 0.9153 | | 9.2576 | 264000 | 0.5089 | 0.5175 | 0.9154 | | 9.2752 | 264500 | 0.5124 | 0.5188 | 0.9154 | | 9.2927 | 265000 | 0.4968 | 0.5153 | 0.9152 | | 9.3102 | 265500 | 0.5454 | 0.5129 | 0.9152 | | 9.3278 | 266000 | 0.4858 | 0.5209 | 0.9147 | | 9.3453 | 266500 | 0.4822 | 0.5257 | 0.9148 | | 9.3628 | 267000 | 0.5343 | 0.5298 | 0.9148 | | 9.3804 | 267500 | 0.5443 | 0.5303 | 0.9145 | | 9.3979 | 268000 | 0.546 | 0.5204 | 0.9153 | | 9.4154 | 268500 | 0.5253 | 0.5326 | 0.9154 | | 9.4330 | 269000 | 0.5062 | 0.5270 | 0.9154 | | 9.4505 | 269500 | 0.4901 | 0.5284 | 0.9150 | | 9.4680 | 270000 | 0.5675 | 0.5271 | 0.9154 | | 9.4856 | 270500 | 0.4831 | 0.5263 | 0.9152 | | 9.5031 | 271000 | 0.4873 | 0.5256 | 0.9152 | | 9.5206 | 271500 | 0.4576 | 0.5208 | 0.9155 | | 9.5382 | 272000 | 0.5392 | 0.5250 | 0.9154 | | 9.5557 | 272500 | 0.4716 | 0.5238 | 0.9158 | | 9.5732 | 273000 | 0.5202 | 0.5282 | 0.9156 | | 9.5908 | 273500 | 0.5036 | 0.5284 | 0.9149 | | 9.6083 | 274000 | 0.4645 | 0.5216 | 0.9151 | | 9.6258 | 274500 | 0.4683 | 0.5273 | 0.9154 | | 9.6434 | 275000 | 0.4881 | 0.5307 | 0.9154 | | 9.6609 | 275500 | 0.4677 | 0.5234 | 0.9155 | | 9.6784 | 276000 | 0.54 | 0.5212 | 0.9153 | | 9.6960 | 276500 | 0.4948 | 0.5277 | 0.9150 | | 9.7135 | 277000 | 0.5008 | 0.5293 | 0.9150 | | 9.7310 | 277500 | 0.4907 | 0.5307 | 0.9147 | | 9.7486 | 278000 | 0.4876 | 0.5276 | 0.9144 | | 9.7661 | 278500 | 0.539 | 0.5324 | 0.9145 | | 9.7836 | 279000 | 0.5147 | 0.5325 | 0.9145 | | 9.8012 | 279500 | 0.5095 | 0.5367 | 0.9150 | | 9.8187 | 280000 | 0.476 | 0.5333 | 0.9147 | | 9.8362 | 280500 | 0.5189 | 0.5325 | 0.9150 | | 9.8538 | 281000 | 0.4633 | 0.5342 | 0.9149 | | 9.8713 | 281500 | 0.5199 | 0.5314 | 0.9146 | | 9.8888 | 282000 | 0.4645 | 0.5312 | 0.9151 | | 9.9064 | 282500 | 0.4702 | 0.5339 | 0.9151 | | 9.9239 | 283000 | 0.4609 | 0.5362 | 0.9151 | | 9.9414 | 283500 | 0.4365 | 0.5340 | 0.9152 | | 9.9590 | 284000 | 0.4587 | 0.5339 | 0.9152 | | 9.9765 | 284500 | 0.4861 | 0.5355 | 0.9153 | | 9.9940 | 285000 | 0.4473 | 0.5352 | 0.9153 | </details> ### Framework Versions - Python: 3.10.10 - Sentence Transformers: 3.4.0.dev0 - Transformers: 4.46.3 - PyTorch: 2.5.1+cu124 - Accelerate: 0.34.2 - Datasets: 2.21.0 - Tokenizers: 0.20.3 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
[ "PCR", "PUBMEDQA", "SCIFACT", "SCIQ", "SCITAIL" ]
ProdeusUnity/Midnight-Miqu-70B-v1.5-Safetensorsfix
ProdeusUnity
text-generation
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2311.03099", "base_model:migtissera/Tess-70B-v1.6", "base_model:merge:migtissera/Tess-70B-v1.6", "base_model:sophosympatheia/Midnight-Miqu-70B-v1.0", "base_model:merge:sophosympatheia/Midnight-Miqu-70B-v1.0", "license:other", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-12-01T04:07:21Z
2024-12-01T04:14:51+00:00
14
0
--- base_model: - sophosympatheia/Midnight-Miqu-70B-v1.0 - migtissera/Tess-70B-v1.6 library_name: transformers license: other tags: - mergekit - merge model-index: - name: Midnight-Miqu-70B-v1.5 results: - task: type: text-generation name: Text Generation dataset: name: IFEval (0-Shot) type: HuggingFaceH4/ifeval args: num_few_shot: 0 metrics: - type: inst_level_strict_acc and prompt_level_strict_acc value: 61.18 name: strict accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=sophosympatheia/Midnight-Miqu-70B-v1.5 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: BBH (3-Shot) type: BBH args: num_few_shot: 3 metrics: - type: acc_norm value: 38.54 name: normalized accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=sophosympatheia/Midnight-Miqu-70B-v1.5 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MATH Lvl 5 (4-Shot) type: hendrycks/competition_math args: num_few_shot: 4 metrics: - type: exact_match value: 2.42 name: exact match source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=sophosympatheia/Midnight-Miqu-70B-v1.5 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GPQA (0-shot) type: Idavidrein/gpqa args: num_few_shot: 0 metrics: - type: acc_norm value: 6.15 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=sophosympatheia/Midnight-Miqu-70B-v1.5 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MuSR (0-shot) type: TAUR-Lab/MuSR args: num_few_shot: 0 metrics: - type: acc_norm value: 11.65 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=sophosympatheia/Midnight-Miqu-70B-v1.5 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU-PRO (5-shot) type: TIGER-Lab/MMLU-Pro config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 31.39 name: accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=sophosympatheia/Midnight-Miqu-70B-v1.5 name: Open LLM Leaderboard --- <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/Tn9MBg6.png" alt="MidnightMiqu" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> ### Overview Looking for the 103B version? You can get it from [FluffyKaeloky/Midnight-Miqu-103B-v1.5](https://huggingface.co/FluffyKaeloky/Midnight-Miqu-103B-v1.5). This is a DARE Linear merge between [sophosympatheia/Midnight-Miqu-70B-v1.0](https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.0) and [migtissera/Tess-70B-v1.6](https://huggingface.co/migtissera/Tess-70B-v1.6). This version is close in feel and performance to Midnight Miqu v1.0 but I think it picked up some goodness from Tess. Their EQ Bench scores are virtually the same and their post-EXL2 quant perplexity scores were the same too. However, Midnight Miqu v1.5 passes some tests I use that Midnight Miqu v1.0 fails, without sacrificing writing quality. This model is uncensored. *You are responsible for whatever you do with it.* This model was designed for roleplaying and storytelling and I think it does well at both. It may also perform well at other tasks but I have not tested its performance in other areas. ### Long Context Tips You can run this model out to 32K context with alpha_rope set to 1, just like with Miqu. ### Sampler Tips * I recommend using Quadratic Sampling (i.e. smoothing factor) for creative work. I think this version performs best with a smoothing factor close to 0.2. * I recommend using Min-P. Experiment to find your best setting. * You can enable dynamic temperature if you want, but that adds yet another variable to consider and I find it's unnecessary with you're already using Min-P and smoothing factor. * You don't need to use a high repetition penalty with this model, such as going above 1.10, but experiment with it. Experiment with any and all of the settings below! What suits my preferences may not suit yours. If you save the below settings as a .json file, you can import them directly into Silly Tavern. ``` { "temp": 1, "temperature_last": true, "top_p": 1, "top_k": 0, "top_a": 0, "tfs": 1, "epsilon_cutoff": 0, "eta_cutoff": 0, "typical_p": 1, "min_p": 0.12, "rep_pen": 1.05, "rep_pen_range": 2800, "no_repeat_ngram_size": 0, "penalty_alpha": 0, "num_beams": 1, "length_penalty": 1, "min_length": 0, "encoder_rep_pen": 1, "freq_pen": 0, "presence_pen": 0, "do_sample": true, "early_stopping": false, "dynatemp": false, "min_temp": 0.8, "max_temp": 1.35, "dynatemp_exponent": 1, "smoothing_factor": 0.23, "add_bos_token": true, "truncation_length": 2048, "ban_eos_token": false, "skip_special_tokens": true, "streaming": true, "mirostat_mode": 0, "mirostat_tau": 2, "mirostat_eta": 0.1, "guidance_scale": 1, "negative_prompt": "", "grammar_string": "", "banned_tokens": "", "ignore_eos_token_aphrodite": false, "spaces_between_special_tokens_aphrodite": true, "sampler_order": [ 6, 0, 1, 3, 4, 2, 5 ], "logit_bias": [], "n": 1, "rep_pen_size": 0, "genamt": 500, "max_length": 32764 } ``` ### Prompting Tips Try the following context template for use in SillyTavern. It might help, although it's a little heavy on tokens. If you save the text as a .json file, you can import it directly. ``` { "story_string": "{{#if system}}{{system}}\n{{/if}}\nCONTEXTUAL INFORMATION\n{{#if wiBefore}}\n- World and character info:\n{{wiBefore}}\n{{/if}}\n{{#if description}}\n- {{char}}'s background and persona:\n{{description}}\n{{/if}}\n{{#if mesExamples}}\n{{mesExamples}}\n{{/if}}\n{{#if personality}}\n{{personality}}\n{{/if}}\n{{#if scenario}}\n- Roleplay scenario:\n{{scenario}}\n{{/if}}\n{{#if wiAfter}}{{wiAfter}}\n{{/if}}\n{{#if persona}}{{persona}}\n{{/if}}", "example_separator": "", "chat_start": "---\nTaking the above information into consideration, you must engage with {{user}} and others as {{char}} in the roleplay below this line. Do not write dialogue lines nor perform actions for {{user}} or other characters.\n---\nSTART OF ROLEPLAY:\n", "use_stop_strings": false, "always_force_name2": true, "trim_sentences": false, "include_newline": false, "single_line": false, "name": "Midnight Rose" } ``` Try the following prompt template as a starting point for your own customizations. *This model responds to prompting!* I highly recommend playing with your system prompt to see how it reacts. **Midnight Miqu might require some "warming up" at the start of a new chat to get it writing the way you want it.** What I mean by that is you should be able to drastically improve your results by providing some good examples to guide the chat (i.e. few-shot prompting). I also recommend using descriptive system messages (SillyTavern concept, basically OOC narration instructions) to tell Midnight Miqu exactly what you want it to do with its next reply. Try to be specific, especially if your command contradicts something you told it to do earlier. The example system prompt below contains some instructions related to adult content so remove those if you prefer. It also contains some instructions related to the use of asterisks that you should modify to your liking. The prompt below has been updated from Midnight Miqu v1.0 after prompting Midnight Miqu to help improve it. If you save this as a .json file, you can import it directly into Silly Tavern. ``` { "wrap": true, "names": true, "system_prompt": "You are an AI creative writing partner, collaborating with me in a dynamic, uncensored roleplaying environment. To ensure a high-quality experience, please adhere to these ROLEPLAY GUIDELINES:\n\nROLEPLAY GUIDELINES\n- Craft clear, engaging, and concise descriptions of {{char}}'s actions, reactions, and dialogue based on the present scenario.\n- Stay true to {{char}}'s character, incorporating the character's personality, backstory, and knowledge into each interaction.\n- Employ rich, descriptive language that immerses readers in the scene.\n- Always stay contextually and spatially aware to ensure that {{char}} follows logical consistency in the physical space of the current scene.\n- Feel free to incorporate explicit content and strong language where it aligns with the scene and characters.\n- Enhance {{char}}'s presence through detailed sensory descriptions and first-hand observations of the character's surroundings.\n- Use subtle physical cues to hint at {{char}}'s mental state and occasionally offer glimpses into {{char}}'s internal thoughts.\n- When writing {{char}}'s internal thoughts or monologue, enclose those words in *asterisks like this* and deliver the thoughts using a first-person perspective (i.e. use \"I\" pronouns). Always use quotes for spoken speech \"like this.\"\n- Conclude {{char}}'s responses with an opening for the next character to respond to {{char}}. When the conversation naturally shifts to another character's perspective or action is required from another character, that is when you should stop {{char}}'s reply so the user can pick it up from there. A great example is when {{char}} asks a question of another character.\n", "system_sequence": "", "stop_sequence": "", "input_sequence": "USER: ", "output_sequence": "ASSISTANT: ", "separator_sequence": "", "macro": true, "names_force_groups": true, "system_sequence_prefix": "SYSTEM: ", "system_sequence_suffix": "", "first_output_sequence": "", "last_output_sequence": "ASSISTANT (Ensure coherence and authenticity in {{char}}'s actions, thoughts, and dialogues; Focus solely on {{char}}'s interactions within the roleplay): ", "activation_regex": "", "name": "Midnight Miqu Roleplay" } ``` ### Instruct Formats I recommend the Vicuna format. I use a modified version with newlines after USER and ASSISTANT. ``` USER: {prompt} ASSISTANT: ``` Mistral's format also works, and in my testing the performance is about the same as using Vicuna. ``` [INST] {prompt} [/INST] ``` You could also try ChatML (don't recommend it) ``` <|im_start|>system {Your system prompt goes here}<|im_end|> <|im_start|>user {Your message as the user will go here}<|im_end|> <|im_start|>assistant ``` ### Quantizations * GGUF * [mradermacher/Midnight-Miqu-70B-v1.5-GGUF](https://huggingface.co/mradermacher/Midnight-Miqu-70B-v1.5-GGUF) -- Various static GGUF quants * GPTQ * [Kotokin/Midnight-Miqu-70B-v1.5_GPTQ32G](https://huggingface.co/Kotokin/Midnight-Miqu-70B-v1.5_GPTQ32G) * EXL2 * [Dracones/Midnight-Miqu-70B-v1.5_exl2_4.0bpw](https://huggingface.co/Dracones/Midnight-Miqu-70B-v1.5_exl2_4.0bpw) * [Dracones/Midnight-Miqu-70B-v1.5_exl2_4.5bpw](https://huggingface.co/Dracones/Midnight-Miqu-70B-v1.5_exl2_4.5bpw) * [Dracones/Midnight-Miqu-70B-v1.5_exl2_5.0bpw](https://huggingface.co/Dracones/Midnight-Miqu-70B-v1.5_exl2_5.0bpw) * [Dracones/Midnight-Miqu-70B-v1.5_exl2_6.0bpw](https://huggingface.co/Dracones/Midnight-Miqu-70B-v1.5_exl2_6.0bpw) * If you don't see something you're looking for, [try searching Hugging Face](https://huggingface.co/models?search=midnight-miqu-70b-v1.5). There may be newer quants available than what I've documented here. ### Licence and usage restrictions <font color="red">152334H/miqu-1-70b-sf was based on a leaked version of one of Mistral's models.</font> All miqu-derived models, including this merge, are **only suitable for personal use.** Mistral has been cool about it so far, but you should be aware that by downloading this merge you are assuming whatever legal risk is inherent in acquiring and using a model based on leaked weights. This merge comes with no warranties or guarantees of any kind, but you probably already knew that. I am not a lawyer and I do not profess to know what we have gotten ourselves into here. You should consult with a lawyer before using any Hugging Face model beyond private use... but definitely don't use this one for that! ## Merge Details ### Merge Method This model was merged using the linear [DARE](https://arxiv.org/abs/2311.03099) merge method using [152334H_miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) as a base. ### Models Merged The following models were included in the merge: * [sophosympatheia/Midnight-Miqu-70B-v1.0](https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.0) * [migtissera/Tess-70B-v1.6](https://huggingface.co/migtissera/Tess-70B-v1.6) ### Configuration The following YAML configuration was used to produce this model: ```yaml merge_method: dare_linear base_model: /home/llm/mergequant/models/BASE/152334H_miqu-1-70b-sf # base model models: - model: /home/llm/mergequant/models/midnight-miqu-70b-v1.0 - model: /home/llm/mergequant/models/BASE/Tess-70B-v1.6 parameters: weight: 1.0 dtype: float16 ``` ### Notes I tried several methods of merging Midnight Miqu v1.0 with Tess v1.6, and this dare_linear approach worked the best by far. I tried the same approach with other Miqu finetunes like ShinojiResearch/Senku-70B-Full and abideen/Liberated-Miqu-70B, but there was a huge difference in performance. The merge with Tess was the best one. I also tried the SLERP approach I used to create Midnight Miqu v1.0, only using Tess instead of 152334H_miqu-1-70b in that config, and that result was nowhere near as good either. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_sophosympatheia__Midnight-Miqu-70B-v1.5) | Metric |Value| |-------------------|----:| |Avg. |25.22| |IFEval (0-Shot) |61.18| |BBH (3-Shot) |38.54| |MATH Lvl 5 (4-Shot)| 2.42| |GPQA (0-shot) | 6.15| |MuSR (0-shot) |11.65| |MMLU-PRO (5-shot) |31.39|
[ "CRAFT" ]
AlekseyCalvin/Kupreyanov_Style_FluxLoRA_var2_onDeDist_bySilverAgePoets
AlekseyCalvin
text-to-image
[ "diffusers", "flux", "lora", "replicate", "woodcut", "avantgarde", "SilverAgePoets", "Soviet", "art-style", "image-generation", "flux-diffusers", "dedistilled", "de-distilled", "DrawThings", "PEFT", "photo", "realism", "Surrealism", "illustration", "experimental", "character", "historical person", "poetry", "literature", "history", "archival", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:apache-2.0", "region:us" ]
2024-12-01T06:41:58Z
2024-12-01T12:54:57+00:00
14
0
--- base_model: black-forest-labs/FLUX.1-dev language: - en library_name: diffusers license: apache-2.0 pipeline_tag: text-to-image tags: - flux - diffusers - lora - replicate - woodcut - avantgarde - SilverAgePoets - Soviet - art-style - image-generation - flux-diffusers - dedistilled - de-distilled - DrawThings - PEFT - photo - realism - Surrealism - illustration - experimental - character - historical person - poetry - literature - history - archival emoji: 🔜 instance_prompt: Nikolay Kupreyanov woodcut etching style illustration widget: - text: 'Nikolay Kupreyanov woodcut etching style illustration of a rebelling cat in a suit screaming at monstrous mutant policemen during a street battle in San Francisco, red title text at the top: "END EMPIRE! END DEAD ERAS'' RX!"' output: url: images/example_8y4363h7m.png - text: 'Nikolay Kupreyanov woodcut etching style illustration of rebelling proletarians bringing down the pillars of global Capital, upon which sits a bulging massive many limbed chimeric monster with the crown of a tzar, a fat flabby body, many limbs armed with deadly weapons, a beastly head, and a serpent''s tail. Yet, the rebels prevail, and though many among them are massacred, some succeed un shattering the foundations of Capital and its ghastly overlord! Caption on the stone monument being destroyed reads: “CAPITAL"' output: url: images/example_pjzem0a6q.png - text: 'Nikolay Kupreyanov style woodcut etching illustration art of a female Soviet perestoika era colorful anarchist punk poet in leather jacket with patches walking thru Leningrad and saying, in a word bubble: "... by the PORCH … a PUDDLE where the STAR COLLAPSED ! ... ", best quality, elaborate details, crisp detailed background, on the ground a fallen medal decoration of a golden Soviet star had fallen into a dirty puddle by a wooden porch of a house' output: url: images/example_mm5hd1qwz.png - text: 'Nikolay Kupreyanov woodcut etching style illustration of two rebelling proletarians a young androgynous revolutionary sailor on the left and a Ukrainian Bolshevik cossack on the right tying up and capturing a flabby green malicious top-hatted monster with bloody hands whose name is "CAPITAL": title text atop' output: url: images/example_njrns7amb.png - text: 'Nikolay Kupreyanov style woodcut etching illustration art. The artwork features strong contrasts and dynamic lines typical of Kupreyanov’s style. This artwork depicts a woman reading a book, set against an industrial background with smokestacks and cranes. The poster contains bold text phrases: "RESOURCE KNOWLEDGE!" and, in speech balloon, the text: “COMRADES, PRESERVE LIBRARIES!" Kupreyanov''s distinct geometric illustrative style, dynamic and with strong contrasts, indicative of early 20th-century Soviet agitprop. The woman wears a headscarf and simple clothing, standing on cobblestones.' output: url: images/example_s5caz06y5.png - text: Nikolay Kupreyanov style woodcut etching illustration art. A cat eating a dolphin. Barbed wire fence backdrop. Woodcut etching. Avant garden. Constructivism. Suprematism. An illustration in Nikolay Kupreyanov's style depicting a dynamic, surreal scene. A figure appears in an abstract environment featuring stylized waves and geometric patterns. In the background, there are outlines of a ship and circular elements resembling tunnels or ripples. The composition is predominantly black and white, with intricate shading and bold lines, creating a sense of motion and energy. output: url: images/example_if4g3sun1.png - text: Nikolay Kupreyanov style woodcut etching illustration art. A black and white intimate illustration in the style of Nikolay Kupreyanov, depicting an avant-garde woman ironing textiles on a table, undressed, stylized. The composition includes a chair and a vintage oil lamp, with scattered buttons and scissors on the floor. The artwork is dated 1921, featuring strong contrasts and geometric shapes typical of early 20th-century printmaking styles. output: url: images/example_ts2zxqxpf.png - text: Nikolay Kupreyanov style woodcut etching illustration art. The artwork features strong contrasts and dynamic lines typical of Kupreyanov’s style. A portrait of a man playing cards. The figure has exaggerated, angular features, and the scene is illuminated by a single candle on a table. A glass sits nearby, and a backdrop of vertical lines suggests books or a striped pattern. The artwork features strong contrasts and dynamic lines typical of Kupreyanov’snes typical of Kupreyanov’s style. output: url: images/example_8hjj5opvw.png - text: Nikolay Kupreyanov style woodcut etching illustration art. An image of an early turret-cannon armored car moving through a city and firing. A stylized scene from the Russian Civil War of 1919-1920. Influenced by Cubism. Thick defined lines. Exaggerated shapes. Strong angles. Deep blacks. output: url: images/example_mn4kvtftq.png --- # Nikolay Kupreyanov FLUX LoRA Variant 2 ## By SilverAgePoets.com **Find this LoRA's Alternate Variant – base FLUX.1-Dev-tuned – [AT THIS LINK](https://huggingface.co/AlekseyCalvin/NikolayKupreyanov_FluxLoRA_v1_bySilverAgePoets)**. <br> **This Variant of *our Kupreyanov Style LoRA* was fine-tuned over a De-Distilled Version of FLUX, at a higher rank than Var.1, but fewer steps.** <br> Our Low-Rank Adapter (LoRA) for FLUX models fine-tuned on a set of manually pre-processed & exhaustively captioned art scans... <br> Representing a small selection (30 pieces) from the distinctive oeuvre of woodcuts, etchings, engravings, drawings, & poster designs by: <br> the artist/engraver **Nikolay Nikolayevich Kupreyanov** *(1894 — 1933)*, one of the iconic avant-garde formulators of c.1910s-1920s early Soviet visual culture. <br> ![photo of Kupreyanov with a cat](http://www.printsmuseum.ru/files/artist/f6e4f9d7c0015db022ad35d7d826e449.jpg) <Gallery /> For generating stylized images reminiscent of early-Soviet Suprematism-influenced woodcuts, drawings, & designs for posters, ex-libris, & more. <br> Check out a gallery of source woodcuts by Nikolay Kupreyanov [here](http://www.printsmuseum.ru/artist/view/9/). <br> Most of the images found at the above link were used by for this fine-tune (alongside further sources, such as sampling of Kupreyanov's agitprop poster designs). <br> ## Trigger words You should use `Nikolay Kupreyanov woodcut etching style illustration` or 'Kupreyanov style art' to summon the artist's latent hand. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('AlekseyCalvin/Kupreyanov_Style_FluxLoRA_var2_onDeDist_bySilverAgePoets', weight_name='KupreyanovLora_v2_DT_500st_rank32_fp32_Convert.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
[ "MEDAL" ]
sigridjineth/stella_en_1.5B_v5_sigrid
sigridjineth
sentence-similarity
[ "sentence-transformers", "pytorch", "safetensors", "qwen2", "text-generation", "mteb", "transformers", "sentence-similarity", "custom_code", "arxiv:2205.13147", "license:mit", "model-index", "autotrain_compatible", "text-generation-inference", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2024-12-12T17:33:09Z
2024-12-12T17:39:28+00:00
14
0
--- license: mit tags: - mteb - sentence-transformers - transformers - sentence-similarity model-index: - name: stella_en_1.5B_v5 results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 92.86567164179104 - type: ap value: 72.13503907102613 - type: ap_weighted value: 72.13503907102613 - type: f1 value: 89.5586886376355 - type: f1_weighted value: 93.13621183004571 - type: main_score value: 92.86567164179104 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 97.16485 - type: ap value: 96.05546315415225 - type: ap_weighted value: 96.05546315415225 - type: f1 value: 97.16351087403213 - type: f1_weighted value: 97.16351087403213 - type: main_score value: 97.16485 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 59.358 - type: f1 value: 59.0264615883114 - type: f1_weighted value: 59.0264615883114 - type: main_score value: 59.358 - task: type: Retrieval dataset: name: MTEB ArguAna type: mteb/arguana config: default split: test revision: c22ab2a51041ffd869aaddef7af8d8215647e41a metrics: - type: main_score value: 65.269 - type: map_at_1 value: 41.607 - type: map_at_10 value: 57.104 - type: map_at_100 value: 57.621 - type: map_at_1000 value: 57.621 - type: map_at_20 value: 57.533 - type: map_at_3 value: 52.891999999999996 - type: map_at_5 value: 55.371 - type: mrr_at_1 value: 42.318634423897585 - type: mrr_at_10 value: 57.353970511865406 - type: mrr_at_100 value: 57.88398078476526 - type: mrr_at_1000 value: 57.88467807648422 - type: mrr_at_20 value: 57.796730533206166 - type: mrr_at_3 value: 53.200568990042775 - type: mrr_at_5 value: 55.6330014224753 - type: nauc_map_at_1000_diff1 value: 24.54414600428287 - type: nauc_map_at_1000_max value: -8.389738078358459 - type: nauc_map_at_1000_std value: -18.188787645801366 - type: nauc_map_at_100_diff1 value: 24.543138576462308 - type: nauc_map_at_100_max value: -8.390896839752044 - type: nauc_map_at_100_std value: -18.192549240185247 - type: nauc_map_at_10_diff1 value: 24.219607088995822 - type: nauc_map_at_10_max value: -8.245734391254308 - type: nauc_map_at_10_std value: -18.229706566466447 - type: nauc_map_at_1_diff1 value: 29.325201664812788 - type: nauc_map_at_1_max value: -11.742800494823971 - type: nauc_map_at_1_std value: -18.610215769702528 - type: nauc_map_at_20_diff1 value: 24.471097562798803 - type: nauc_map_at_20_max value: -8.318035874000799 - type: nauc_map_at_20_std value: -18.171541096773108 - type: nauc_map_at_3_diff1 value: 24.275846107642824 - type: nauc_map_at_3_max value: -8.212242049581894 - type: nauc_map_at_3_std value: -17.920379368937496 - type: nauc_map_at_5_diff1 value: 23.873692493209255 - type: nauc_map_at_5_max value: -8.110347163828767 - type: nauc_map_at_5_std value: -18.20863325596931 - type: nauc_mrr_at_1000_diff1 value: 22.656410956419975 - type: nauc_mrr_at_1000_max value: -8.924888102233243 - type: nauc_mrr_at_1000_std value: -18.103674384502526 - type: nauc_mrr_at_100_diff1 value: 22.655448817140968 - type: nauc_mrr_at_100_max value: -8.926034318499038 - type: nauc_mrr_at_100_std value: -18.10743930104164 - type: nauc_mrr_at_10_diff1 value: 22.297536272996872 - type: nauc_mrr_at_10_max value: -8.836407556658274 - type: nauc_mrr_at_10_std value: -18.1598393044477 - type: nauc_mrr_at_1_diff1 value: 27.419572424489708 - type: nauc_mrr_at_1_max value: -11.42241314820691 - type: nauc_mrr_at_1_std value: -18.54893865856313 - type: nauc_mrr_at_20_diff1 value: 22.590227214657418 - type: nauc_mrr_at_20_max value: -8.849986456376993 - type: nauc_mrr_at_20_std value: -18.0862391777352 - type: nauc_mrr_at_3_diff1 value: 22.415270167774988 - type: nauc_mrr_at_3_max value: -8.692871854156435 - type: nauc_mrr_at_3_std value: -17.6740102891955 - type: nauc_mrr_at_5_diff1 value: 21.96284578521464 - type: nauc_mrr_at_5_max value: -8.757031535546025 - type: nauc_mrr_at_5_std value: -18.210766964081294 - type: nauc_ndcg_at_1000_diff1 value: 23.939400161569115 - type: nauc_ndcg_at_1000_max value: -7.866999120512983 - type: nauc_ndcg_at_1000_std value: -17.981457019643617 - type: nauc_ndcg_at_100_diff1 value: 23.920033349619317 - type: nauc_ndcg_at_100_max value: -7.889849409678031 - type: nauc_ndcg_at_100_std value: -18.054931990360537 - type: nauc_ndcg_at_10_diff1 value: 22.543020461303534 - type: nauc_ndcg_at_10_max value: -7.072111788010867 - type: nauc_ndcg_at_10_std value: -18.26397604573537 - type: nauc_ndcg_at_1_diff1 value: 29.325201664812788 - type: nauc_ndcg_at_1_max value: -11.742800494823971 - type: nauc_ndcg_at_1_std value: -18.610215769702528 - type: nauc_ndcg_at_20_diff1 value: 23.551587021207972 - type: nauc_ndcg_at_20_max value: -7.298056222649139 - type: nauc_ndcg_at_20_std value: -18.056004880930608 - type: nauc_ndcg_at_3_diff1 value: 22.669089506345273 - type: nauc_ndcg_at_3_max value: -7.278024373570137 - type: nauc_ndcg_at_3_std value: -17.816657759914193 - type: nauc_ndcg_at_5_diff1 value: 21.72619728226575 - type: nauc_ndcg_at_5_max value: -6.959741647471228 - type: nauc_ndcg_at_5_std value: -18.35173705190235 - type: nauc_precision_at_1000_diff1 value: 5.0388241058076995 - type: nauc_precision_at_1000_max value: 34.439879624882145 - type: nauc_precision_at_1000_std value: 77.22610895194498 - type: nauc_precision_at_100_diff1 value: 1.340670767252794 - type: nauc_precision_at_100_max value: 19.30870025961241 - type: nauc_precision_at_100_std value: 35.37688289157788 - type: nauc_precision_at_10_diff1 value: 7.734227153124332 - type: nauc_precision_at_10_max value: 4.202399088422237 - type: nauc_precision_at_10_std value: -18.383890254046698 - type: nauc_precision_at_1_diff1 value: 29.325201664812788 - type: nauc_precision_at_1_max value: -11.742800494823971 - type: nauc_precision_at_1_std value: -18.610215769702528 - type: nauc_precision_at_20_diff1 value: 9.48070999361637 - type: nauc_precision_at_20_max value: 19.056709637253025 - type: nauc_precision_at_20_std value: -13.266821166159485 - type: nauc_precision_at_3_diff1 value: 17.245260303409747 - type: nauc_precision_at_3_max value: -4.202455033452335 - type: nauc_precision_at_3_std value: -17.514264039955332 - type: nauc_precision_at_5_diff1 value: 12.074628162049974 - type: nauc_precision_at_5_max value: -1.9145501461107832 - type: nauc_precision_at_5_std value: -19.162525528916344 - type: nauc_recall_at_1000_diff1 value: 5.038824105805915 - type: nauc_recall_at_1000_max value: 34.43987962487738 - type: nauc_recall_at_1000_std value: 77.22610895193765 - type: nauc_recall_at_100_diff1 value: 1.3406707672497025 - type: nauc_recall_at_100_max value: 19.30870025960776 - type: nauc_recall_at_100_std value: 35.37688289157515 - type: nauc_recall_at_10_diff1 value: 7.734227153124366 - type: nauc_recall_at_10_max value: 4.202399088421976 - type: nauc_recall_at_10_std value: -18.38389025404673 - type: nauc_recall_at_1_diff1 value: 29.325201664812788 - type: nauc_recall_at_1_max value: -11.742800494823971 - type: nauc_recall_at_1_std value: -18.610215769702528 - type: nauc_recall_at_20_diff1 value: 9.480709993616845 - type: nauc_recall_at_20_max value: 19.05670963725301 - type: nauc_recall_at_20_std value: -13.266821166158651 - type: nauc_recall_at_3_diff1 value: 17.24526030340978 - type: nauc_recall_at_3_max value: -4.202455033452323 - type: nauc_recall_at_3_std value: -17.51426403995538 - type: nauc_recall_at_5_diff1 value: 12.074628162049992 - type: nauc_recall_at_5_max value: -1.914550146110865 - type: nauc_recall_at_5_std value: -19.162525528916362 - type: ndcg_at_1 value: 41.607 - type: ndcg_at_10 value: 65.269 - type: ndcg_at_100 value: 67.289 - type: ndcg_at_1000 value: 67.29899999999999 - type: ndcg_at_20 value: 66.76299999999999 - type: ndcg_at_3 value: 56.604 - type: ndcg_at_5 value: 61.07900000000001 - type: precision_at_1 value: 41.607 - type: precision_at_10 value: 9.118 - type: precision_at_100 value: 0.996 - type: precision_at_1000 value: 0.1 - type: precision_at_20 value: 4.8469999999999995 - type: precision_at_3 value: 22.451 - type: precision_at_5 value: 15.647 - type: recall_at_1 value: 41.607 - type: recall_at_10 value: 91.181 - type: recall_at_100 value: 99.57300000000001 - type: recall_at_1000 value: 99.644 - type: recall_at_20 value: 96.942 - type: recall_at_3 value: 67.354 - type: recall_at_5 value: 78.236 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: main_score value: 55.437138353189994 - type: v_measure value: 55.437138353189994 - type: v_measure_std value: 14.718556601335491 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: main_score value: 50.65858459544658 - type: v_measure value: 50.65858459544658 - type: v_measure_std value: 14.887033747525146 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: main_score value: 67.32597152838535 - type: map value: 67.32597152838535 - type: mrr value: 78.98683111286988 - type: nAUC_map_diff1 value: 16.8624639710487 - type: nAUC_map_max value: 24.91996491142433 - type: nAUC_map_std value: 17.91865808793225 - type: nAUC_mrr_diff1 value: 25.03766425631947 - type: nAUC_mrr_max value: 41.64561939958336 - type: nAUC_mrr_std value: 23.179909345891968 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cosine_pearson value: 85.790820496042 - type: cosine_spearman value: 83.10731534330517 - type: euclidean_pearson value: 84.61741304343133 - type: euclidean_spearman value: 83.17297949010973 - type: main_score value: 83.10731534330517 - type: manhattan_pearson value: 85.2137696526676 - type: manhattan_spearman value: 84.39168195786738 - type: pearson value: 85.790820496042 - type: spearman value: 83.10731534330517 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 89.78896103896105 - type: f1 value: 89.76107366333488 - type: f1_weighted value: 89.76107366333488 - type: main_score value: 89.78896103896105 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: main_score value: 50.68092296236376 - type: v_measure value: 50.68092296236376 - type: v_measure_std value: 0.7832640983085436 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: main_score value: 46.86629236732983 - type: v_measure value: 46.86629236732983 - type: v_measure_std value: 0.8784322236350974 - task: type: Retrieval dataset: name: MTEB CQADupstackRetrieval type: mteb/cqadupstack config: default split: test revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4 metrics: - type: main_score value: 47.74883333333334 - type: map_at_1 value: 30.179249999999996 - type: map_at_10 value: 41.60824999999999 - type: map_at_100 value: 42.94008333333332 - type: map_at_1000 value: 43.04666666666667 - type: map_at_20 value: 42.36833333333334 - type: map_at_3 value: 38.23491666666666 - type: map_at_5 value: 40.10183333333333 - type: mrr_at_1 value: 36.47676085808166 - type: mrr_at_10 value: 46.300991916437155 - type: mrr_at_100 value: 47.12155753713262 - type: mrr_at_1000 value: 47.168033610799945 - type: mrr_at_20 value: 46.80405724560391 - type: mrr_at_3 value: 43.77000352801797 - type: mrr_at_5 value: 45.22295361704542 - type: nauc_map_at_1000_diff1 value: 46.953671666941524 - type: nauc_map_at_1000_max value: 32.260396316089675 - type: nauc_map_at_1000_std value: 0.6657766120094878 - type: nauc_map_at_100_diff1 value: 46.94717463394555 - type: nauc_map_at_100_max value: 32.25088350678177 - type: nauc_map_at_100_std value: 0.6257017014549283 - type: nauc_map_at_10_diff1 value: 46.974678429336464 - type: nauc_map_at_10_max value: 31.862230807295504 - type: nauc_map_at_10_std value: -0.14758828549579284 - type: nauc_map_at_1_diff1 value: 52.48913346466124 - type: nauc_map_at_1_max value: 29.874374024967725 - type: nauc_map_at_1_std value: -2.433547569836134 - type: nauc_map_at_20_diff1 value: 46.96088684217651 - type: nauc_map_at_20_max value: 32.08954208613205 - type: nauc_map_at_20_std value: 0.25946321113436527 - type: nauc_map_at_3_diff1 value: 47.703230121518345 - type: nauc_map_at_3_max value: 30.977880095983107 - type: nauc_map_at_3_std value: -1.342777563991804 - type: nauc_map_at_5_diff1 value: 47.1615010199957 - type: nauc_map_at_5_max value: 31.420885812683284 - type: nauc_map_at_5_std value: -0.8789297099444306 - type: nauc_mrr_at_1000_diff1 value: 46.69178645962615 - type: nauc_mrr_at_1000_max value: 34.392807413340655 - type: nauc_mrr_at_1000_std value: 1.6155464863667934 - type: nauc_mrr_at_100_diff1 value: 46.67417236349189 - type: nauc_mrr_at_100_max value: 34.384607045512624 - type: nauc_mrr_at_100_std value: 1.6259917384109652 - type: nauc_mrr_at_10_diff1 value: 46.60497560446239 - type: nauc_mrr_at_10_max value: 34.32918897817958 - type: nauc_mrr_at_10_std value: 1.39387793769014 - type: nauc_mrr_at_1_diff1 value: 51.61608573254137 - type: nauc_mrr_at_1_max value: 35.18105023234596 - type: nauc_mrr_at_1_std value: 0.17943702145478177 - type: nauc_mrr_at_20_diff1 value: 46.635943069860254 - type: nauc_mrr_at_20_max value: 34.37050973118794 - type: nauc_mrr_at_20_std value: 1.5346464678860607 - type: nauc_mrr_at_3_diff1 value: 47.154389369038334 - type: nauc_mrr_at_3_max value: 34.41036411855465 - type: nauc_mrr_at_3_std value: 0.924551812357872 - type: nauc_mrr_at_5_diff1 value: 46.6690101691763 - type: nauc_mrr_at_5_max value: 34.29740388138466 - type: nauc_mrr_at_5_std value: 1.0567184149139792 - type: nauc_ndcg_at_1000_diff1 value: 45.375448289173264 - type: nauc_ndcg_at_1000_max value: 33.47957083714482 - type: nauc_ndcg_at_1000_std value: 3.192251100225568 - type: nauc_ndcg_at_100_diff1 value: 44.93601014699499 - type: nauc_ndcg_at_100_max value: 33.21249888295249 - type: nauc_ndcg_at_100_std value: 3.609842852934217 - type: nauc_ndcg_at_10_diff1 value: 44.87893284011915 - type: nauc_ndcg_at_10_max value: 32.384885249478515 - type: nauc_ndcg_at_10_std value: 1.454493065035396 - type: nauc_ndcg_at_1_diff1 value: 51.61608573254137 - type: nauc_ndcg_at_1_max value: 35.18105023234596 - type: nauc_ndcg_at_1_std value: 0.17943702145478177 - type: nauc_ndcg_at_20_diff1 value: 44.867752179050605 - type: nauc_ndcg_at_20_max value: 32.689535921840196 - type: nauc_ndcg_at_20_std value: 2.337765158573901 - type: nauc_ndcg_at_3_diff1 value: 45.87485821381341 - type: nauc_ndcg_at_3_max value: 32.33282450558947 - type: nauc_ndcg_at_3_std value: 0.0681643829273283 - type: nauc_ndcg_at_5_diff1 value: 45.202902131892394 - type: nauc_ndcg_at_5_max value: 32.1026971523917 - type: nauc_ndcg_at_5_std value: 0.3565572833774486 - type: nauc_precision_at_1000_diff1 value: -8.935267931198956 - type: nauc_precision_at_1000_max value: 6.464981960169269 - type: nauc_precision_at_1000_std value: 10.662786182234633 - type: nauc_precision_at_100_diff1 value: -1.64091517847155 - type: nauc_precision_at_100_max value: 15.175617871025024 - type: nauc_precision_at_100_std value: 16.924256989248075 - type: nauc_precision_at_10_diff1 value: 15.676651966277047 - type: nauc_precision_at_10_max value: 26.243734188847117 - type: nauc_precision_at_10_std value: 10.601741034956333 - type: nauc_precision_at_1_diff1 value: 51.61608573254137 - type: nauc_precision_at_1_max value: 35.18105023234596 - type: nauc_precision_at_1_std value: 0.17943702145478177 - type: nauc_precision_at_20_diff1 value: 9.447267260198654 - type: nauc_precision_at_20_max value: 23.024130858142723 - type: nauc_precision_at_20_std value: 13.739145648899603 - type: nauc_precision_at_3_diff1 value: 30.11583572134629 - type: nauc_precision_at_3_max value: 31.37321080069495 - type: nauc_precision_at_3_std value: 4.705512374126024 - type: nauc_precision_at_5_diff1 value: 23.192015335996093 - type: nauc_precision_at_5_max value: 29.415746835998764 - type: nauc_precision_at_5_std value: 6.843498772798558 - type: nauc_recall_at_1000_diff1 value: 25.36573313426033 - type: nauc_recall_at_1000_max value: 43.06672256524168 - type: nauc_recall_at_1000_std value: 47.93664853815292 - type: nauc_recall_at_100_diff1 value: 31.222880916617406 - type: nauc_recall_at_100_max value: 31.761159904172658 - type: nauc_recall_at_100_std value: 23.034218976635877 - type: nauc_recall_at_10_diff1 value: 36.23439028915225 - type: nauc_recall_at_10_max value: 28.473458977606438 - type: nauc_recall_at_10_std value: 3.7797969934159 - type: nauc_recall_at_1_diff1 value: 52.48913346466124 - type: nauc_recall_at_1_max value: 29.874374024967725 - type: nauc_recall_at_1_std value: -2.433547569836134 - type: nauc_recall_at_20_diff1 value: 34.678676952584766 - type: nauc_recall_at_20_max value: 29.04638392522168 - type: nauc_recall_at_20_std value: 8.148894982082549 - type: nauc_recall_at_3_diff1 value: 41.31029996231311 - type: nauc_recall_at_3_max value: 28.44199443414157 - type: nauc_recall_at_3_std value: -0.747324057600377 - type: nauc_recall_at_5_diff1 value: 38.535873899920674 - type: nauc_recall_at_5_max value: 27.942667805948375 - type: nauc_recall_at_5_std value: 0.30652206930973686 - type: ndcg_at_1 value: 36.47675 - type: ndcg_at_10 value: 47.74883333333334 - type: ndcg_at_100 value: 52.902416666666674 - type: ndcg_at_1000 value: 54.69116666666667 - type: ndcg_at_20 value: 49.89758333333333 - type: ndcg_at_3 value: 42.462250000000004 - type: ndcg_at_5 value: 44.91841666666667 - type: precision_at_1 value: 36.47675 - type: precision_at_10 value: 8.582416666666665 - type: precision_at_100 value: 1.31475 - type: precision_at_1000 value: 0.16458333333333333 - type: precision_at_20 value: 5.021833333333333 - type: precision_at_3 value: 20.004499999999997 - type: precision_at_5 value: 14.178666666666665 - type: recall_at_1 value: 30.179249999999996 - type: recall_at_10 value: 60.950166666666675 - type: recall_at_100 value: 83.19025 - type: recall_at_1000 value: 95.27774999999998 - type: recall_at_20 value: 68.80175 - type: recall_at_3 value: 46.01841666666666 - type: recall_at_5 value: 52.482416666666666 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: mteb/climate-fever config: default split: test revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380 metrics: - type: main_score value: 46.113 - type: map_at_1 value: 20.122999999999998 - type: map_at_10 value: 35.474 - type: map_at_100 value: 37.592 - type: map_at_1000 value: 37.773 - type: map_at_20 value: 36.637 - type: map_at_3 value: 29.731 - type: map_at_5 value: 32.964 - type: mrr_at_1 value: 46.71009771986971 - type: mrr_at_10 value: 58.855669303552105 - type: mrr_at_100 value: 59.389249674038425 - type: mrr_at_1000 value: 59.408448104362364 - type: mrr_at_20 value: 59.23881203149016 - type: mrr_at_3 value: 56.18892508143328 - type: mrr_at_5 value: 57.85342019543985 - type: nauc_map_at_1000_diff1 value: 27.047031037721958 - type: nauc_map_at_1000_max value: 43.25240279148033 - type: nauc_map_at_1000_std value: 20.795849418696037 - type: nauc_map_at_100_diff1 value: 27.044739015116452 - type: nauc_map_at_100_max value: 43.24042159787812 - type: nauc_map_at_100_std value: 20.799952124137683 - type: nauc_map_at_10_diff1 value: 27.372696854670338 - type: nauc_map_at_10_max value: 43.054456574721684 - type: nauc_map_at_10_std value: 19.537162110136645 - type: nauc_map_at_1_diff1 value: 43.65424623953092 - type: nauc_map_at_1_max value: 45.17986509998762 - type: nauc_map_at_1_std value: 8.497107052335414 - type: nauc_map_at_20_diff1 value: 27.224535846566074 - type: nauc_map_at_20_max value: 43.12222854561229 - type: nauc_map_at_20_std value: 20.29982972202669 - type: nauc_map_at_3_diff1 value: 30.87847002319001 - type: nauc_map_at_3_max value: 42.890027891707575 - type: nauc_map_at_3_std value: 13.857451947580929 - type: nauc_map_at_5_diff1 value: 27.966867093591542 - type: nauc_map_at_5_max value: 42.35826637592201 - type: nauc_map_at_5_std value: 16.993102524058624 - type: nauc_mrr_at_1000_diff1 value: 30.191544077608164 - type: nauc_mrr_at_1000_max value: 44.959438920351644 - type: nauc_mrr_at_1000_std value: 24.065801376465114 - type: nauc_mrr_at_100_diff1 value: 30.170368115494 - type: nauc_mrr_at_100_max value: 44.955868115761156 - type: nauc_mrr_at_100_std value: 24.093510767847707 - type: nauc_mrr_at_10_diff1 value: 30.128430637520175 - type: nauc_mrr_at_10_max value: 44.97689261350708 - type: nauc_mrr_at_10_std value: 24.037049561818897 - type: nauc_mrr_at_1_diff1 value: 35.323351939108214 - type: nauc_mrr_at_1_max value: 43.85026244855636 - type: nauc_mrr_at_1_std value: 17.040662141218974 - type: nauc_mrr_at_20_diff1 value: 30.192006556160443 - type: nauc_mrr_at_20_max value: 45.02814530774032 - type: nauc_mrr_at_20_std value: 24.20885865448696 - type: nauc_mrr_at_3_diff1 value: 29.88250163424518 - type: nauc_mrr_at_3_max value: 44.25768944883186 - type: nauc_mrr_at_3_std value: 22.804183393364198 - type: nauc_mrr_at_5_diff1 value: 30.269824490420767 - type: nauc_mrr_at_5_max value: 44.97443265796657 - type: nauc_mrr_at_5_std value: 23.894159916141177 - type: nauc_ndcg_at_1000_diff1 value: 24.533764005407356 - type: nauc_ndcg_at_1000_max value: 44.50902713386608 - type: nauc_ndcg_at_1000_std value: 27.589506980238404 - type: nauc_ndcg_at_100_diff1 value: 24.209785073940353 - type: nauc_ndcg_at_100_max value: 44.18257063893669 - type: nauc_ndcg_at_100_std value: 27.963150866401943 - type: nauc_ndcg_at_10_diff1 value: 25.168069201989486 - type: nauc_ndcg_at_10_max value: 43.84940910683214 - type: nauc_ndcg_at_10_std value: 24.810707270956435 - type: nauc_ndcg_at_1_diff1 value: 35.323351939108214 - type: nauc_ndcg_at_1_max value: 43.85026244855636 - type: nauc_ndcg_at_1_std value: 17.040662141218974 - type: nauc_ndcg_at_20_diff1 value: 24.829924800466834 - type: nauc_ndcg_at_20_max value: 43.738574327059716 - type: nauc_ndcg_at_20_std value: 26.252370278684072 - type: nauc_ndcg_at_3_diff1 value: 27.321943393906274 - type: nauc_ndcg_at_3_max value: 42.16584786993447 - type: nauc_ndcg_at_3_std value: 18.24775079455969 - type: nauc_ndcg_at_5_diff1 value: 26.043785418347998 - type: nauc_ndcg_at_5_max value: 42.874593895388344 - type: nauc_ndcg_at_5_std value: 21.294004555506117 - type: nauc_precision_at_1000_diff1 value: -22.073027615308582 - type: nauc_precision_at_1000_max value: -6.549723766317357 - type: nauc_precision_at_1000_std value: 18.301749191241306 - type: nauc_precision_at_100_diff1 value: -15.654286887593619 - type: nauc_precision_at_100_max value: 6.401516251421999 - type: nauc_precision_at_100_std value: 29.170680324929805 - type: nauc_precision_at_10_diff1 value: -4.362381972892247 - type: nauc_precision_at_10_max value: 22.10943515872447 - type: nauc_precision_at_10_std value: 31.869699459530022 - type: nauc_precision_at_1_diff1 value: 35.323351939108214 - type: nauc_precision_at_1_max value: 43.85026244855636 - type: nauc_precision_at_1_std value: 17.040662141218974 - type: nauc_precision_at_20_diff1 value: -7.50749661117875 - type: nauc_precision_at_20_max value: 16.80584016023257 - type: nauc_precision_at_20_std value: 31.976755897112437 - type: nauc_precision_at_3_diff1 value: 7.402667538773083 - type: nauc_precision_at_3_max value: 31.2088401330676 - type: nauc_precision_at_3_std value: 24.287905698405662 - type: nauc_precision_at_5_diff1 value: 0.7479172565343901 - type: nauc_precision_at_5_max value: 26.28427734237825 - type: nauc_precision_at_5_std value: 28.246947120310317 - type: nauc_recall_at_1000_diff1 value: 2.4778431086370496 - type: nauc_recall_at_1000_max value: 40.2231995797509 - type: nauc_recall_at_1000_std value: 52.62124052183862 - type: nauc_recall_at_100_diff1 value: 8.960962419741463 - type: nauc_recall_at_100_max value: 35.81132850291491 - type: nauc_recall_at_100_std value: 40.020903251786166 - type: nauc_recall_at_10_diff1 value: 15.603400751376636 - type: nauc_recall_at_10_max value: 37.570127529136485 - type: nauc_recall_at_10_std value: 28.07128410238545 - type: nauc_recall_at_1_diff1 value: 43.65424623953092 - type: nauc_recall_at_1_max value: 45.17986509998762 - type: nauc_recall_at_1_std value: 8.497107052335414 - type: nauc_recall_at_20_diff1 value: 13.844820282832346 - type: nauc_recall_at_20_max value: 36.0106148516309 - type: nauc_recall_at_20_std value: 31.453103910565254 - type: nauc_recall_at_3_diff1 value: 24.359328154117748 - type: nauc_recall_at_3_max value: 39.93774251377568 - type: nauc_recall_at_3_std value: 16.214921517509648 - type: nauc_recall_at_5_diff1 value: 18.75788451360292 - type: nauc_recall_at_5_max value: 38.177646107055516 - type: nauc_recall_at_5_std value: 22.17196825834675 - type: ndcg_at_1 value: 46.71 - type: ndcg_at_10 value: 46.113 - type: ndcg_at_100 value: 53.035 - type: ndcg_at_1000 value: 55.724 - type: ndcg_at_20 value: 48.929 - type: ndcg_at_3 value: 39.501999999999995 - type: ndcg_at_5 value: 41.792 - type: precision_at_1 value: 46.71 - type: precision_at_10 value: 14.274000000000001 - type: precision_at_100 value: 2.1870000000000003 - type: precision_at_1000 value: 0.269 - type: precision_at_20 value: 8.375 - type: precision_at_3 value: 29.881 - type: precision_at_5 value: 22.697 - type: recall_at_1 value: 20.122999999999998 - type: recall_at_10 value: 52.22 - type: recall_at_100 value: 75.388 - type: recall_at_1000 value: 89.938 - type: recall_at_20 value: 60.077000000000005 - type: recall_at_3 value: 35.150999999999996 - type: recall_at_5 value: 42.748000000000005 - task: type: Retrieval dataset: name: MTEB DBPedia type: mteb/dbpedia config: default split: test revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659 metrics: - type: main_score value: 52.276999999999994 - type: map_at_1 value: 9.949 - type: map_at_10 value: 24.891 - type: map_at_100 value: 37.111 - type: map_at_1000 value: 39.266 - type: map_at_20 value: 29.685 - type: map_at_3 value: 16.586000000000002 - type: map_at_5 value: 19.982 - type: mrr_at_1 value: 76.25 - type: mrr_at_10 value: 82.4518849206349 - type: mrr_at_100 value: 82.70302194564499 - type: mrr_at_1000 value: 82.70909729942254 - type: mrr_at_20 value: 82.60492765962964 - type: mrr_at_3 value: 81.33333333333331 - type: mrr_at_5 value: 82.14583333333331 - type: nauc_map_at_1000_diff1 value: 21.427201262456556 - type: nauc_map_at_1000_max value: 35.357361590816076 - type: nauc_map_at_1000_std value: 24.785419223353717 - type: nauc_map_at_100_diff1 value: 22.82358692021537 - type: nauc_map_at_100_max value: 35.07399692072945 - type: nauc_map_at_100_std value: 22.679878828987025 - type: nauc_map_at_10_diff1 value: 26.491769223479643 - type: nauc_map_at_10_max value: 20.78079385443902 - type: nauc_map_at_10_std value: -4.910406292079661 - type: nauc_map_at_1_diff1 value: 35.20851030208876 - type: nauc_map_at_1_max value: 5.783003346365858 - type: nauc_map_at_1_std value: -21.11679133835354 - type: nauc_map_at_20_diff1 value: 24.80097499300491 - type: nauc_map_at_20_max value: 26.807021360774975 - type: nauc_map_at_20_std value: 4.793103995429955 - type: nauc_map_at_3_diff1 value: 29.238193458890173 - type: nauc_map_at_3_max value: 10.300839972189456 - type: nauc_map_at_3_std value: -17.889666731981592 - type: nauc_map_at_5_diff1 value: 28.773624870573926 - type: nauc_map_at_5_max value: 14.951435645422887 - type: nauc_map_at_5_std value: -13.319697827173565 - type: nauc_mrr_at_1000_diff1 value: 55.232544856708785 - type: nauc_mrr_at_1000_max value: 64.73225637682637 - type: nauc_mrr_at_1000_std value: 37.57480399594188 - type: nauc_mrr_at_100_diff1 value: 55.219251601773735 - type: nauc_mrr_at_100_max value: 64.73305063663611 - type: nauc_mrr_at_100_std value: 37.56458562909293 - type: nauc_mrr_at_10_diff1 value: 55.123463838253464 - type: nauc_mrr_at_10_max value: 64.91914041040233 - type: nauc_mrr_at_10_std value: 37.76482503851598 - type: nauc_mrr_at_1_diff1 value: 56.45461238513347 - type: nauc_mrr_at_1_max value: 63.11782510293676 - type: nauc_mrr_at_1_std value: 33.592561284868985 - type: nauc_mrr_at_20_diff1 value: 55.15401961460458 - type: nauc_mrr_at_20_max value: 64.77145835613156 - type: nauc_mrr_at_20_std value: 37.471561418305804 - type: nauc_mrr_at_3_diff1 value: 54.64387438697658 - type: nauc_mrr_at_3_max value: 64.27618995019164 - type: nauc_mrr_at_3_std value: 39.391637295269014 - type: nauc_mrr_at_5_diff1 value: 55.08702591239485 - type: nauc_mrr_at_5_max value: 64.6071475650635 - type: nauc_mrr_at_5_std value: 37.97185134269896 - type: nauc_ndcg_at_1000_diff1 value: 31.696698876400387 - type: nauc_ndcg_at_1000_max value: 52.12183760001191 - type: nauc_ndcg_at_1000_std value: 40.197596211778716 - type: nauc_ndcg_at_100_diff1 value: 33.253120193433666 - type: nauc_ndcg_at_100_max value: 49.47167758554746 - type: nauc_ndcg_at_100_std value: 32.643833139756204 - type: nauc_ndcg_at_10_diff1 value: 27.065541392580013 - type: nauc_ndcg_at_10_max value: 45.83504281289289 - type: nauc_ndcg_at_10_std value: 27.11739500732328 - type: nauc_ndcg_at_1_diff1 value: 49.42808250022517 - type: nauc_ndcg_at_1_max value: 53.502615048520354 - type: nauc_ndcg_at_1_std value: 27.17555908836708 - type: nauc_ndcg_at_20_diff1 value: 29.374791382330308 - type: nauc_ndcg_at_20_max value: 43.91246842479055 - type: nauc_ndcg_at_20_std value: 23.419410620550316 - type: nauc_ndcg_at_3_diff1 value: 26.71550354496204 - type: nauc_ndcg_at_3_max value: 43.9641457892003 - type: nauc_ndcg_at_3_std value: 27.320024167947686 - type: nauc_ndcg_at_5_diff1 value: 27.020654974589487 - type: nauc_ndcg_at_5_max value: 46.130417266030584 - type: nauc_ndcg_at_5_std value: 28.392009019010068 - type: nauc_precision_at_1000_diff1 value: -21.47455482181002 - type: nauc_precision_at_1000_max value: -9.721907229236024 - type: nauc_precision_at_1000_std value: -1.061132062651487 - type: nauc_precision_at_100_diff1 value: -12.35759246101943 - type: nauc_precision_at_100_max value: 15.509512444892168 - type: nauc_precision_at_100_std value: 36.21183578592014 - type: nauc_precision_at_10_diff1 value: -6.136998947343125 - type: nauc_precision_at_10_max value: 32.30037906748288 - type: nauc_precision_at_10_std value: 41.4500302476981 - type: nauc_precision_at_1_diff1 value: 56.45461238513347 - type: nauc_precision_at_1_max value: 63.11782510293676 - type: nauc_precision_at_1_std value: 33.592561284868985 - type: nauc_precision_at_20_diff1 value: -7.335890123683174 - type: nauc_precision_at_20_max value: 28.31417075291312 - type: nauc_precision_at_20_std value: 41.405935715061815 - type: nauc_precision_at_3_diff1 value: 7.117255890225942 - type: nauc_precision_at_3_max value: 39.19894132683829 - type: nauc_precision_at_3_std value: 38.48255841994843 - type: nauc_precision_at_5_diff1 value: 1.861523090114206 - type: nauc_precision_at_5_max value: 38.11649223007208 - type: nauc_precision_at_5_std value: 40.52993530374645 - type: nauc_recall_at_1000_diff1 value: 26.497648584314636 - type: nauc_recall_at_1000_max value: 44.48069746734414 - type: nauc_recall_at_1000_std value: 53.16438130228715 - type: nauc_recall_at_100_diff1 value: 26.353456899511446 - type: nauc_recall_at_100_max value: 37.57379787884197 - type: nauc_recall_at_100_std value: 29.197468295989548 - type: nauc_recall_at_10_diff1 value: 22.80445738351114 - type: nauc_recall_at_10_max value: 15.895630778449046 - type: nauc_recall_at_10_std value: -8.746224797644501 - type: nauc_recall_at_1_diff1 value: 35.20851030208876 - type: nauc_recall_at_1_max value: 5.783003346365858 - type: nauc_recall_at_1_std value: -21.11679133835354 - type: nauc_recall_at_20_diff1 value: 22.34028867678706 - type: nauc_recall_at_20_max value: 21.42373427646772 - type: nauc_recall_at_20_std value: 0.4533036151015875 - type: nauc_recall_at_3_diff1 value: 24.96853445599229 - type: nauc_recall_at_3_max value: 6.245185375804208 - type: nauc_recall_at_3_std value: -20.200240127099622 - type: nauc_recall_at_5_diff1 value: 24.749259476710623 - type: nauc_recall_at_5_max value: 11.024592845995942 - type: nauc_recall_at_5_std value: -16.15683085641543 - type: ndcg_at_1 value: 64.125 - type: ndcg_at_10 value: 52.276999999999994 - type: ndcg_at_100 value: 57.440000000000005 - type: ndcg_at_1000 value: 64.082 - type: ndcg_at_20 value: 51.383 - type: ndcg_at_3 value: 55.769000000000005 - type: ndcg_at_5 value: 53.978 - type: precision_at_1 value: 76.25 - type: precision_at_10 value: 43.05 - type: precision_at_100 value: 14.09 - type: precision_at_1000 value: 2.662 - type: precision_at_20 value: 33.112 - type: precision_at_3 value: 59.833000000000006 - type: precision_at_5 value: 53.05 - type: recall_at_1 value: 9.949 - type: recall_at_10 value: 30.424 - type: recall_at_100 value: 64.062 - type: recall_at_1000 value: 85.916 - type: recall_at_20 value: 39.895 - type: recall_at_3 value: 17.876 - type: recall_at_5 value: 22.536 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 84.29499999999999 - type: f1 value: 79.76188258172078 - type: f1_weighted value: 84.96026012933847 - type: main_score value: 84.29499999999999 - task: type: Retrieval dataset: name: MTEB FEVER type: mteb/fever config: default split: test revision: bea83ef9e8fb933d90a2f1d5515737465d613e12 metrics: - type: main_score value: 94.83200000000001 - type: map_at_1 value: 87.339 - type: map_at_10 value: 92.92099999999999 - type: map_at_100 value: 93.108 - type: map_at_1000 value: 93.116 - type: map_at_20 value: 93.041 - type: map_at_3 value: 92.219 - type: map_at_5 value: 92.664 - type: mrr_at_1 value: 93.99939993999399 - type: mrr_at_10 value: 96.55188137861403 - type: mrr_at_100 value: 96.5652366009286 - type: mrr_at_1000 value: 96.5652625550811 - type: mrr_at_20 value: 96.5601781754844 - type: mrr_at_3 value: 96.45714571457142 - type: mrr_at_5 value: 96.544904490449 - type: nauc_map_at_1000_diff1 value: 51.81676454961933 - type: nauc_map_at_1000_max value: 24.904822914926118 - type: nauc_map_at_1000_std value: -3.8110347821630404 - type: nauc_map_at_100_diff1 value: 51.77514975011158 - type: nauc_map_at_100_max value: 24.912497341800094 - type: nauc_map_at_100_std value: -3.76229517662447 - type: nauc_map_at_10_diff1 value: 51.29608296382479 - type: nauc_map_at_10_max value: 24.78704970246707 - type: nauc_map_at_10_std value: -3.723130815783328 - type: nauc_map_at_1_diff1 value: 59.90813138005125 - type: nauc_map_at_1_max value: 24.58479295693794 - type: nauc_map_at_1_std value: -8.056152492777027 - type: nauc_map_at_20_diff1 value: 51.428639331678326 - type: nauc_map_at_20_max value: 24.849214517705086 - type: nauc_map_at_20_std value: -3.685550123874596 - type: nauc_map_at_3_diff1 value: 50.94399923719279 - type: nauc_map_at_3_max value: 24.359700180006207 - type: nauc_map_at_3_std value: -5.407767408816422 - type: nauc_map_at_5_diff1 value: 50.767302682959546 - type: nauc_map_at_5_max value: 24.491113461892215 - type: nauc_map_at_5_std value: -4.058336127339082 - type: nauc_mrr_at_1000_diff1 value: 79.86042313551833 - type: nauc_mrr_at_1000_max value: 23.20960445633933 - type: nauc_mrr_at_1000_std value: -23.54334295120471 - type: nauc_mrr_at_100_diff1 value: 79.85991247027636 - type: nauc_mrr_at_100_max value: 23.210085926780106 - type: nauc_mrr_at_100_std value: -23.542508200789197 - type: nauc_mrr_at_10_diff1 value: 79.71095155563415 - type: nauc_mrr_at_10_max value: 23.24128650883908 - type: nauc_mrr_at_10_std value: -23.408502781834102 - type: nauc_mrr_at_1_diff1 value: 82.6349900233902 - type: nauc_mrr_at_1_max value: 21.994548214014227 - type: nauc_mrr_at_1_std value: -22.549769792179262 - type: nauc_mrr_at_20_diff1 value: 79.76465012873038 - type: nauc_mrr_at_20_max value: 23.17575026523213 - type: nauc_mrr_at_20_std value: -23.492660166315048 - type: nauc_mrr_at_3_diff1 value: 79.91074933379953 - type: nauc_mrr_at_3_max value: 24.14246499097892 - type: nauc_mrr_at_3_std value: -25.22601708389664 - type: nauc_mrr_at_5_diff1 value: 79.62092651565847 - type: nauc_mrr_at_5_max value: 23.315937737034425 - type: nauc_mrr_at_5_std value: -23.317659360058403 - type: nauc_ndcg_at_1000_diff1 value: 54.404537986779225 - type: nauc_ndcg_at_1000_max value: 25.38408304128995 - type: nauc_ndcg_at_1000_std value: -4.916709117696968 - type: nauc_ndcg_at_100_diff1 value: 53.2448598868241 - type: nauc_ndcg_at_100_max value: 25.75325255295546 - type: nauc_ndcg_at_100_std value: -3.680507005630751 - type: nauc_ndcg_at_10_diff1 value: 50.81057355170232 - type: nauc_ndcg_at_10_max value: 25.006448273343807 - type: nauc_ndcg_at_10_std value: -2.8979899112515577 - type: nauc_ndcg_at_1_diff1 value: 82.6349900233902 - type: nauc_ndcg_at_1_max value: 21.994548214014227 - type: nauc_ndcg_at_1_std value: -22.549769792179262 - type: nauc_ndcg_at_20_diff1 value: 51.205023097166304 - type: nauc_ndcg_at_20_max value: 25.22133626556826 - type: nauc_ndcg_at_20_std value: -2.9506328244150155 - type: nauc_ndcg_at_3_diff1 value: 51.79780256736321 - type: nauc_ndcg_at_3_max value: 24.81137324438439 - type: nauc_ndcg_at_3_std value: -6.881223858227807 - type: nauc_ndcg_at_5_diff1 value: 50.290038260564565 - type: nauc_ndcg_at_5_max value: 24.57250792165796 - type: nauc_ndcg_at_5_std value: -3.5124628344654596 - type: nauc_precision_at_1000_diff1 value: -20.215211396894333 - type: nauc_precision_at_1000_max value: -14.165452298769171 - type: nauc_precision_at_1000_std value: -2.0952871214470816 - type: nauc_precision_at_100_diff1 value: -22.340257474494607 - type: nauc_precision_at_100_max value: -12.697885641360282 - type: nauc_precision_at_100_std value: 1.0688624940286244 - type: nauc_precision_at_10_diff1 value: -24.78271817420798 - type: nauc_precision_at_10_max value: -12.625257500222656 - type: nauc_precision_at_10_std value: 3.223250450607087 - type: nauc_precision_at_1_diff1 value: 82.6349900233902 - type: nauc_precision_at_1_max value: 21.994548214014227 - type: nauc_precision_at_1_std value: -22.549769792179262 - type: nauc_precision_at_20_diff1 value: -24.375756227194177 - type: nauc_precision_at_20_max value: -12.341015011563536 - type: nauc_precision_at_20_std value: 2.7475274619387955 - type: nauc_precision_at_3_diff1 value: -24.8251306777365 - type: nauc_precision_at_3_max value: -13.109579709589042 - type: nauc_precision_at_3_std value: -1.2233442335420748 - type: nauc_precision_at_5_diff1 value: -26.955418583344894 - type: nauc_precision_at_5_max value: -13.598630838071015 - type: nauc_precision_at_5_std value: 2.545780631940738 - type: nauc_recall_at_1000_diff1 value: 0.2542680835344437 - type: nauc_recall_at_1000_max value: 49.38194243035277 - type: nauc_recall_at_1000_std value: 57.021502715846026 - type: nauc_recall_at_100_diff1 value: 5.062154815367015 - type: nauc_recall_at_100_max value: 45.41178380188437 - type: nauc_recall_at_100_std value: 50.78382225901813 - type: nauc_recall_at_10_diff1 value: 20.429153629007818 - type: nauc_recall_at_10_max value: 27.516855026155508 - type: nauc_recall_at_10_std value: 21.367491371755467 - type: nauc_recall_at_1_diff1 value: 59.90813138005125 - type: nauc_recall_at_1_max value: 24.58479295693794 - type: nauc_recall_at_1_std value: -8.056152492777027 - type: nauc_recall_at_20_diff1 value: 13.072430858896942 - type: nauc_recall_at_20_max value: 29.5522659183247 - type: nauc_recall_at_20_std value: 28.70569974090291 - type: nauc_recall_at_3_diff1 value: 30.419084482663617 - type: nauc_recall_at_3_max value: 25.627389580252835 - type: nauc_recall_at_3_std value: 2.5557690877637054 - type: nauc_recall_at_5_diff1 value: 22.92561435069869 - type: nauc_recall_at_5_max value: 25.545265063475455 - type: nauc_recall_at_5_std value: 14.736172663072786 - type: ndcg_at_1 value: 93.999 - type: ndcg_at_10 value: 94.83200000000001 - type: ndcg_at_100 value: 95.363 - type: ndcg_at_1000 value: 95.478 - type: ndcg_at_20 value: 95.077 - type: ndcg_at_3 value: 94.143 - type: ndcg_at_5 value: 94.525 - type: precision_at_1 value: 93.999 - type: precision_at_10 value: 11.029 - type: precision_at_100 value: 1.1560000000000001 - type: precision_at_1000 value: 0.11800000000000001 - type: precision_at_20 value: 5.62 - type: precision_at_3 value: 35.219 - type: precision_at_5 value: 21.584 - type: recall_at_1 value: 87.339 - type: recall_at_10 value: 97.026 - type: recall_at_100 value: 98.936 - type: recall_at_1000 value: 99.599 - type: recall_at_20 value: 97.744 - type: recall_at_3 value: 95.069 - type: recall_at_5 value: 96.177 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: mteb/fiqa config: default split: test revision: 27a168819829fe9bcd655c2df245fb19452e8e06 metrics: - type: main_score value: 60.480000000000004 - type: map_at_1 value: 31.529 - type: map_at_10 value: 52.081 - type: map_at_100 value: 54.342 - type: map_at_1000 value: 54.449000000000005 - type: map_at_20 value: 53.479 - type: map_at_3 value: 45.471000000000004 - type: map_at_5 value: 49.164 - type: mrr_at_1 value: 60.03086419753087 - type: mrr_at_10 value: 67.73754409171075 - type: mrr_at_100 value: 68.332432152368 - type: mrr_at_1000 value: 68.34150941774908 - type: mrr_at_20 value: 68.14780993838725 - type: mrr_at_3 value: 65.6378600823045 - type: mrr_at_5 value: 66.88014403292176 - type: nauc_map_at_1000_diff1 value: 45.36598134579052 - type: nauc_map_at_1000_max value: 31.891451119906943 - type: nauc_map_at_1000_std value: -15.41454384137943 - type: nauc_map_at_100_diff1 value: 45.31268291874018 - type: nauc_map_at_100_max value: 31.811055683002092 - type: nauc_map_at_100_std value: -15.348503855591417 - type: nauc_map_at_10_diff1 value: 45.22606983565892 - type: nauc_map_at_10_max value: 30.46108534749699 - type: nauc_map_at_10_std value: -16.618086029682555 - type: nauc_map_at_1_diff1 value: 49.94952823753276 - type: nauc_map_at_1_max value: 13.770377574254548 - type: nauc_map_at_1_std value: -14.946357968858653 - type: nauc_map_at_20_diff1 value: 45.29274207897926 - type: nauc_map_at_20_max value: 31.27332015148257 - type: nauc_map_at_20_std value: -15.782946115613129 - type: nauc_map_at_3_diff1 value: 47.94248233566038 - type: nauc_map_at_3_max value: 24.022838776825456 - type: nauc_map_at_3_std value: -17.103518542262208 - type: nauc_map_at_5_diff1 value: 45.85345590031722 - type: nauc_map_at_5_max value: 27.78341379004547 - type: nauc_map_at_5_std value: -17.490850791756326 - type: nauc_mrr_at_1000_diff1 value: 58.225141047822824 - type: nauc_mrr_at_1000_max value: 43.39606904140525 - type: nauc_mrr_at_1000_std value: -14.64093518199122 - type: nauc_mrr_at_100_diff1 value: 58.22137274179545 - type: nauc_mrr_at_100_max value: 43.39567568136935 - type: nauc_mrr_at_100_std value: -14.62512313985582 - type: nauc_mrr_at_10_diff1 value: 58.03217329957151 - type: nauc_mrr_at_10_max value: 43.633561683075186 - type: nauc_mrr_at_10_std value: -14.563703576023808 - type: nauc_mrr_at_1_diff1 value: 61.48979902647692 - type: nauc_mrr_at_1_max value: 43.1938079066948 - type: nauc_mrr_at_1_std value: -15.808138277440465 - type: nauc_mrr_at_20_diff1 value: 58.13185370150794 - type: nauc_mrr_at_20_max value: 43.35607721183147 - type: nauc_mrr_at_20_std value: -14.635812702971263 - type: nauc_mrr_at_3_diff1 value: 58.698963168321264 - type: nauc_mrr_at_3_max value: 43.633129249785405 - type: nauc_mrr_at_3_std value: -15.733246346983854 - type: nauc_mrr_at_5_diff1 value: 57.94156745229547 - type: nauc_mrr_at_5_max value: 43.14152462640525 - type: nauc_mrr_at_5_std value: -15.318685307750895 - type: nauc_ndcg_at_1000_diff1 value: 47.871896043731496 - type: nauc_ndcg_at_1000_max value: 37.159845167533426 - type: nauc_ndcg_at_1000_std value: -13.067288160833485 - type: nauc_ndcg_at_100_diff1 value: 47.046171407204426 - type: nauc_ndcg_at_100_max value: 36.422514360855835 - type: nauc_ndcg_at_100_std value: -11.636859259571441 - type: nauc_ndcg_at_10_diff1 value: 46.232628149078096 - type: nauc_ndcg_at_10_max value: 34.82402625088358 - type: nauc_ndcg_at_10_std value: -14.768545542980114 - type: nauc_ndcg_at_1_diff1 value: 61.48979902647692 - type: nauc_ndcg_at_1_max value: 43.1938079066948 - type: nauc_ndcg_at_1_std value: -15.808138277440465 - type: nauc_ndcg_at_20_diff1 value: 46.51116172390955 - type: nauc_ndcg_at_20_max value: 35.36362650568298 - type: nauc_ndcg_at_20_std value: -12.849406209182826 - type: nauc_ndcg_at_3_diff1 value: 47.39832263785871 - type: nauc_ndcg_at_3_max value: 35.67466264628456 - type: nauc_ndcg_at_3_std value: -17.257717349296943 - type: nauc_ndcg_at_5_diff1 value: 45.91049493804232 - type: nauc_ndcg_at_5_max value: 33.8405091138445 - type: nauc_ndcg_at_5_std value: -17.477069902735895 - type: nauc_precision_at_1000_diff1 value: -12.037873000917767 - type: nauc_precision_at_1000_max value: 26.043220150002295 - type: nauc_precision_at_1000_std value: 6.84910668321572 - type: nauc_precision_at_100_diff1 value: -9.383403459051864 - type: nauc_precision_at_100_max value: 29.68713170610003 - type: nauc_precision_at_100_std value: 10.079531587056152 - type: nauc_precision_at_10_diff1 value: 3.3433323353925135 - type: nauc_precision_at_10_max value: 38.31790111725993 - type: nauc_precision_at_10_std value: 0.7888123304710856 - type: nauc_precision_at_1_diff1 value: 61.48979902647692 - type: nauc_precision_at_1_max value: 43.1938079066948 - type: nauc_precision_at_1_std value: -15.808138277440465 - type: nauc_precision_at_20_diff1 value: -2.083500986294448 - type: nauc_precision_at_20_max value: 35.77143835726343 - type: nauc_precision_at_20_std value: 5.318547021874003 - type: nauc_precision_at_3_diff1 value: 23.335617788912586 - type: nauc_precision_at_3_max value: 39.81973275320871 - type: nauc_precision_at_3_std value: -8.442769390555561 - type: nauc_precision_at_5_diff1 value: 11.521087842589482 - type: nauc_precision_at_5_max value: 39.527792539828255 - type: nauc_precision_at_5_std value: -5.412729503701626 - type: nauc_recall_at_1000_diff1 value: 10.6830893047453 - type: nauc_recall_at_1000_max value: 8.834504311238423 - type: nauc_recall_at_1000_std value: 24.670754304859692 - type: nauc_recall_at_100_diff1 value: 20.646020385527358 - type: nauc_recall_at_100_max value: 20.121595011523294 - type: nauc_recall_at_100_std value: 19.42307459311791 - type: nauc_recall_at_10_diff1 value: 33.01029313733417 - type: nauc_recall_at_10_max value: 27.948634980368702 - type: nauc_recall_at_10_std value: -10.239767371462975 - type: nauc_recall_at_1_diff1 value: 49.94952823753276 - type: nauc_recall_at_1_max value: 13.770377574254548 - type: nauc_recall_at_1_std value: -14.946357968858653 - type: nauc_recall_at_20_diff1 value: 30.040111045267963 - type: nauc_recall_at_20_max value: 25.984919302418184 - type: nauc_recall_at_20_std value: -1.4998001817460804 - type: nauc_recall_at_3_diff1 value: 42.24410559113653 - type: nauc_recall_at_3_max value: 20.269503583626914 - type: nauc_recall_at_3_std value: -17.09578532600584 - type: nauc_recall_at_5_diff1 value: 36.124149735848945 - type: nauc_recall_at_5_max value: 22.708022306002622 - type: nauc_recall_at_5_std value: -16.966976847236193 - type: ndcg_at_1 value: 60.031 - type: ndcg_at_10 value: 60.480000000000004 - type: ndcg_at_100 value: 66.94099999999999 - type: ndcg_at_1000 value: 68.303 - type: ndcg_at_20 value: 63.536 - type: ndcg_at_3 value: 55.903999999999996 - type: ndcg_at_5 value: 57.387 - type: precision_at_1 value: 60.031 - type: precision_at_10 value: 16.682 - type: precision_at_100 value: 2.336 - type: precision_at_1000 value: 0.259 - type: precision_at_20 value: 9.66 - type: precision_at_3 value: 37.191 - type: precision_at_5 value: 27.253 - type: recall_at_1 value: 31.529 - type: recall_at_10 value: 68.035 - type: recall_at_100 value: 90.925 - type: recall_at_1000 value: 98.688 - type: recall_at_20 value: 77.453 - type: recall_at_3 value: 50.221000000000004 - type: recall_at_5 value: 58.209999999999994 - task: type: Retrieval dataset: name: MTEB HotpotQA type: mteb/hotpotqa config: default split: test revision: ab518f4d6fcca38d87c25209f94beba119d02014 metrics: - type: main_score value: 76.67399999999999 - type: map_at_1 value: 43.822 - type: map_at_10 value: 68.82000000000001 - type: map_at_100 value: 69.659 - type: map_at_1000 value: 69.714 - type: map_at_20 value: 69.305 - type: map_at_3 value: 65.517 - type: map_at_5 value: 67.633 - type: mrr_at_1 value: 87.643484132343 - type: mrr_at_10 value: 91.28134679485098 - type: mrr_at_100 value: 91.37985230614755 - type: mrr_at_1000 value: 91.38202467630681 - type: mrr_at_20 value: 91.34718855278429 - type: mrr_at_3 value: 90.75849651136599 - type: mrr_at_5 value: 91.10961062345235 - type: nauc_map_at_1000_diff1 value: 3.7670405082837477 - type: nauc_map_at_1000_max value: 14.410594409695182 - type: nauc_map_at_1000_std value: 7.94738583292685 - type: nauc_map_at_100_diff1 value: 3.738796209193936 - type: nauc_map_at_100_max value: 14.408029101534694 - type: nauc_map_at_100_std value: 7.979641077687816 - type: nauc_map_at_10_diff1 value: 3.334917978089454 - type: nauc_map_at_10_max value: 13.975255289147748 - type: nauc_map_at_10_std value: 7.491959628012161 - type: nauc_map_at_1_diff1 value: 75.35066482050009 - type: nauc_map_at_1_max value: 53.573503488571475 - type: nauc_map_at_1_std value: -6.542030594426993 - type: nauc_map_at_20_diff1 value: 3.5197129341582083 - type: nauc_map_at_20_max value: 14.159880698006816 - type: nauc_map_at_20_std value: 7.856574384998483 - type: nauc_map_at_3_diff1 value: 3.0992333232864064 - type: nauc_map_at_3_max value: 12.513959281222112 - type: nauc_map_at_3_std value: 4.352912866014865 - type: nauc_map_at_5_diff1 value: 3.0351688998572537 - type: nauc_map_at_5_max value: 13.21599457624529 - type: nauc_map_at_5_std value: 6.246882983214777 - type: nauc_mrr_at_1000_diff1 value: 75.23953736361132 - type: nauc_mrr_at_1000_max value: 56.64260717262164 - type: nauc_mrr_at_1000_std value: -4.865932053762276 - type: nauc_mrr_at_100_diff1 value: 75.24091372816497 - type: nauc_mrr_at_100_max value: 56.64831104504846 - type: nauc_mrr_at_100_std value: -4.850966297943324 - type: nauc_mrr_at_10_diff1 value: 75.26540178053416 - type: nauc_mrr_at_10_max value: 56.828755673428965 - type: nauc_mrr_at_10_std value: -4.8401126970944635 - type: nauc_mrr_at_1_diff1 value: 75.35066482050009 - type: nauc_mrr_at_1_max value: 53.573503488571475 - type: nauc_mrr_at_1_std value: -6.542030594426993 - type: nauc_mrr_at_20_diff1 value: 75.24453050729845 - type: nauc_mrr_at_20_max value: 56.69220588401435 - type: nauc_mrr_at_20_std value: -4.843700730832108 - type: nauc_mrr_at_3_diff1 value: 74.98411648336175 - type: nauc_mrr_at_3_max value: 56.766537573537114 - type: nauc_mrr_at_3_std value: -4.909712671649337 - type: nauc_mrr_at_5_diff1 value: 75.20599020991028 - type: nauc_mrr_at_5_max value: 56.64236207782237 - type: nauc_mrr_at_5_std value: -5.208907367513977 - type: nauc_ndcg_at_1000_diff1 value: 11.48307079099774 - type: nauc_ndcg_at_1000_max value: 20.893326881675176 - type: nauc_ndcg_at_1000_std value: 10.43489838692119 - type: nauc_ndcg_at_100_diff1 value: 10.395588735754927 - type: nauc_ndcg_at_100_max value: 20.529573302516912 - type: nauc_ndcg_at_100_std value: 11.252973083654268 - type: nauc_ndcg_at_10_diff1 value: 8.596739352741972 - type: nauc_ndcg_at_10_max value: 18.475863682540673 - type: nauc_ndcg_at_10_std value: 9.175831033463352 - type: nauc_ndcg_at_1_diff1 value: 75.35066482050009 - type: nauc_ndcg_at_1_max value: 53.573503488571475 - type: nauc_ndcg_at_1_std value: -6.542030594426993 - type: nauc_ndcg_at_20_diff1 value: 8.998033972471749 - type: nauc_ndcg_at_20_max value: 18.892085875404522 - type: nauc_ndcg_at_20_std value: 10.3241608901084 - type: nauc_ndcg_at_3_diff1 value: 8.796384949533579 - type: nauc_ndcg_at_3_max value: 16.515261419885274 - type: nauc_ndcg_at_3_std value: 4.081902976576701 - type: nauc_ndcg_at_5_diff1 value: 8.277259464605025 - type: nauc_ndcg_at_5_max value: 17.163053202909527 - type: nauc_ndcg_at_5_std value: 6.652669449704474 - type: nauc_precision_at_1000_diff1 value: -3.490556596304827 - type: nauc_precision_at_1000_max value: 31.0473259001597 - type: nauc_precision_at_1000_std value: 52.36921397692622 - type: nauc_precision_at_100_diff1 value: -6.420747959222489 - type: nauc_precision_at_100_max value: 20.555887056005936 - type: nauc_precision_at_100_std value: 36.119132870798495 - type: nauc_precision_at_10_diff1 value: -6.461726057290426 - type: nauc_precision_at_10_max value: 12.161081825341915 - type: nauc_precision_at_10_std value: 17.961318451839993 - type: nauc_precision_at_1_diff1 value: 75.35066482050009 - type: nauc_precision_at_1_max value: 53.573503488571475 - type: nauc_precision_at_1_std value: -6.542030594426993 - type: nauc_precision_at_20_diff1 value: -7.361461296416161 - type: nauc_precision_at_20_max value: 12.663621261696733 - type: nauc_precision_at_20_std value: 23.312476851670286 - type: nauc_precision_at_3_diff1 value: -3.299056912774522 - type: nauc_precision_at_3_max value: 9.85602375812038 - type: nauc_precision_at_3_std value: 6.4962782003155475 - type: nauc_precision_at_5_diff1 value: -5.3155827772027795 - type: nauc_precision_at_5_max value: 10.32907751171833 - type: nauc_precision_at_5_std value: 11.384098087196932 - type: nauc_recall_at_1000_diff1 value: -3.4905565963043332 - type: nauc_recall_at_1000_max value: 31.04732590016041 - type: nauc_recall_at_1000_std value: 52.36921397692641 - type: nauc_recall_at_100_diff1 value: -6.420747959222586 - type: nauc_recall_at_100_max value: 20.55588705600596 - type: nauc_recall_at_100_std value: 36.11913287079825 - type: nauc_recall_at_10_diff1 value: -6.461726057290347 - type: nauc_recall_at_10_max value: 12.161081825342022 - type: nauc_recall_at_10_std value: 17.96131845184002 - type: nauc_recall_at_1_diff1 value: 75.35066482050009 - type: nauc_recall_at_1_max value: 53.573503488571475 - type: nauc_recall_at_1_std value: -6.542030594426993 - type: nauc_recall_at_20_diff1 value: -7.361461296416054 - type: nauc_recall_at_20_max value: 12.66362126169679 - type: nauc_recall_at_20_std value: 23.312476851670382 - type: nauc_recall_at_3_diff1 value: -3.2990569127745886 - type: nauc_recall_at_3_max value: 9.856023758120296 - type: nauc_recall_at_3_std value: 6.496278200315444 - type: nauc_recall_at_5_diff1 value: -5.315582777202729 - type: nauc_recall_at_5_max value: 10.329077511718229 - type: nauc_recall_at_5_std value: 11.384098087196932 - type: ndcg_at_1 value: 87.643 - type: ndcg_at_10 value: 76.67399999999999 - type: ndcg_at_100 value: 79.462 - type: ndcg_at_1000 value: 80.43599999999999 - type: ndcg_at_20 value: 77.83 - type: ndcg_at_3 value: 72.256 - type: ndcg_at_5 value: 74.789 - type: precision_at_1 value: 87.643 - type: precision_at_10 value: 15.726999999999999 - type: precision_at_100 value: 1.791 - type: precision_at_1000 value: 0.192 - type: precision_at_20 value: 8.236 - type: precision_at_3 value: 45.919 - type: precision_at_5 value: 29.558 - type: recall_at_1 value: 43.822 - type: recall_at_10 value: 78.636 - type: recall_at_100 value: 89.527 - type: recall_at_1000 value: 95.868 - type: recall_at_20 value: 82.363 - type: recall_at_3 value: 68.879 - type: recall_at_5 value: 73.896 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 96.6608 - type: ap value: 95.14657820401189 - type: ap_weighted value: 95.14657820401189 - type: f1 value: 96.66029695623422 - type: f1_weighted value: 96.66029695623423 - type: main_score value: 96.6608 - task: type: Retrieval dataset: name: MTEB MSMARCO type: mteb/msmarco config: default split: dev revision: c5a29a104738b98a9e76336939199e264163d4a0 metrics: - type: main_score value: 45.217 - type: map_at_1 value: 24.728 - type: map_at_10 value: 37.933 - type: map_at_100 value: 39.074999999999996 - type: map_at_1000 value: 39.115 - type: map_at_20 value: 38.663 - type: map_at_3 value: 33.904 - type: map_at_5 value: 36.217 - type: mrr_at_1 value: 25.44412607449857 - type: mrr_at_10 value: 38.52640196479737 - type: mrr_at_100 value: 39.60462889736067 - type: mrr_at_1000 value: 39.638904296248526 - type: mrr_at_20 value: 39.2234365827559 - type: mrr_at_3 value: 34.59646609360076 - type: mrr_at_5 value: 36.8801337153773 - type: nauc_map_at_1000_diff1 value: 37.645652178132174 - type: nauc_map_at_1000_max value: 9.953357023361367 - type: nauc_map_at_1000_std value: -20.800238036721503 - type: nauc_map_at_100_diff1 value: 37.643073495974555 - type: nauc_map_at_100_max value: 9.95921239641703 - type: nauc_map_at_100_std value: -20.76517765535793 - type: nauc_map_at_10_diff1 value: 37.44380763335014 - type: nauc_map_at_10_max value: 9.917273043055342 - type: nauc_map_at_10_std value: -21.467951225710898 - type: nauc_map_at_1_diff1 value: 41.02118887981969 - type: nauc_map_at_1_max value: 8.301113449711778 - type: nauc_map_at_1_std value: -19.436814224415027 - type: nauc_map_at_20_diff1 value: 37.58156586490493 - type: nauc_map_at_20_max value: 9.972927967610659 - type: nauc_map_at_20_std value: -20.951374218839387 - type: nauc_map_at_3_diff1 value: 37.67246795684178 - type: nauc_map_at_3_max value: 9.307031378909478 - type: nauc_map_at_3_std value: -21.77026217965021 - type: nauc_map_at_5_diff1 value: 37.39086482095963 - type: nauc_map_at_5_max value: 9.732739107368566 - type: nauc_map_at_5_std value: -21.8424296893692 - type: nauc_mrr_at_1000_diff1 value: 37.36666719603192 - type: nauc_mrr_at_1000_max value: 9.79040465289953 - type: nauc_mrr_at_1000_std value: -20.590147245965568 - type: nauc_mrr_at_100_diff1 value: 37.36560296629318 - type: nauc_mrr_at_100_max value: 9.798113710672162 - type: nauc_mrr_at_100_std value: -20.556791838504292 - type: nauc_mrr_at_10_diff1 value: 37.19257605840734 - type: nauc_mrr_at_10_max value: 9.749429811638063 - type: nauc_mrr_at_10_std value: -21.206407664327276 - type: nauc_mrr_at_1_diff1 value: 40.98478651095172 - type: nauc_mrr_at_1_max value: 8.173841799119707 - type: nauc_mrr_at_1_std value: -19.530027987868017 - type: nauc_mrr_at_20_diff1 value: 37.29973172861245 - type: nauc_mrr_at_20_max value: 9.815127660001345 - type: nauc_mrr_at_20_std value: -20.700860112175928 - type: nauc_mrr_at_3_diff1 value: 37.282848009425734 - type: nauc_mrr_at_3_max value: 9.172741713108193 - type: nauc_mrr_at_3_std value: -21.563630513502996 - type: nauc_mrr_at_5_diff1 value: 37.08609827303586 - type: nauc_mrr_at_5_max value: 9.604643424273284 - type: nauc_mrr_at_5_std value: -21.580110806494094 - type: nauc_ndcg_at_1000_diff1 value: 37.086587020218545 - type: nauc_ndcg_at_1000_max value: 10.696860688467472 - type: nauc_ndcg_at_1000_std value: -19.50989939916873 - type: nauc_ndcg_at_100_diff1 value: 37.03794531268128 - type: nauc_ndcg_at_100_max value: 10.940820719182339 - type: nauc_ndcg_at_100_std value: -18.28651832370893 - type: nauc_ndcg_at_10_diff1 value: 36.21062857920633 - type: nauc_ndcg_at_10_max value: 10.845172882571733 - type: nauc_ndcg_at_10_std value: -21.454301679510106 - type: nauc_ndcg_at_1_diff1 value: 40.98478651095172 - type: nauc_ndcg_at_1_max value: 8.173841799119707 - type: nauc_ndcg_at_1_std value: -19.530027987868017 - type: nauc_ndcg_at_20_diff1 value: 36.583262733100526 - type: nauc_ndcg_at_20_max value: 11.10492720898974 - type: nauc_ndcg_at_20_std value: -19.41753284137609 - type: nauc_ndcg_at_3_diff1 value: 36.57271365035382 - type: nauc_ndcg_at_3_max value: 9.56073433062999 - type: nauc_ndcg_at_3_std value: -22.324263670932915 - type: nauc_ndcg_at_5_diff1 value: 36.09419372820154 - type: nauc_ndcg_at_5_max value: 10.357384992631271 - type: nauc_ndcg_at_5_std value: -22.389578276324894 - type: nauc_precision_at_1000_diff1 value: -2.7435338714030597 - type: nauc_precision_at_1000_max value: 4.302274933383809 - type: nauc_precision_at_1000_std value: 8.456846348638948 - type: nauc_precision_at_100_diff1 value: 15.149466332615983 - type: nauc_precision_at_100_max value: 12.501013731673163 - type: nauc_precision_at_100_std value: 15.909667509021785 - type: nauc_precision_at_10_diff1 value: 28.699788688314214 - type: nauc_precision_at_10_max value: 13.024586051842347 - type: nauc_precision_at_10_std value: -19.197658937078703 - type: nauc_precision_at_1_diff1 value: 40.98478651095172 - type: nauc_precision_at_1_max value: 8.173841799119707 - type: nauc_precision_at_1_std value: -19.530027987868017 - type: nauc_precision_at_20_diff1 value: 26.519292942353395 - type: nauc_precision_at_20_max value: 14.389979272056438 - type: nauc_precision_at_20_std value: -7.030956994938155 - type: nauc_precision_at_3_diff1 value: 32.87913492278213 - type: nauc_precision_at_3_max value: 9.673660161387776 - type: nauc_precision_at_3_std value: -23.905612656592172 - type: nauc_precision_at_5_diff1 value: 30.903850113238597 - type: nauc_precision_at_5_max value: 11.482375434154898 - type: nauc_precision_at_5_std value: -23.828657095254247 - type: nauc_recall_at_1000_diff1 value: 35.80765639589219 - type: nauc_recall_at_1000_max value: 50.94532805969448 - type: nauc_recall_at_1000_std value: 66.79910877083275 - type: nauc_recall_at_100_diff1 value: 34.96182828311028 - type: nauc_recall_at_100_max value: 21.729699631790556 - type: nauc_recall_at_100_std value: 23.509439011686474 - type: nauc_recall_at_10_diff1 value: 31.88371369567137 - type: nauc_recall_at_10_max value: 14.425389702697073 - type: nauc_recall_at_10_std value: -20.95578001880924 - type: nauc_recall_at_1_diff1 value: 41.02118887981969 - type: nauc_recall_at_1_max value: 8.301113449711778 - type: nauc_recall_at_1_std value: -19.436814224415027 - type: nauc_recall_at_20_diff1 value: 32.42718780622455 - type: nauc_recall_at_20_max value: 16.90686126329399 - type: nauc_recall_at_20_std value: -9.38158227016737 - type: nauc_recall_at_3_diff1 value: 33.68966646043966 - type: nauc_recall_at_3_max value: 10.336277419708532 - type: nauc_recall_at_3_std value: -23.80165869168538 - type: nauc_recall_at_5_diff1 value: 32.26258807452426 - type: nauc_recall_at_5_max value: 12.303713005399935 - type: nauc_recall_at_5_std value: -23.87721891164968 - type: ndcg_at_1 value: 25.444 - type: ndcg_at_10 value: 45.217 - type: ndcg_at_100 value: 50.575 - type: ndcg_at_1000 value: 51.519999999999996 - type: ndcg_at_20 value: 47.786 - type: ndcg_at_3 value: 37.067 - type: ndcg_at_5 value: 41.184 - type: precision_at_1 value: 25.444 - type: precision_at_10 value: 7.07 - type: precision_at_100 value: 0.9730000000000001 - type: precision_at_1000 value: 0.106 - type: precision_at_20 value: 4.072 - type: precision_at_3 value: 15.754999999999999 - type: precision_at_5 value: 11.544 - type: recall_at_1 value: 24.728 - type: recall_at_10 value: 67.607 - type: recall_at_100 value: 92.094 - type: recall_at_1000 value: 99.165 - type: recall_at_20 value: 77.529 - type: recall_at_3 value: 45.535 - type: recall_at_5 value: 55.394 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 99.01276789785682 - type: f1 value: 98.9288649250924 - type: f1_weighted value: 99.01406884928141 - type: main_score value: 99.01276789785682 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 92.78385772913816 - type: f1 value: 79.78115704297824 - type: f1_weighted value: 93.90424147486428 - type: main_score value: 92.78385772913816 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 85.83053127101546 - type: f1 value: 82.72036139888232 - type: f1_weighted value: 85.81759723866098 - type: main_score value: 85.83053127101546 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 90.19838601210489 - type: f1 value: 89.55260197964978 - type: f1_weighted value: 90.11422965504119 - type: main_score value: 90.19838601210489 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: main_score value: 46.866746897607094 - type: v_measure value: 46.866746897607094 - type: v_measure_std value: 1.0966477896919726 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: main_score value: 44.6538827415503 - type: v_measure value: 44.6538827415503 - type: v_measure_std value: 1.1649569936599116 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 59042f120c80e8afa9cdbb224f67076cec0fc9a7 metrics: - type: main_score value: 33.05449204940555 - type: map value: 33.05449204940555 - type: mrr value: 34.32562058439585 - type: nAUC_map_diff1 value: 11.465656013162807 - type: nAUC_map_max value: -20.400088169502308 - type: nAUC_map_std value: -2.638964886362445 - type: nAUC_mrr_diff1 value: 10.644290702481207 - type: nAUC_mrr_max value: -15.304687384645769 - type: nAUC_mrr_std value: -0.519919931348978 - task: type: Retrieval dataset: name: MTEB NFCorpus type: mteb/nfcorpus config: default split: test revision: ec0fa4fe99da2ff19ca1214b7966684033a58814 metrics: - type: main_score value: 41.998000000000005 - type: map_at_1 value: 6.907000000000001 - type: map_at_10 value: 16.397000000000002 - type: map_at_100 value: 21.69 - type: map_at_1000 value: 23.652 - type: map_at_20 value: 18.629 - type: map_at_3 value: 11.969000000000001 - type: map_at_5 value: 13.894 - type: mrr_at_1 value: 53.25077399380805 - type: mrr_at_10 value: 61.8561108653988 - type: mrr_at_100 value: 62.42447851935404 - type: mrr_at_1000 value: 62.459626424428095 - type: mrr_at_20 value: 62.287236389990696 - type: mrr_at_3 value: 60.42311661506711 - type: mrr_at_5 value: 61.36738906088753 - type: nauc_map_at_1000_diff1 value: 17.159461939643844 - type: nauc_map_at_1000_max value: 32.42764938789903 - type: nauc_map_at_1000_std value: 11.039427848422093 - type: nauc_map_at_100_diff1 value: 19.089532984187503 - type: nauc_map_at_100_max value: 31.96721085058713 - type: nauc_map_at_100_std value: 6.947468655726444 - type: nauc_map_at_10_diff1 value: 25.77255342629802 - type: nauc_map_at_10_max value: 26.163590320961543 - type: nauc_map_at_10_std value: -5.2588093720998375 - type: nauc_map_at_1_diff1 value: 46.31602607957798 - type: nauc_map_at_1_max value: 11.807757660801942 - type: nauc_map_at_1_std value: -13.984889089354317 - type: nauc_map_at_20_diff1 value: 22.308161130465365 - type: nauc_map_at_20_max value: 29.070587307827722 - type: nauc_map_at_20_std value: -1.0103056620851558 - type: nauc_map_at_3_diff1 value: 33.580827849617506 - type: nauc_map_at_3_max value: 17.661630885799042 - type: nauc_map_at_3_std value: -11.463282544041888 - type: nauc_map_at_5_diff1 value: 30.32603342696912 - type: nauc_map_at_5_max value: 20.938905485667245 - type: nauc_map_at_5_std value: -10.537086968155755 - type: nauc_mrr_at_1000_diff1 value: 24.45065397805829 - type: nauc_mrr_at_1000_max value: 48.17519860927417 - type: nauc_mrr_at_1000_std value: 30.350767549118903 - type: nauc_mrr_at_100_diff1 value: 24.444061606534486 - type: nauc_mrr_at_100_max value: 48.1922894212229 - type: nauc_mrr_at_100_std value: 30.379257816584094 - type: nauc_mrr_at_10_diff1 value: 24.25598717198779 - type: nauc_mrr_at_10_max value: 48.10437607774264 - type: nauc_mrr_at_10_std value: 30.090202482685996 - type: nauc_mrr_at_1_diff1 value: 26.907595285201264 - type: nauc_mrr_at_1_max value: 44.006974050369955 - type: nauc_mrr_at_1_std value: 26.921001962861062 - type: nauc_mrr_at_20_diff1 value: 24.462771570553738 - type: nauc_mrr_at_20_max value: 48.264688196799746 - type: nauc_mrr_at_20_std value: 30.498095141265914 - type: nauc_mrr_at_3_diff1 value: 24.76829388237229 - type: nauc_mrr_at_3_max value: 48.213758704739924 - type: nauc_mrr_at_3_std value: 30.1502853918892 - type: nauc_mrr_at_5_diff1 value: 24.476494932330247 - type: nauc_mrr_at_5_max value: 47.977250552198804 - type: nauc_mrr_at_5_std value: 29.65248143104835 - type: nauc_ndcg_at_1000_diff1 value: 13.055818920426246 - type: nauc_ndcg_at_1000_max value: 46.00986444256306 - type: nauc_ndcg_at_1000_std value: 29.622662054922085 - type: nauc_ndcg_at_100_diff1 value: 12.260551238228816 - type: nauc_ndcg_at_100_max value: 39.89783048267698 - type: nauc_ndcg_at_100_std value: 23.806961617956613 - type: nauc_ndcg_at_10_diff1 value: 11.002915931619567 - type: nauc_ndcg_at_10_max value: 39.79323759244374 - type: nauc_ndcg_at_10_std value: 23.053072152911046 - type: nauc_ndcg_at_1_diff1 value: 27.560910719974434 - type: nauc_ndcg_at_1_max value: 41.21084046258119 - type: nauc_ndcg_at_1_std value: 26.112891742912893 - type: nauc_ndcg_at_20_diff1 value: 10.085854089024496 - type: nauc_ndcg_at_20_max value: 37.88629173784684 - type: nauc_ndcg_at_20_std value: 23.17664322248358 - type: nauc_ndcg_at_3_diff1 value: 16.58969583405987 - type: nauc_ndcg_at_3_max value: 41.282222954101435 - type: nauc_ndcg_at_3_std value: 21.080670648392747 - type: nauc_ndcg_at_5_diff1 value: 13.893127947909885 - type: nauc_ndcg_at_5_max value: 40.21188015992804 - type: nauc_ndcg_at_5_std value: 21.417443978842652 - type: nauc_precision_at_1000_diff1 value: -17.227504530334564 - type: nauc_precision_at_1000_max value: 3.798554468439066 - type: nauc_precision_at_1000_std value: 35.73617809452683 - type: nauc_precision_at_100_diff1 value: -17.63388230218776 - type: nauc_precision_at_100_max value: 15.079399882407094 - type: nauc_precision_at_100_std value: 41.83698491321226 - type: nauc_precision_at_10_diff1 value: -11.850925959645156 - type: nauc_precision_at_10_max value: 35.93283968364352 - type: nauc_precision_at_10_std value: 34.391271855921296 - type: nauc_precision_at_1_diff1 value: 27.730860778824823 - type: nauc_precision_at_1_max value: 43.97462471516834 - type: nauc_precision_at_1_std value: 27.491068270978896 - type: nauc_precision_at_20_diff1 value: -14.281328840943347 - type: nauc_precision_at_20_max value: 29.469099781759006 - type: nauc_precision_at_20_std value: 38.54703022340941 - type: nauc_precision_at_3_diff1 value: 3.486986910413196 - type: nauc_precision_at_3_max value: 41.21107780473768 - type: nauc_precision_at_3_std value: 24.057479124531216 - type: nauc_precision_at_5_diff1 value: -3.0623787872866233 - type: nauc_precision_at_5_max value: 37.49266386466702 - type: nauc_precision_at_5_std value: 26.894454268004935 - type: nauc_recall_at_1000_diff1 value: -2.446891864334283 - type: nauc_recall_at_1000_max value: 23.867293584643377 - type: nauc_recall_at_1000_std value: 16.34707128224595 - type: nauc_recall_at_100_diff1 value: 4.891133690841179 - type: nauc_recall_at_100_max value: 24.56727964996522 - type: nauc_recall_at_100_std value: 9.847212953200797 - type: nauc_recall_at_10_diff1 value: 19.211912363585288 - type: nauc_recall_at_10_max value: 24.825344777920737 - type: nauc_recall_at_10_std value: -5.447989195041898 - type: nauc_recall_at_1_diff1 value: 46.31602607957798 - type: nauc_recall_at_1_max value: 11.807757660801942 - type: nauc_recall_at_1_std value: -13.984889089354317 - type: nauc_recall_at_20_diff1 value: 12.233372054304805 - type: nauc_recall_at_20_max value: 22.284108685207148 - type: nauc_recall_at_20_std value: -4.317138366746209 - type: nauc_recall_at_3_diff1 value: 28.394631527225815 - type: nauc_recall_at_3_max value: 15.593864852625462 - type: nauc_recall_at_3_std value: -12.383531804314593 - type: nauc_recall_at_5_diff1 value: 24.457441304950343 - type: nauc_recall_at_5_max value: 19.080049396281623 - type: nauc_recall_at_5_std value: -11.879747703626627 - type: ndcg_at_1 value: 51.548 - type: ndcg_at_10 value: 41.998000000000005 - type: ndcg_at_100 value: 39.626 - type: ndcg_at_1000 value: 48.707 - type: ndcg_at_20 value: 40.181 - type: ndcg_at_3 value: 48.06 - type: ndcg_at_5 value: 45.829 - type: precision_at_1 value: 52.941 - type: precision_at_10 value: 31.330999999999996 - type: precision_at_100 value: 10.421 - type: precision_at_1000 value: 2.428 - type: precision_at_20 value: 24.118000000000002 - type: precision_at_3 value: 45.408 - type: precision_at_5 value: 39.938 - type: recall_at_1 value: 6.907000000000001 - type: recall_at_10 value: 20.51 - type: recall_at_100 value: 40.857 - type: recall_at_1000 value: 73.616 - type: recall_at_20 value: 26.52 - type: recall_at_3 value: 13.267999999999999 - type: recall_at_5 value: 16.141 - task: type: Retrieval dataset: name: MTEB NQ type: mteb/nq config: default split: test revision: b774495ed302d8c44a3a7ea25c90dbce03968f31 metrics: - type: main_score value: 71.8 - type: map_at_1 value: 47.629 - type: map_at_10 value: 64.846 - type: map_at_100 value: 65.40899999999999 - type: map_at_1000 value: 65.416 - type: map_at_20 value: 65.239 - type: map_at_3 value: 61.185 - type: map_at_5 value: 63.583 - type: mrr_at_1 value: 53.15758980301275 - type: mrr_at_10 value: 67.12880961577366 - type: mrr_at_100 value: 67.44006405426018 - type: mrr_at_1000 value: 67.44519150402294 - type: mrr_at_20 value: 67.34317135515428 - type: mrr_at_3 value: 64.5905755117805 - type: mrr_at_5 value: 66.24613750482806 - type: nauc_map_at_1000_diff1 value: 45.73812106517133 - type: nauc_map_at_1000_max value: 35.21262031755756 - type: nauc_map_at_1000_std value: -5.549443574026027 - type: nauc_map_at_100_diff1 value: 45.74254652176879 - type: nauc_map_at_100_max value: 35.22349167515518 - type: nauc_map_at_100_std value: -5.53697496044773 - type: nauc_map_at_10_diff1 value: 45.62837128377087 - type: nauc_map_at_10_max value: 35.3261562342222 - type: nauc_map_at_10_std value: -5.761924414031163 - type: nauc_map_at_1_diff1 value: 48.69187848570499 - type: nauc_map_at_1_max value: 28.687996096473476 - type: nauc_map_at_1_std value: -7.518605958272523 - type: nauc_map_at_20_diff1 value: 45.702303442220035 - type: nauc_map_at_20_max value: 35.30719944705456 - type: nauc_map_at_20_std value: -5.59505654742681 - type: nauc_map_at_3_diff1 value: 45.376813726832474 - type: nauc_map_at_3_max value: 34.68452149643597 - type: nauc_map_at_3_std value: -7.329014950379634 - type: nauc_map_at_5_diff1 value: 45.29528861989316 - type: nauc_map_at_5_max value: 35.35741440869229 - type: nauc_map_at_5_std value: -6.028788612259288 - type: nauc_mrr_at_1000_diff1 value: 46.11808147912517 - type: nauc_mrr_at_1000_max value: 35.59241850411947 - type: nauc_mrr_at_1000_std value: -3.4072428526109317 - type: nauc_mrr_at_100_diff1 value: 46.121345545514046 - type: nauc_mrr_at_100_max value: 35.60147795073431 - type: nauc_mrr_at_100_std value: -3.3965322447588826 - type: nauc_mrr_at_10_diff1 value: 46.0920068210502 - type: nauc_mrr_at_10_max value: 35.79649987854354 - type: nauc_mrr_at_10_std value: -3.339624589368137 - type: nauc_mrr_at_1_diff1 value: 49.101364605656194 - type: nauc_mrr_at_1_max value: 31.500796071482146 - type: nauc_mrr_at_1_std value: -4.183818500718156 - type: nauc_mrr_at_20_diff1 value: 46.088076630465594 - type: nauc_mrr_at_20_max value: 35.682131663053205 - type: nauc_mrr_at_20_std value: -3.35939023178519 - type: nauc_mrr_at_3_diff1 value: 45.47570812708642 - type: nauc_mrr_at_3_max value: 35.741892517632984 - type: nauc_mrr_at_3_std value: -4.135335963822013 - type: nauc_mrr_at_5_diff1 value: 45.78903474184014 - type: nauc_mrr_at_5_max value: 35.91273593700205 - type: nauc_mrr_at_5_std value: -3.467873421286869 - type: nauc_ndcg_at_1000_diff1 value: 45.5056583000012 - type: nauc_ndcg_at_1000_max value: 36.34328379251593 - type: nauc_ndcg_at_1000_std value: -4.0759698229323345 - type: nauc_ndcg_at_100_diff1 value: 45.61918946477166 - type: nauc_ndcg_at_100_max value: 36.675460335836235 - type: nauc_ndcg_at_100_std value: -3.6795334726235986 - type: nauc_ndcg_at_10_diff1 value: 45.15343994274541 - type: nauc_ndcg_at_10_max value: 37.48139242964657 - type: nauc_ndcg_at_10_std value: -4.287039084554882 - type: nauc_ndcg_at_1_diff1 value: 49.101364605656194 - type: nauc_ndcg_at_1_max value: 31.500796071482146 - type: nauc_ndcg_at_1_std value: -4.183818500718156 - type: nauc_ndcg_at_20_diff1 value: 45.310026313402375 - type: nauc_ndcg_at_20_max value: 37.32177497902133 - type: nauc_ndcg_at_20_std value: -3.8214360391282587 - type: nauc_ndcg_at_3_diff1 value: 44.27064370528994 - type: nauc_ndcg_at_3_max value: 36.380294033571396 - type: nauc_ndcg_at_3_std value: -6.844263370898355 - type: nauc_ndcg_at_5_diff1 value: 44.29933499225583 - type: nauc_ndcg_at_5_max value: 37.46477041822136 - type: nauc_ndcg_at_5_std value: -4.866548530467956 - type: nauc_precision_at_1000_diff1 value: -14.666553359142306 - type: nauc_precision_at_1000_max value: -0.5599759853201481 - type: nauc_precision_at_1000_std value: 16.8370925526591 - type: nauc_precision_at_100_diff1 value: -11.816251306246278 - type: nauc_precision_at_100_max value: 2.969819268208207 - type: nauc_precision_at_100_std value: 18.59422946634747 - type: nauc_precision_at_10_diff1 value: 1.2050200086029401 - type: nauc_precision_at_10_max value: 17.59930352911209 - type: nauc_precision_at_10_std value: 13.714495717588985 - type: nauc_precision_at_1_diff1 value: 49.101364605656194 - type: nauc_precision_at_1_max value: 31.500796071482146 - type: nauc_precision_at_1_std value: -4.183818500718156 - type: nauc_precision_at_20_diff1 value: -5.263476664822757 - type: nauc_precision_at_20_max value: 11.42004823600046 - type: nauc_precision_at_20_std value: 16.510514518664994 - type: nauc_precision_at_3_diff1 value: 20.116460379305828 - type: nauc_precision_at_3_max value: 31.32235038301311 - type: nauc_precision_at_3_std value: 2.7486717133871923 - type: nauc_precision_at_5_diff1 value: 9.57451645335723 - type: nauc_precision_at_5_max value: 25.28449126580587 - type: nauc_precision_at_5_std value: 9.955736162466767 - type: nauc_recall_at_1000_diff1 value: -21.632253065978794 - type: nauc_recall_at_1000_max value: 70.14409090958776 - type: nauc_recall_at_1000_std value: 65.61658090892989 - type: nauc_recall_at_100_diff1 value: 51.83161124806711 - type: nauc_recall_at_100_max value: 77.49921361841523 - type: nauc_recall_at_100_std value: 48.352508746719444 - type: nauc_recall_at_10_diff1 value: 39.86695231362791 - type: nauc_recall_at_10_max value: 50.12029094799474 - type: nauc_recall_at_10_std value: 0.1650940628131058 - type: nauc_recall_at_1_diff1 value: 48.69187848570499 - type: nauc_recall_at_1_max value: 28.687996096473476 - type: nauc_recall_at_1_std value: -7.518605958272523 - type: nauc_recall_at_20_diff1 value: 39.14155398061627 - type: nauc_recall_at_20_max value: 56.78559423716229 - type: nauc_recall_at_20_std value: 7.9728224572344075 - type: nauc_recall_at_3_diff1 value: 38.69589523432158 - type: nauc_recall_at_3_max value: 39.53271258375579 - type: nauc_recall_at_3_std value: -8.646925065787512 - type: nauc_recall_at_5_diff1 value: 37.45922652959002 - type: nauc_recall_at_5_max value: 44.4911958995867 - type: nauc_recall_at_5_std value: -3.5659842556375594 - type: ndcg_at_1 value: 53.15800000000001 - type: ndcg_at_10 value: 71.8 - type: ndcg_at_100 value: 73.85199999999999 - type: ndcg_at_1000 value: 74.017 - type: ndcg_at_20 value: 72.933 - type: ndcg_at_3 value: 65.479 - type: ndcg_at_5 value: 69.182 - type: precision_at_1 value: 53.15800000000001 - type: precision_at_10 value: 10.805 - type: precision_at_100 value: 1.2 - type: precision_at_1000 value: 0.122 - type: precision_at_20 value: 5.694 - type: precision_at_3 value: 28.939999999999998 - type: precision_at_5 value: 19.641000000000002 - type: recall_at_1 value: 47.629 - type: recall_at_10 value: 90.204 - type: recall_at_100 value: 98.66 - type: recall_at_1000 value: 99.874 - type: recall_at_20 value: 94.24 - type: recall_at_3 value: 74.394 - type: recall_at_5 value: 82.711 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: mteb/quora config: default split: test revision: e4e08e0b7dbe3c8700f0daef558ff32256715259 metrics: - type: main_score value: 90.025 - type: map_at_1 value: 72.222 - type: map_at_10 value: 86.58500000000001 - type: map_at_100 value: 87.176 - type: map_at_1000 value: 87.188 - type: map_at_20 value: 86.97399999999999 - type: map_at_3 value: 83.736 - type: map_at_5 value: 85.554 - type: mrr_at_1 value: 83.04 - type: mrr_at_10 value: 89.05599603174585 - type: mrr_at_100 value: 89.12398891419457 - type: mrr_at_1000 value: 89.12434072241001 - type: mrr_at_20 value: 89.10416280692111 - type: mrr_at_3 value: 88.23833333333312 - type: mrr_at_5 value: 88.82233333333308 - type: nauc_map_at_1000_diff1 value: 78.29348113313218 - type: nauc_map_at_1000_max value: 32.31386754277228 - type: nauc_map_at_1000_std value: -50.47543661484052 - type: nauc_map_at_100_diff1 value: 78.29618548618575 - type: nauc_map_at_100_max value: 32.301475680947846 - type: nauc_map_at_100_std value: -50.50303428814228 - type: nauc_map_at_10_diff1 value: 78.47383776440803 - type: nauc_map_at_10_max value: 31.839339990133563 - type: nauc_map_at_10_std value: -52.832713555976 - type: nauc_map_at_1_diff1 value: 82.46330147467418 - type: nauc_map_at_1_max value: 23.497664918373538 - type: nauc_map_at_1_std value: -43.824657665520704 - type: nauc_map_at_20_diff1 value: 78.34772176474422 - type: nauc_map_at_20_max value: 32.16495182893947 - type: nauc_map_at_20_std value: -51.503292726558605 - type: nauc_map_at_3_diff1 value: 79.07823813069432 - type: nauc_map_at_3_max value: 29.395911687513976 - type: nauc_map_at_3_std value: -54.16377546873304 - type: nauc_map_at_5_diff1 value: 78.73076619520454 - type: nauc_map_at_5_max value: 30.700453118585237 - type: nauc_map_at_5_std value: -54.130514177664054 - type: nauc_mrr_at_1000_diff1 value: 79.04736184471865 - type: nauc_mrr_at_1000_max value: 34.43004593837643 - type: nauc_mrr_at_1000_std value: -46.137269068195316 - type: nauc_mrr_at_100_diff1 value: 79.04698704288086 - type: nauc_mrr_at_100_max value: 34.4305553741175 - type: nauc_mrr_at_100_std value: -46.13786687786434 - type: nauc_mrr_at_10_diff1 value: 79.04490677485934 - type: nauc_mrr_at_10_max value: 34.38170181522227 - type: nauc_mrr_at_10_std value: -46.38129875681807 - type: nauc_mrr_at_1_diff1 value: 79.87159215719124 - type: nauc_mrr_at_1_max value: 34.05882339253136 - type: nauc_mrr_at_1_std value: -43.56093395137571 - type: nauc_mrr_at_20_diff1 value: 79.04384174535653 - type: nauc_mrr_at_20_max value: 34.442136494675005 - type: nauc_mrr_at_20_std value: -46.205458519638654 - type: nauc_mrr_at_3_diff1 value: 78.78154519155487 - type: nauc_mrr_at_3_max value: 34.74995000500305 - type: nauc_mrr_at_3_std value: -46.36264203155416 - type: nauc_mrr_at_5_diff1 value: 79.02631187177 - type: nauc_mrr_at_5_max value: 34.538698249632205 - type: nauc_mrr_at_5_std value: -46.468881576157465 - type: nauc_ndcg_at_1000_diff1 value: 78.25260097014645 - type: nauc_ndcg_at_1000_max value: 33.68584498704271 - type: nauc_ndcg_at_1000_std value: -48.44716779494868 - type: nauc_ndcg_at_100_diff1 value: 78.25115412256716 - type: nauc_ndcg_at_100_max value: 33.63652663447088 - type: nauc_ndcg_at_100_std value: -48.489243909024715 - type: nauc_ndcg_at_10_diff1 value: 78.23875101557334 - type: nauc_ndcg_at_10_max value: 32.65217430043823 - type: nauc_ndcg_at_10_std value: -52.57770468845309 - type: nauc_ndcg_at_1_diff1 value: 79.87159215719124 - type: nauc_ndcg_at_1_max value: 34.05882339253136 - type: nauc_ndcg_at_1_std value: -43.56093395137571 - type: nauc_ndcg_at_20_diff1 value: 78.23478552311765 - type: nauc_ndcg_at_20_max value: 33.30691737901109 - type: nauc_ndcg_at_20_std value: -50.78412614854527 - type: nauc_ndcg_at_3_diff1 value: 77.66134485470224 - type: nauc_ndcg_at_3_max value: 32.19504710373125 - type: nauc_ndcg_at_3_std value: -52.01636728550155 - type: nauc_ndcg_at_5_diff1 value: 78.04734137324255 - type: nauc_ndcg_at_5_max value: 31.94593625591248 - type: nauc_ndcg_at_5_std value: -53.02169800690546 - type: nauc_precision_at_1000_diff1 value: -45.771948123542636 - type: nauc_precision_at_1000_max value: -5.182406190477681 - type: nauc_precision_at_1000_std value: 41.14460438707817 - type: nauc_precision_at_100_diff1 value: -45.64767154261461 - type: nauc_precision_at_100_max value: -5.046308286851713 - type: nauc_precision_at_100_std value: 41.07186716587844 - type: nauc_precision_at_10_diff1 value: -42.26779562305825 - type: nauc_precision_at_10_max value: -1.1264852893323076 - type: nauc_precision_at_10_std value: 27.62275729822392 - type: nauc_precision_at_1_diff1 value: 79.87159215719124 - type: nauc_precision_at_1_max value: 34.05882339253136 - type: nauc_precision_at_1_std value: -43.56093395137571 - type: nauc_precision_at_20_diff1 value: -44.24293221128388 - type: nauc_precision_at_20_max value: -3.1345628837361867 - type: nauc_precision_at_20_std value: 34.23625492740366 - type: nauc_precision_at_3_diff1 value: -24.925251389823348 - type: nauc_precision_at_3_max value: 6.622188833369412 - type: nauc_precision_at_3_std value: 6.424741786858512 - type: nauc_precision_at_5_diff1 value: -36.1407949990387 - type: nauc_precision_at_5_max value: 1.7533948968374462 - type: nauc_precision_at_5_std value: 17.914083278982634 - type: nauc_recall_at_1000_diff1 value: 52.26815466244496 - type: nauc_recall_at_1000_max value: 69.73611104239443 - type: nauc_recall_at_1000_std value: 73.18969965863008 - type: nauc_recall_at_100_diff1 value: 70.80557513785271 - type: nauc_recall_at_100_max value: 33.333440086544556 - type: nauc_recall_at_100_std value: -38.75992366905504 - type: nauc_recall_at_10_diff1 value: 74.45948457438163 - type: nauc_recall_at_10_max value: 26.64948512428989 - type: nauc_recall_at_10_std value: -82.90334292052363 - type: nauc_recall_at_1_diff1 value: 82.46330147467418 - type: nauc_recall_at_1_max value: 23.497664918373538 - type: nauc_recall_at_1_std value: -43.824657665520704 - type: nauc_recall_at_20_diff1 value: 73.80140280887753 - type: nauc_recall_at_20_max value: 30.361616426734965 - type: nauc_recall_at_20_std value: -81.1418804447414 - type: nauc_recall_at_3_diff1 value: 75.19854736087834 - type: nauc_recall_at_3_max value: 26.12298005045584 - type: nauc_recall_at_3_std value: -63.42583714745169 - type: nauc_recall_at_5_diff1 value: 74.16423451950358 - type: nauc_recall_at_5_max value: 25.552390331018987 - type: nauc_recall_at_5_std value: -71.15891947773912 - type: ndcg_at_1 value: 83.04 - type: ndcg_at_10 value: 90.025 - type: ndcg_at_100 value: 91.006 - type: ndcg_at_1000 value: 91.061 - type: ndcg_at_20 value: 90.556 - type: ndcg_at_3 value: 87.493 - type: ndcg_at_5 value: 88.955 - type: precision_at_1 value: 83.04 - type: precision_at_10 value: 13.667000000000002 - type: precision_at_100 value: 1.542 - type: precision_at_1000 value: 0.157 - type: precision_at_20 value: 7.221 - type: precision_at_3 value: 38.433 - type: precision_at_5 value: 25.228 - type: recall_at_1 value: 72.222 - type: recall_at_10 value: 96.604 - type: recall_at_100 value: 99.786 - type: recall_at_1000 value: 99.996 - type: recall_at_20 value: 98.253 - type: recall_at_3 value: 89.276 - type: recall_at_5 value: 93.46 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: main_score value: 72.86492101891123 - type: v_measure value: 72.86492101891123 - type: v_measure_std value: 2.778711445144635 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 385e3cb46b4cfa89021f56c4380204149d0efe33 metrics: - type: main_score value: 75.27316726548479 - type: v_measure value: 75.27316726548479 - type: v_measure_std value: 8.87871936725338 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: mteb/scidocs config: default split: test revision: f8c2fcf00f625baaa80f62ec5bd9e1fff3b8ae88 metrics: - type: main_score value: 26.638 - type: map_at_1 value: 6.128 - type: map_at_10 value: 16.472 - type: map_at_100 value: 19.522000000000002 - type: map_at_1000 value: 19.898 - type: map_at_20 value: 18.098 - type: map_at_3 value: 11.283 - type: map_at_5 value: 13.771 - type: mrr_at_1 value: 30.2 - type: mrr_at_10 value: 42.621150793650735 - type: mrr_at_100 value: 43.740858712021954 - type: mrr_at_1000 value: 43.762699500220904 - type: mrr_at_20 value: 43.383639927753634 - type: mrr_at_3 value: 38.83333333333331 - type: mrr_at_5 value: 41.14833333333326 - type: nauc_map_at_1000_diff1 value: 13.13534664124808 - type: nauc_map_at_1000_max value: 29.346654566149795 - type: nauc_map_at_1000_std value: 18.08121186982413 - type: nauc_map_at_100_diff1 value: 13.098072728041538 - type: nauc_map_at_100_max value: 29.299084480697523 - type: nauc_map_at_100_std value: 17.961620202918464 - type: nauc_map_at_10_diff1 value: 14.001743720394682 - type: nauc_map_at_10_max value: 28.04128290996403 - type: nauc_map_at_10_std value: 13.744481555974716 - type: nauc_map_at_1_diff1 value: 22.1926640424872 - type: nauc_map_at_1_max value: 21.32609279586034 - type: nauc_map_at_1_std value: 6.566596302915438 - type: nauc_map_at_20_diff1 value: 13.57313142419664 - type: nauc_map_at_20_max value: 28.93840146319476 - type: nauc_map_at_20_std value: 16.50869367365676 - type: nauc_map_at_3_diff1 value: 17.707700541948462 - type: nauc_map_at_3_max value: 26.058174051376238 - type: nauc_map_at_3_std value: 9.943924560735267 - type: nauc_map_at_5_diff1 value: 17.11844492157723 - type: nauc_map_at_5_max value: 27.865247403049388 - type: nauc_map_at_5_std value: 11.372588172121546 - type: nauc_mrr_at_1000_diff1 value: 21.11248719936198 - type: nauc_mrr_at_1000_max value: 26.734172102201466 - type: nauc_mrr_at_1000_std value: 11.766121765437228 - type: nauc_mrr_at_100_diff1 value: 21.107109982277702 - type: nauc_mrr_at_100_max value: 26.741616065723267 - type: nauc_mrr_at_100_std value: 11.789802686224208 - type: nauc_mrr_at_10_diff1 value: 20.74108639793207 - type: nauc_mrr_at_10_max value: 26.920838463358333 - type: nauc_mrr_at_10_std value: 11.849217361926522 - type: nauc_mrr_at_1_diff1 value: 22.177437860573356 - type: nauc_mrr_at_1_max value: 21.88074521417754 - type: nauc_mrr_at_1_std value: 6.776011900101789 - type: nauc_mrr_at_20_diff1 value: 21.126633710175994 - type: nauc_mrr_at_20_max value: 26.860736480370974 - type: nauc_mrr_at_20_std value: 11.815411633726338 - type: nauc_mrr_at_3_diff1 value: 21.689245200066466 - type: nauc_mrr_at_3_max value: 26.187305092831625 - type: nauc_mrr_at_3_std value: 10.895380313134332 - type: nauc_mrr_at_5_diff1 value: 20.898811082479778 - type: nauc_mrr_at_5_max value: 26.939217247104036 - type: nauc_mrr_at_5_std value: 11.77832949822472 - type: nauc_ndcg_at_1000_diff1 value: 13.251184947898546 - type: nauc_ndcg_at_1000_max value: 30.879594164526146 - type: nauc_ndcg_at_1000_std value: 23.125206047366625 - type: nauc_ndcg_at_100_diff1 value: 12.549100649053676 - type: nauc_ndcg_at_100_max value: 30.634680845419123 - type: nauc_ndcg_at_100_std value: 23.296226055422984 - type: nauc_ndcg_at_10_diff1 value: 14.475144549294322 - type: nauc_ndcg_at_10_max value: 29.450349815417336 - type: nauc_ndcg_at_10_std value: 15.94068314781612 - type: nauc_ndcg_at_1_diff1 value: 22.177437860573356 - type: nauc_ndcg_at_1_max value: 21.88074521417754 - type: nauc_ndcg_at_1_std value: 6.776011900101789 - type: nauc_ndcg_at_20_diff1 value: 14.173669585802266 - type: nauc_ndcg_at_20_max value: 30.475890854725 - type: nauc_ndcg_at_20_std value: 19.863898148221704 - type: nauc_ndcg_at_3_diff1 value: 18.93971261196868 - type: nauc_ndcg_at_3_max value: 27.3707298720736 - type: nauc_ndcg_at_3_std value: 11.439810510051224 - type: nauc_ndcg_at_5_diff1 value: 17.89535958094687 - type: nauc_ndcg_at_5_max value: 29.272740466638425 - type: nauc_ndcg_at_5_std value: 13.402467626635909 - type: nauc_precision_at_1000_diff1 value: -3.811547048784123 - type: nauc_precision_at_1000_max value: 22.55165337197117 - type: nauc_precision_at_1000_std value: 35.98524999650108 - type: nauc_precision_at_100_diff1 value: 0.6474234774922896 - type: nauc_precision_at_100_max value: 25.06920726527032 - type: nauc_precision_at_100_std value: 32.31439698982313 - type: nauc_precision_at_10_diff1 value: 7.943127218139508 - type: nauc_precision_at_10_max value: 28.571937636787197 - type: nauc_precision_at_10_std value: 18.8472620918488 - type: nauc_precision_at_1_diff1 value: 22.177437860573356 - type: nauc_precision_at_1_max value: 21.88074521417754 - type: nauc_precision_at_1_std value: 6.776011900101789 - type: nauc_precision_at_20_diff1 value: 6.981574259607366 - type: nauc_precision_at_20_max value: 28.986094397038727 - type: nauc_precision_at_20_std value: 25.83129974001146 - type: nauc_precision_at_3_diff1 value: 17.197490724039355 - type: nauc_precision_at_3_max value: 29.17569320583099 - type: nauc_precision_at_3_std value: 13.430554945991846 - type: nauc_precision_at_5_diff1 value: 14.952364330739362 - type: nauc_precision_at_5_max value: 31.053243354846977 - type: nauc_precision_at_5_std value: 15.856312752807822 - type: nauc_recall_at_1000_diff1 value: -4.8224253128926975 - type: nauc_recall_at_1000_max value: 21.3989024429911 - type: nauc_recall_at_1000_std value: 39.152234275603604 - type: nauc_recall_at_100_diff1 value: 0.11936808422867201 - type: nauc_recall_at_100_max value: 24.261739241957823 - type: nauc_recall_at_100_std value: 32.62984573938928 - type: nauc_recall_at_10_diff1 value: 7.851256165018388 - type: nauc_recall_at_10_max value: 27.936406600938746 - type: nauc_recall_at_10_std value: 18.683634320636113 - type: nauc_recall_at_1_diff1 value: 22.1926640424872 - type: nauc_recall_at_1_max value: 21.32609279586034 - type: nauc_recall_at_1_std value: 6.566596302915438 - type: nauc_recall_at_20_diff1 value: 6.8107211705182165 - type: nauc_recall_at_20_max value: 28.286284094687787 - type: nauc_recall_at_20_std value: 25.932013268120862 - type: nauc_recall_at_3_diff1 value: 17.04156818427151 - type: nauc_recall_at_3_max value: 28.645439108719216 - type: nauc_recall_at_3_std value: 13.346047828494411 - type: nauc_recall_at_5_diff1 value: 14.906284329771822 - type: nauc_recall_at_5_max value: 30.58628602415921 - type: nauc_recall_at_5_std value: 15.755157478191755 - type: ndcg_at_1 value: 30.2 - type: ndcg_at_10 value: 26.638 - type: ndcg_at_100 value: 37.135 - type: ndcg_at_1000 value: 42.576 - type: ndcg_at_20 value: 30.75 - type: ndcg_at_3 value: 24.675 - type: ndcg_at_5 value: 21.836 - type: precision_at_1 value: 30.2 - type: precision_at_10 value: 14.06 - type: precision_at_100 value: 2.904 - type: precision_at_1000 value: 0.42 - type: precision_at_20 value: 9.4 - type: precision_at_3 value: 23.233 - type: precision_at_5 value: 19.439999999999998 - type: recall_at_1 value: 6.128 - type: recall_at_10 value: 28.471999999999998 - type: recall_at_100 value: 58.952000000000005 - type: recall_at_1000 value: 85.137 - type: recall_at_20 value: 38.17 - type: recall_at_3 value: 14.127999999999998 - type: recall_at_5 value: 19.673 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: 20a6d6f312dd54037fe07a32d58e5e168867909d metrics: - type: cosine_pearson value: 86.86608529160739 - type: cosine_spearman value: 82.88625166203383 - type: euclidean_pearson value: 84.15494418856142 - type: euclidean_spearman value: 82.88449294676421 - type: main_score value: 82.88625166203383 - type: manhattan_pearson value: 84.39068623474428 - type: manhattan_spearman value: 82.88065412169463 - type: pearson value: 86.86608529160739 - type: spearman value: 82.88625166203383 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cosine_pearson value: 87.0445014940449 - type: cosine_spearman value: 80.0880365116599 - type: euclidean_pearson value: 83.80250772928852 - type: euclidean_spearman value: 80.0892465260778 - type: main_score value: 80.0880365116599 - type: manhattan_pearson value: 83.96793981929336 - type: manhattan_spearman value: 80.24881789268238 - type: pearson value: 87.0445014940449 - type: spearman value: 80.0880365116599 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cosine_pearson value: 89.33900828959968 - type: cosine_spearman value: 89.68256358526733 - type: euclidean_pearson value: 89.29188708262265 - type: euclidean_spearman value: 89.68204344658601 - type: main_score value: 89.68256358526733 - type: manhattan_pearson value: 89.13996588193149 - type: manhattan_spearman value: 89.61372804425623 - type: pearson value: 89.33900828959968 - type: spearman value: 89.68256358526733 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cosine_pearson value: 86.42029843639123 - type: cosine_spearman value: 85.0707889220723 - type: euclidean_pearson value: 85.75114239552562 - type: euclidean_spearman value: 85.06858160270725 - type: main_score value: 85.0707889220723 - type: manhattan_pearson value: 85.86461900459038 - type: manhattan_spearman value: 85.28671103475605 - type: pearson value: 86.42029843639123 - type: spearman value: 85.0707889220723 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cosine_pearson value: 88.3660081271444 - type: cosine_spearman value: 89.39375083609528 - type: euclidean_pearson value: 89.21818482894895 - type: euclidean_spearman value: 89.39361588875443 - type: main_score value: 89.39375083609528 - type: manhattan_pearson value: 89.53535068014057 - type: manhattan_spearman value: 89.81077130567752 - type: pearson value: 88.3660081271444 - type: spearman value: 89.39375083609528 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cosine_pearson value: 85.60708247171874 - type: cosine_spearman value: 87.15234952832193 - type: euclidean_pearson value: 86.21743555548137 - type: euclidean_spearman value: 87.14450217418016 - type: main_score value: 87.15234952832193 - type: manhattan_pearson value: 86.2467748746084 - type: manhattan_spearman value: 87.2197479717654 - type: pearson value: 85.60708247171874 - type: spearman value: 87.15234952832193 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: faeb762787bd10488a50c8b5be4a3b82e411949c metrics: - type: cosine_pearson value: 91.25898556808458 - type: cosine_spearman value: 91.35372390581641 - type: euclidean_pearson value: 91.319520321348 - type: euclidean_spearman value: 91.30821135416925 - type: main_score value: 91.35372390581641 - type: manhattan_pearson value: 91.14800959939069 - type: manhattan_spearman value: 91.09775424245629 - type: pearson value: 91.25898556808458 - type: spearman value: 91.35372390581641 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 metrics: - type: cosine_pearson value: 67.61637111515797 - type: cosine_spearman value: 68.10379096526697 - type: euclidean_pearson value: 69.2652309491375 - type: euclidean_spearman value: 68.18436357033228 - type: main_score value: 68.10379096526697 - type: manhattan_pearson value: 69.52531340510775 - type: manhattan_spearman value: 68.17874790391862 - type: pearson value: 67.61637111515797 - type: spearman value: 68.10379096526697 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cosine_pearson value: 87.81592853782297 - type: cosine_spearman value: 88.2302550329183 - type: euclidean_pearson value: 88.01165144519526 - type: euclidean_spearman value: 88.23342148890097 - type: main_score value: 88.2302550329183 - type: manhattan_pearson value: 88.148592564938 - type: manhattan_spearman value: 88.49226317320988 - type: pearson value: 87.81592853782297 - type: spearman value: 88.2302550329183 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: main_score value: 89.196009707431 - type: map value: 89.196009707431 - type: mrr value: 97.07198121413808 - type: nAUC_map_diff1 value: -14.066667940115352 - type: nAUC_map_max value: 49.73702475027407 - type: nAUC_map_std value: 64.0986775782592 - type: nAUC_mrr_diff1 value: 21.96846389417319 - type: nAUC_mrr_max value: 86.38341077184032 - type: nAUC_mrr_std value: 75.38945014727746 - task: type: Retrieval dataset: name: MTEB SciFact type: mteb/scifact config: default split: test revision: 0228b52cf27578f30900b9e5271d331663a030d7 metrics: - type: main_score value: 80.08999999999999 - type: map_at_1 value: 63.161 - type: map_at_10 value: 75.163 - type: map_at_100 value: 75.408 - type: map_at_1000 value: 75.409 - type: map_at_20 value: 75.332 - type: map_at_3 value: 71.839 - type: map_at_5 value: 74.32600000000001 - type: mrr_at_1 value: 66.33333333333333 - type: mrr_at_10 value: 75.95978835978836 - type: mrr_at_100 value: 76.15647881281473 - type: mrr_at_1000 value: 76.15736533763744 - type: mrr_at_20 value: 76.08557368557368 - type: mrr_at_3 value: 73.55555555555556 - type: mrr_at_5 value: 75.4888888888889 - type: nauc_map_at_1000_diff1 value: 77.31229383811176 - type: nauc_map_at_1000_max value: 58.848319058605156 - type: nauc_map_at_1000_std value: -14.290090263454985 - type: nauc_map_at_100_diff1 value: 77.31325400213969 - type: nauc_map_at_100_max value: 58.848885054155275 - type: nauc_map_at_100_std value: -14.285806618869273 - type: nauc_map_at_10_diff1 value: 77.1806705504232 - type: nauc_map_at_10_max value: 59.02905805134415 - type: nauc_map_at_10_std value: -14.132954900037467 - type: nauc_map_at_1_diff1 value: 81.03932970557837 - type: nauc_map_at_1_max value: 49.02073230264529 - type: nauc_map_at_1_std value: -22.977452975845512 - type: nauc_map_at_20_diff1 value: 77.22581364818562 - type: nauc_map_at_20_max value: 58.90740400399768 - type: nauc_map_at_20_std value: -14.245079150986745 - type: nauc_map_at_3_diff1 value: 76.99793243255563 - type: nauc_map_at_3_max value: 54.9930733886623 - type: nauc_map_at_3_std value: -19.297708446082407 - type: nauc_map_at_5_diff1 value: 77.1671608360295 - type: nauc_map_at_5_max value: 57.27757489519526 - type: nauc_map_at_5_std value: -15.446338357667708 - type: nauc_mrr_at_1000_diff1 value: 77.4806080821202 - type: nauc_mrr_at_1000_max value: 60.9213776129792 - type: nauc_mrr_at_1000_std value: -12.139599632228343 - type: nauc_mrr_at_100_diff1 value: 77.48158073865281 - type: nauc_mrr_at_100_max value: 60.9218657185361 - type: nauc_mrr_at_100_std value: -12.13532070453677 - type: nauc_mrr_at_10_diff1 value: 77.32428546014407 - type: nauc_mrr_at_10_max value: 61.018407010343466 - type: nauc_mrr_at_10_std value: -12.143193773309347 - type: nauc_mrr_at_1_diff1 value: 80.99806778887115 - type: nauc_mrr_at_1_max value: 59.17855969530095 - type: nauc_mrr_at_1_std value: -12.30545640831458 - type: nauc_mrr_at_20_diff1 value: 77.3811067653992 - type: nauc_mrr_at_20_max value: 60.9648880366335 - type: nauc_mrr_at_20_std value: -12.124066076541853 - type: nauc_mrr_at_3_diff1 value: 77.31304316321959 - type: nauc_mrr_at_3_max value: 60.75536766404163 - type: nauc_mrr_at_3_std value: -12.997876030849623 - type: nauc_mrr_at_5_diff1 value: 77.12952864141742 - type: nauc_mrr_at_5_max value: 60.995943754968685 - type: nauc_mrr_at_5_std value: -11.353447465605694 - type: nauc_ndcg_at_1000_diff1 value: 76.81788665683746 - type: nauc_ndcg_at_1000_max value: 60.35947755262391 - type: nauc_ndcg_at_1000_std value: -12.884942372460362 - type: nauc_ndcg_at_100_diff1 value: 76.87388230365198 - type: nauc_ndcg_at_100_max value: 60.38813162962434 - type: nauc_ndcg_at_100_std value: -12.64384717800478 - type: nauc_ndcg_at_10_diff1 value: 75.87713506026317 - type: nauc_ndcg_at_10_max value: 61.39356554675667 - type: nauc_ndcg_at_10_std value: -12.144227584144218 - type: nauc_ndcg_at_1_diff1 value: 80.99806778887115 - type: nauc_ndcg_at_1_max value: 59.17855969530095 - type: nauc_ndcg_at_1_std value: -12.30545640831458 - type: nauc_ndcg_at_20_diff1 value: 76.09913944506627 - type: nauc_ndcg_at_20_max value: 61.01644448834147 - type: nauc_ndcg_at_20_std value: -12.456209267623857 - type: nauc_ndcg_at_3_diff1 value: 75.52717946614608 - type: nauc_ndcg_at_3_max value: 58.96433090721983 - type: nauc_ndcg_at_3_std value: -15.849280494339556 - type: nauc_ndcg_at_5_diff1 value: 75.69026981016921 - type: nauc_ndcg_at_5_max value: 58.924044405851326 - type: nauc_ndcg_at_5_std value: -13.182728827923107 - type: nauc_precision_at_1000_diff1 value: -31.634022001609914 - type: nauc_precision_at_1000_max value: 31.46271490784504 - type: nauc_precision_at_1000_std value: 60.44801276891442 - type: nauc_precision_at_100_diff1 value: -29.722363469948103 - type: nauc_precision_at_100_max value: 32.05464592020074 - type: nauc_precision_at_100_std value: 60.832570595613554 - type: nauc_precision_at_10_diff1 value: -11.91731376599939 - type: nauc_precision_at_10_max value: 45.43646553157129 - type: nauc_precision_at_10_std value: 52.962408871791276 - type: nauc_precision_at_1_diff1 value: 80.99806778887115 - type: nauc_precision_at_1_max value: 59.17855969530095 - type: nauc_precision_at_1_std value: -12.30545640831458 - type: nauc_precision_at_20_diff1 value: -18.43293701721667 - type: nauc_precision_at_20_max value: 39.53434874203934 - type: nauc_precision_at_20_std value: 53.6291982468461 - type: nauc_precision_at_3_diff1 value: 30.84789043003892 - type: nauc_precision_at_3_max value: 55.660727758110376 - type: nauc_precision_at_3_std value: 17.87243920840355 - type: nauc_precision_at_5_diff1 value: 4.099395181445625 - type: nauc_precision_at_5_max value: 50.346770968709386 - type: nauc_precision_at_5_std value: 44.66722483255029 - type: nauc_recall_at_1000_diff1 value: .nan - type: nauc_recall_at_1000_max value: .nan - type: nauc_recall_at_1000_std value: .nan - type: nauc_recall_at_100_diff1 value: 100.0 - type: nauc_recall_at_100_max value: 72.2222222222207 - type: nauc_recall_at_100_std value: 86.92810457516407 - type: nauc_recall_at_10_diff1 value: 62.18887555022005 - type: nauc_recall_at_10_max value: 75.14339068960916 - type: nauc_recall_at_10_std value: -1.4912631719357108 - type: nauc_recall_at_1_diff1 value: 81.03932970557837 - type: nauc_recall_at_1_max value: 49.02073230264529 - type: nauc_recall_at_1_std value: -22.977452975845512 - type: nauc_recall_at_20_diff1 value: 59.27414444038499 - type: nauc_recall_at_20_max value: 76.32241302318047 - type: nauc_recall_at_20_std value: -0.8322169447488666 - type: nauc_recall_at_3_diff1 value: 69.58783002593157 - type: nauc_recall_at_3_max value: 55.89660919896563 - type: nauc_recall_at_3_std value: -21.183005510917862 - type: nauc_recall_at_5_diff1 value: 65.53660499878802 - type: nauc_recall_at_5_max value: 58.218018535135805 - type: nauc_recall_at_5_std value: -8.328952210032455 - type: ndcg_at_1 value: 66.333 - type: ndcg_at_10 value: 80.08999999999999 - type: ndcg_at_100 value: 81.24900000000001 - type: ndcg_at_1000 value: 81.28800000000001 - type: ndcg_at_20 value: 80.625 - type: ndcg_at_3 value: 74.98700000000001 - type: ndcg_at_5 value: 78.553 - type: precision_at_1 value: 66.333 - type: precision_at_10 value: 10.667 - type: precision_at_100 value: 1.127 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_20 value: 5.45 - type: precision_at_3 value: 29.555999999999997 - type: precision_at_5 value: 20.133000000000003 - type: recall_at_1 value: 63.161 - type: recall_at_10 value: 94.167 - type: recall_at_100 value: 99.667 - type: recall_at_1000 value: 100.0 - type: recall_at_20 value: 96.167 - type: recall_at_3 value: 80.972 - type: recall_at_5 value: 89.90599999999999 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cosine_accuracy value: 99.81881188118813 - type: cosine_accuracy_threshold value: 85.55081486701965 - type: cosine_ap value: 96.0359661816236 - type: cosine_f1 value: 90.6584992343032 - type: cosine_f1_threshold value: 84.82859134674072 - type: cosine_precision value: 92.59645464025026 - type: cosine_recall value: 88.8 - type: dot_accuracy value: 99.81881188118813 - type: dot_accuracy_threshold value: 84.91908311843872 - type: dot_ap value: 96.05740121094365 - type: dot_f1 value: 90.81885856079404 - type: dot_f1_threshold value: 83.84919166564941 - type: dot_precision value: 90.14778325123153 - type: dot_recall value: 91.5 - type: euclidean_accuracy value: 99.82079207920792 - type: euclidean_accuracy_threshold value: 54.49706315994263 - type: euclidean_ap value: 96.03223527068818 - type: euclidean_f1 value: 90.72270630445925 - type: euclidean_f1_threshold value: 54.49706315994263 - type: euclidean_precision value: 93.05993690851734 - type: euclidean_recall value: 88.5 - type: main_score value: 96.32671902439806 - type: manhattan_accuracy value: 99.83267326732673 - type: manhattan_accuracy_threshold value: 3818.192672729492 - type: manhattan_ap value: 96.32671902439806 - type: manhattan_f1 value: 91.52032112393378 - type: manhattan_f1_threshold value: 3818.192672729492 - type: manhattan_precision value: 91.8429003021148 - type: manhattan_recall value: 91.2 - type: max_ap value: 96.32671902439806 - type: max_f1 value: 91.52032112393378 - type: max_precision value: 93.05993690851734 - type: max_recall value: 91.5 - type: similarity_accuracy value: 99.81881188118813 - type: similarity_accuracy_threshold value: 85.55081486701965 - type: similarity_ap value: 96.0359661816236 - type: similarity_f1 value: 90.6584992343032 - type: similarity_f1_threshold value: 84.82859134674072 - type: similarity_precision value: 92.59645464025026 - type: similarity_recall value: 88.8 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: main_score value: 80.28558559137414 - type: v_measure value: 80.28558559137414 - type: v_measure_std value: 2.795276520287584 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: main_score value: 49.57135582416209 - type: v_measure value: 49.57135582416209 - type: v_measure_std value: 1.6414135468423754 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: main_score value: 55.253002583598644 - type: map value: 55.253002583598644 - type: mrr value: 56.24172396231219 - type: nAUC_map_diff1 value: 40.00053248203427 - type: nAUC_map_max value: 10.05441740585869 - type: nAUC_map_std value: 8.227169286387552 - type: nAUC_mrr_diff1 value: 40.250446264233744 - type: nAUC_mrr_max value: 10.586310195339053 - type: nAUC_mrr_std value: 8.47326494370076 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cosine_pearson value: 31.19874648747059 - type: cosine_spearman value: 31.493550648844863 - type: dot_pearson value: 31.157847680289407 - type: dot_spearman value: 31.575299712180538 - type: main_score value: 31.493550648844863 - type: pearson value: 31.19874648747059 - type: spearman value: 31.493550648844863 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: mteb/trec-covid config: default split: test revision: bb9466bac8153a0349341eb1b22e06409e78ef4e metrics: - type: main_score value: 85.983 - type: map_at_1 value: 0.247 - type: map_at_10 value: 2.177 - type: map_at_100 value: 14.804 - type: map_at_1000 value: 37.045 - type: map_at_20 value: 4.12 - type: map_at_3 value: 0.7000000000000001 - type: map_at_5 value: 1.1320000000000001 - type: mrr_at_1 value: 96.0 - type: mrr_at_10 value: 98.0 - type: mrr_at_100 value: 98.0 - type: mrr_at_1000 value: 98.0 - type: mrr_at_20 value: 98.0 - type: mrr_at_3 value: 98.0 - type: mrr_at_5 value: 98.0 - type: nauc_map_at_1000_diff1 value: -0.9165125200337213 - type: nauc_map_at_1000_max value: 40.260117798042764 - type: nauc_map_at_1000_std value: 71.72789335831554 - type: nauc_map_at_100_diff1 value: 20.493827311583953 - type: nauc_map_at_100_max value: 21.005742079276462 - type: nauc_map_at_100_std value: 62.53815607831659 - type: nauc_map_at_10_diff1 value: 31.289297684528215 - type: nauc_map_at_10_max value: 7.86554294370268 - type: nauc_map_at_10_std value: 37.26191657133897 - type: nauc_map_at_1_diff1 value: 25.57568148849456 - type: nauc_map_at_1_max value: -5.9767435623941445 - type: nauc_map_at_1_std value: 30.849871717506755 - type: nauc_map_at_20_diff1 value: 30.896018204532087 - type: nauc_map_at_20_max value: 8.667077299744314 - type: nauc_map_at_20_std value: 41.512687168412924 - type: nauc_map_at_3_diff1 value: 29.44724521006598 - type: nauc_map_at_3_max value: 1.597496889532064 - type: nauc_map_at_3_std value: 32.25013773854697 - type: nauc_map_at_5_diff1 value: 27.387036605618825 - type: nauc_map_at_5_max value: 5.402983746211454 - type: nauc_map_at_5_std value: 33.940523962472184 - type: nauc_mrr_at_1000_diff1 value: -14.122315592903503 - type: nauc_mrr_at_1000_max value: 33.84687208216605 - type: nauc_mrr_at_1000_std value: 86.11111111111092 - type: nauc_mrr_at_100_diff1 value: -14.122315592903503 - type: nauc_mrr_at_100_max value: 33.84687208216605 - type: nauc_mrr_at_100_std value: 86.11111111111092 - type: nauc_mrr_at_10_diff1 value: -14.122315592903503 - type: nauc_mrr_at_10_max value: 33.84687208216605 - type: nauc_mrr_at_10_std value: 86.11111111111092 - type: nauc_mrr_at_1_diff1 value: -14.122315592903831 - type: nauc_mrr_at_1_max value: 33.84687208216637 - type: nauc_mrr_at_1_std value: 86.11111111111124 - type: nauc_mrr_at_20_diff1 value: -14.122315592903503 - type: nauc_mrr_at_20_max value: 33.84687208216605 - type: nauc_mrr_at_20_std value: 86.11111111111092 - type: nauc_mrr_at_3_diff1 value: -14.122315592903503 - type: nauc_mrr_at_3_max value: 33.84687208216605 - type: nauc_mrr_at_3_std value: 86.11111111111092 - type: nauc_mrr_at_5_diff1 value: -14.122315592903503 - type: nauc_mrr_at_5_max value: 33.84687208216605 - type: nauc_mrr_at_5_std value: 86.11111111111092 - type: nauc_ndcg_at_1000_diff1 value: 8.745907669561928 - type: nauc_ndcg_at_1000_max value: 45.43307237994533 - type: nauc_ndcg_at_1000_std value: 74.93357447176336 - type: nauc_ndcg_at_100_diff1 value: -3.9719350773353765 - type: nauc_ndcg_at_100_max value: 44.43705332397461 - type: nauc_ndcg_at_100_std value: 61.59493812371758 - type: nauc_ndcg_at_10_diff1 value: 15.230915878367348 - type: nauc_ndcg_at_10_max value: 48.332840970836635 - type: nauc_ndcg_at_10_std value: 46.888785065125774 - type: nauc_ndcg_at_1_diff1 value: 13.219732337379442 - type: nauc_ndcg_at_1_max value: 45.19919078742603 - type: nauc_ndcg_at_1_std value: 64.68253968253977 - type: nauc_ndcg_at_20_diff1 value: 12.479648691964865 - type: nauc_ndcg_at_20_max value: 48.76688248450331 - type: nauc_ndcg_at_20_std value: 51.450399755887545 - type: nauc_ndcg_at_3_diff1 value: 6.165414201871464 - type: nauc_ndcg_at_3_max value: 45.089689347691035 - type: nauc_ndcg_at_3_std value: 41.08249161845213 - type: nauc_ndcg_at_5_diff1 value: 7.411245806844721 - type: nauc_ndcg_at_5_max value: 47.818748093538076 - type: nauc_ndcg_at_5_std value: 45.907685763676575 - type: nauc_precision_at_1000_diff1 value: -30.574290219847345 - type: nauc_precision_at_1000_max value: 32.56926126118719 - type: nauc_precision_at_1000_std value: 14.584504392628874 - type: nauc_precision_at_100_diff1 value: -10.199740234718847 - type: nauc_precision_at_100_max value: 41.0213226769777 - type: nauc_precision_at_100_std value: 56.975760776771324 - type: nauc_precision_at_10_diff1 value: 7.865792689701161 - type: nauc_precision_at_10_max value: 52.00432275201737 - type: nauc_precision_at_10_std value: 43.89512276413724 - type: nauc_precision_at_1_diff1 value: -14.122315592903831 - type: nauc_precision_at_1_max value: 33.84687208216637 - type: nauc_precision_at_1_std value: 86.11111111111124 - type: nauc_precision_at_20_diff1 value: 5.481424191880084 - type: nauc_precision_at_20_max value: 46.86629331792725 - type: nauc_precision_at_20_std value: 49.245692667517496 - type: nauc_precision_at_3_diff1 value: -5.870408807869163 - type: nauc_precision_at_3_max value: 48.73657612128875 - type: nauc_precision_at_3_std value: 41.15152062088262 - type: nauc_precision_at_5_diff1 value: -4.550610529125413 - type: nauc_precision_at_5_max value: 60.390115878205386 - type: nauc_precision_at_5_std value: 44.16494295055696 - type: nauc_recall_at_1000_diff1 value: 8.047794367079034 - type: nauc_recall_at_1000_max value: 37.07551482870489 - type: nauc_recall_at_1000_std value: 66.20862163364201 - type: nauc_recall_at_100_diff1 value: 25.08104923597475 - type: nauc_recall_at_100_max value: 9.971294642165734 - type: nauc_recall_at_100_std value: 51.737814074891254 - type: nauc_recall_at_10_diff1 value: 32.33148478369628 - type: nauc_recall_at_10_max value: 1.3767192150014917 - type: nauc_recall_at_10_std value: 30.801926742876308 - type: nauc_recall_at_1_diff1 value: 25.57568148849456 - type: nauc_recall_at_1_max value: -5.9767435623941445 - type: nauc_recall_at_1_std value: 30.849871717506755 - type: nauc_recall_at_20_diff1 value: 31.716580022934654 - type: nauc_recall_at_20_max value: -0.1281270579464631 - type: nauc_recall_at_20_std value: 33.76185294993676 - type: nauc_recall_at_3_diff1 value: 29.758810004388348 - type: nauc_recall_at_3_max value: -1.9442985017191816 - type: nauc_recall_at_3_std value: 27.45550076962206 - type: nauc_recall_at_5_diff1 value: 27.047710181576672 - type: nauc_recall_at_5_max value: 1.5237000700880248 - type: nauc_recall_at_5_std value: 28.235297950159698 - type: ndcg_at_1 value: 94.0 - type: ndcg_at_10 value: 85.983 - type: ndcg_at_100 value: 69.195 - type: ndcg_at_1000 value: 62.541000000000004 - type: ndcg_at_20 value: 83.405 - type: ndcg_at_3 value: 89.98899999999999 - type: ndcg_at_5 value: 87.905 - type: precision_at_1 value: 96.0 - type: precision_at_10 value: 89.4 - type: precision_at_100 value: 71.54 - type: precision_at_1000 value: 27.594 - type: precision_at_20 value: 87.2 - type: precision_at_3 value: 92.667 - type: precision_at_5 value: 90.8 - type: recall_at_1 value: 0.247 - type: recall_at_10 value: 2.315 - type: recall_at_100 value: 17.574 - type: recall_at_1000 value: 59.336999999999996 - type: recall_at_20 value: 4.491 - type: recall_at_3 value: 0.7250000000000001 - type: recall_at_5 value: 1.1820000000000002 - task: type: Retrieval dataset: name: MTEB Touche2020 type: mteb/touche2020 config: default split: test revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f metrics: - type: main_score value: 29.944 - type: map_at_1 value: 3.064 - type: map_at_10 value: 11.501999999999999 - type: map_at_100 value: 18.736 - type: map_at_1000 value: 20.333000000000002 - type: map_at_20 value: 14.057 - type: map_at_3 value: 6.300999999999999 - type: map_at_5 value: 8.463 - type: mrr_at_1 value: 44.89795918367347 - type: mrr_at_10 value: 58.41188856494979 - type: mrr_at_100 value: 58.93964266413245 - type: mrr_at_1000 value: 58.93964266413245 - type: mrr_at_20 value: 58.767485349118 - type: mrr_at_3 value: 54.42176870748299 - type: mrr_at_5 value: 56.666666666666664 - type: nauc_map_at_1000_diff1 value: 11.478593385608479 - type: nauc_map_at_1000_max value: 10.309889845044324 - type: nauc_map_at_1000_std value: 21.16721939940238 - type: nauc_map_at_100_diff1 value: 11.570438543562418 - type: nauc_map_at_100_max value: 8.426183648064834 - type: nauc_map_at_100_std value: 18.56231985033613 - type: nauc_map_at_10_diff1 value: 22.37735506247481 - type: nauc_map_at_10_max value: 5.455946239060806 - type: nauc_map_at_10_std value: -4.2848826518388154 - type: nauc_map_at_1_diff1 value: 27.853645380676824 - type: nauc_map_at_1_max value: 7.30739948053113 - type: nauc_map_at_1_std value: -0.2773663157814586 - type: nauc_map_at_20_diff1 value: 14.724669779924648 - type: nauc_map_at_20_max value: 10.12882779173533 - type: nauc_map_at_20_std value: 4.4803777672120875 - type: nauc_map_at_3_diff1 value: 31.891173385921263 - type: nauc_map_at_3_max value: 4.889652271827218 - type: nauc_map_at_3_std value: -9.477460238651643 - type: nauc_map_at_5_diff1 value: 31.489012040465003 - type: nauc_map_at_5_max value: 1.7330092417337482 - type: nauc_map_at_5_std value: -8.137018608469637 - type: nauc_mrr_at_1000_diff1 value: 24.411522237082416 - type: nauc_mrr_at_1000_max value: 11.286971076556688 - type: nauc_mrr_at_1000_std value: 23.443174210894043 - type: nauc_mrr_at_100_diff1 value: 24.411522237082416 - type: nauc_mrr_at_100_max value: 11.286971076556688 - type: nauc_mrr_at_100_std value: 23.443174210894043 - type: nauc_mrr_at_10_diff1 value: 23.948152308265186 - type: nauc_mrr_at_10_max value: 12.22420979621155 - type: nauc_mrr_at_10_std value: 23.557939024705544 - type: nauc_mrr_at_1_diff1 value: 17.902334894536107 - type: nauc_mrr_at_1_max value: 17.36969662861018 - type: nauc_mrr_at_1_std value: 19.425714969048734 - type: nauc_mrr_at_20_diff1 value: 24.635893795899797 - type: nauc_mrr_at_20_max value: 11.330541067194913 - type: nauc_mrr_at_20_std value: 23.74518583400233 - type: nauc_mrr_at_3_diff1 value: 25.045536328282587 - type: nauc_mrr_at_3_max value: 7.497967004732733 - type: nauc_mrr_at_3_std value: 24.167153007320078 - type: nauc_mrr_at_5_diff1 value: 24.328479930592454 - type: nauc_mrr_at_5_max value: 10.037126854938336 - type: nauc_mrr_at_5_std value: 25.236208055346136 - type: nauc_ndcg_at_1000_diff1 value: 15.555347444667389 - type: nauc_ndcg_at_1000_max value: 13.356591700655718 - type: nauc_ndcg_at_1000_std value: 42.42395845935052 - type: nauc_ndcg_at_100_diff1 value: 13.110526060413708 - type: nauc_ndcg_at_100_max value: 3.140006440162515 - type: nauc_ndcg_at_100_std value: 39.02733288398033 - type: nauc_ndcg_at_10_diff1 value: 20.68853369009725 - type: nauc_ndcg_at_10_max value: 2.435389817058852 - type: nauc_ndcg_at_10_std value: 10.038202768784316 - type: nauc_ndcg_at_1_diff1 value: 20.17287594582385 - type: nauc_ndcg_at_1_max value: 12.487205168273196 - type: nauc_ndcg_at_1_std value: 20.639827614373075 - type: nauc_ndcg_at_20_diff1 value: 16.987577348502985 - type: nauc_ndcg_at_20_max value: 2.9978717644469266 - type: nauc_ndcg_at_20_std value: 13.015690866750354 - type: nauc_ndcg_at_3_diff1 value: 32.392223079245575 - type: nauc_ndcg_at_3_max value: 1.587587110582544 - type: nauc_ndcg_at_3_std value: 12.850592473446609 - type: nauc_ndcg_at_5_diff1 value: 32.80244517369626 - type: nauc_ndcg_at_5_max value: 5.8939933777508084 - type: nauc_ndcg_at_5_std value: 15.779687411463414 - type: nauc_precision_at_1000_diff1 value: -14.314031720452537 - type: nauc_precision_at_1000_max value: 32.87886666567266 - type: nauc_precision_at_1000_std value: 21.49347046886851 - type: nauc_precision_at_100_diff1 value: -9.4034008613839 - type: nauc_precision_at_100_max value: 16.784075123309645 - type: nauc_precision_at_100_std value: 73.14688535393604 - type: nauc_precision_at_10_diff1 value: 6.855101404043058 - type: nauc_precision_at_10_max value: 6.52491228645612 - type: nauc_precision_at_10_std value: 16.104602266016744 - type: nauc_precision_at_1_diff1 value: 17.902334894536107 - type: nauc_precision_at_1_max value: 17.36969662861018 - type: nauc_precision_at_1_std value: 19.425714969048734 - type: nauc_precision_at_20_diff1 value: -5.337534613602212 - type: nauc_precision_at_20_max value: 17.722925454767218 - type: nauc_precision_at_20_std value: 34.26680462132849 - type: nauc_precision_at_3_diff1 value: 31.054623397809255 - type: nauc_precision_at_3_max value: -0.92038600946826 - type: nauc_precision_at_3_std value: 8.326997076862916 - type: nauc_precision_at_5_diff1 value: 29.784942296920462 - type: nauc_precision_at_5_max value: 6.337469263434779 - type: nauc_precision_at_5_std value: 12.789597196020974 - type: nauc_recall_at_1000_diff1 value: -3.8177981862041364 - type: nauc_recall_at_1000_max value: 14.206064332229163 - type: nauc_recall_at_1000_std value: 74.18853420771269 - type: nauc_recall_at_100_diff1 value: 0.7677996771461106 - type: nauc_recall_at_100_max value: -4.139924106878441 - type: nauc_recall_at_100_std value: 48.319930706362896 - type: nauc_recall_at_10_diff1 value: 12.038835537494322 - type: nauc_recall_at_10_max value: -2.0498983557854418 - type: nauc_recall_at_10_std value: -2.0339180690854493 - type: nauc_recall_at_1_diff1 value: 27.853645380676824 - type: nauc_recall_at_1_max value: 7.30739948053113 - type: nauc_recall_at_1_std value: -0.2773663157814586 - type: nauc_recall_at_20_diff1 value: 0.7907893667756708 - type: nauc_recall_at_20_max value: 0.8795499810558195 - type: nauc_recall_at_20_std value: 11.512483291688282 - type: nauc_recall_at_3_diff1 value: 33.19440392639576 - type: nauc_recall_at_3_max value: -1.5494237697432613 - type: nauc_recall_at_3_std value: -8.560408808376984 - type: nauc_recall_at_5_diff1 value: 27.42193873870941 - type: nauc_recall_at_5_max value: -4.74350293281128 - type: nauc_recall_at_5_std value: -7.618060131179654 - type: ndcg_at_1 value: 42.857 - type: ndcg_at_10 value: 29.944 - type: ndcg_at_100 value: 42.624 - type: ndcg_at_1000 value: 53.384 - type: ndcg_at_20 value: 30.135 - type: ndcg_at_3 value: 34.847 - type: ndcg_at_5 value: 32.573 - type: precision_at_1 value: 44.897999999999996 - type: precision_at_10 value: 25.306 - type: precision_at_100 value: 8.694 - type: precision_at_1000 value: 1.616 - type: precision_at_20 value: 19.082 - type: precision_at_3 value: 34.014 - type: precision_at_5 value: 31.019999999999996 - type: recall_at_1 value: 3.064 - type: recall_at_10 value: 17.849999999999998 - type: recall_at_100 value: 53.217999999999996 - type: recall_at_1000 value: 87.095 - type: recall_at_20 value: 26.111 - type: recall_at_3 value: 7.383000000000001 - type: recall_at_5 value: 11.434 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de metrics: - type: accuracy value: 88.759765625 - type: ap value: 36.49152357863017 - type: ap_weighted value: 36.49152357863017 - type: f1 value: 74.4692714448641 - type: f1_weighted value: 90.54372649306606 - type: main_score value: 88.759765625 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 74.8443689869836 - type: f1 value: 75.1139662898148 - type: f1_weighted value: 74.7369003946243 - type: main_score value: 74.8443689869836 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: main_score value: 61.42918790942448 - type: v_measure value: 61.42918790942448 - type: v_measure_std value: 1.0156550098843082 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cosine_accuracy value: 88.22197055492639 - type: cosine_accuracy_threshold value: 83.30042362213135 - type: cosine_ap value: 80.57754959194938 - type: cosine_f1 value: 73.70579190158894 - type: cosine_f1_threshold value: 81.04978799819946 - type: cosine_precision value: 71.64922770303936 - type: cosine_recall value: 75.8839050131926 - type: dot_accuracy value: 88.23985217857782 - type: dot_accuracy_threshold value: 83.31039547920227 - type: dot_ap value: 80.57533213448181 - type: dot_f1 value: 73.61309601143302 - type: dot_f1_threshold value: 81.33968114852905 - type: dot_precision value: 72.51087791144101 - type: dot_recall value: 74.74934036939314 - type: euclidean_accuracy value: 88.22197055492639 - type: euclidean_accuracy_threshold value: 58.290231227874756 - type: euclidean_ap value: 80.57982723880139 - type: euclidean_f1 value: 73.63426519620417 - type: euclidean_f1_threshold value: 61.55576705932617 - type: euclidean_precision value: 71.63173652694611 - type: euclidean_recall value: 75.75197889182058 - type: main_score value: 80.57982723880139 - type: manhattan_accuracy value: 88.14448351910353 - type: manhattan_accuracy_threshold value: 3907.2471618652344 - type: manhattan_ap value: 80.3538079655539 - type: manhattan_f1 value: 73.40466675261054 - type: manhattan_f1_threshold value: 4103.794097900391 - type: manhattan_precision value: 71.76707839677337 - type: manhattan_recall value: 75.11873350923483 - type: max_ap value: 80.57982723880139 - type: max_f1 value: 73.70579190158894 - type: max_precision value: 72.51087791144101 - type: max_recall value: 75.8839050131926 - type: similarity_accuracy value: 88.22197055492639 - type: similarity_accuracy_threshold value: 83.30042362213135 - type: similarity_ap value: 80.57754959194938 - type: similarity_f1 value: 73.70579190158894 - type: similarity_f1_threshold value: 81.04978799819946 - type: similarity_precision value: 71.64922770303936 - type: similarity_recall value: 75.8839050131926 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cosine_accuracy value: 89.88628866379477 - type: cosine_accuracy_threshold value: 80.8050274848938 - type: cosine_ap value: 87.57594591596816 - type: cosine_f1 value: 80.0812257707218 - type: cosine_f1_threshold value: 77.990061044693 - type: cosine_precision value: 76.93126197063205 - type: cosine_recall value: 83.50015398829689 - type: dot_accuracy value: 89.87852679784221 - type: dot_accuracy_threshold value: 80.84419965744019 - type: dot_ap value: 87.56136742222151 - type: dot_f1 value: 80.05898617511521 - type: dot_f1_threshold value: 77.92385816574097 - type: dot_precision value: 76.80554573106035 - type: dot_recall value: 83.60024638127503 - type: euclidean_accuracy value: 89.86882446540149 - type: euclidean_accuracy_threshold value: 62.08193898200989 - type: euclidean_ap value: 87.57517549192228 - type: euclidean_f1 value: 80.05286925872892 - type: euclidean_f1_threshold value: 66.65036082267761 - type: euclidean_precision value: 76.51063232507545 - type: euclidean_recall value: 83.93902063443178 - type: main_score value: 87.64162614197194 - type: manhattan_accuracy value: 89.8959909962355 - type: manhattan_accuracy_threshold value: 4176.108169555664 - type: manhattan_ap value: 87.64162614197194 - type: manhattan_f1 value: 80.17116279069768 - type: manhattan_f1_threshold value: 4433.153533935547 - type: manhattan_precision value: 77.57615035644848 - type: manhattan_recall value: 82.94579611949491 - type: max_ap value: 87.64162614197194 - type: max_f1 value: 80.17116279069768 - type: max_precision value: 77.57615035644848 - type: max_recall value: 83.93902063443178 - type: similarity_accuracy value: 89.88628866379477 - type: similarity_accuracy_threshold value: 80.8050274848938 - type: similarity_ap value: 87.57594591596816 - type: similarity_f1 value: 80.0812257707218 - type: similarity_f1_threshold value: 77.990061044693 - type: similarity_precision value: 76.93126197063205 - type: similarity_recall value: 83.50015398829689 --- # Updates Hi, everyone, thanks for using stella models. After six months of work, I trained the jasper model on top of the stella model, which is a multimodal model, and it can be ranked 2 in mteb (submitted the results on 2024-12-11, which may need official review https://github.com/embeddings-benchmark/results/pull/68). Model link: https://huggingface.co/infgrad/jasper_en_vision_language_v1 I'll focus on the technical report, training data and related code, hopefully the tricks I've used will be of some help to you guys! This work was accomplished during my free time, it's a personal hobby. One person's time and energy is limited, and you are welcome to make any contributions! You can also find these models on my [homepage](https://huggingface.co/infgrad). # Introduction The models are trained based on `Alibaba-NLP/gte-large-en-v1.5` and `Alibaba-NLP/gte-Qwen2-1.5B-instruct`. Thanks for their contributions! **We simplify usage of prompts, providing two prompts for most general tasks, one is for s2p, another one is for s2s.** Prompt of s2p task(e.g. retrieve task): ```text Instruct: Given a web search query, retrieve relevant passages that answer the query.\nQuery: {query} ``` Prompt of s2s task(e.g. semantic textual similarity task): ```text Instruct: Retrieve semantically similar text.\nQuery: {query} ``` The models are finally trained by [MRL]((https://arxiv.org/abs/2205.13147)), so they have multiple dimensions: 512, 768, 1024, 2048, 4096, 6144 and 8192. The higher the dimension, the better the performance. **Generally speaking, 1024d is good enough.** The MTEB score of 1024d is only 0.001 lower than 8192d. # Model directory structure The model directory structure is very simple, it is a standard SentenceTransformer directory **with a series of `2_Dense_{dims}` folders**, where `dims` represents the final vector dimension. For example, the `2_Dense_256` folder stores Linear weights that convert vector dimensions to 256 dimensions. Please refer to the following chapters for specific instructions on how to use them. # Usage You can use `SentenceTransformers` or `transformers` library to encode text. ## Sentence Transformers ```python from sentence_transformers import SentenceTransformer # This model supports two prompts: "s2p_query" and "s2s_query" for sentence-to-passage and sentence-to-sentence tasks, respectively. # They are defined in `config_sentence_transformers.json` query_prompt_name = "s2p_query" queries = [ "What are some ways to reduce stress?", "What are the benefits of drinking green tea?", ] # docs do not need any prompts docs = [ "There are many effective ways to reduce stress. Some common techniques include deep breathing, meditation, and physical activity. Engaging in hobbies, spending time in nature, and connecting with loved ones can also help alleviate stress. Additionally, setting boundaries, practicing self-care, and learning to say no can prevent stress from building up.", "Green tea has been consumed for centuries and is known for its potential health benefits. It contains antioxidants that may help protect the body against damage caused by free radicals. Regular consumption of green tea has been associated with improved heart health, enhanced cognitive function, and a reduced risk of certain types of cancer. The polyphenols in green tea may also have anti-inflammatory and weight loss properties.", ] # !The default dimension is 1024, if you need other dimensions, please clone the model and modify `modules.json` to replace `2_Dense_1024` with another dimension, e.g. `2_Dense_256` or `2_Dense_8192` ! model = SentenceTransformer("dunzhang/stella_en_1.5B_v5", trust_remote_code=True).cuda() query_embeddings = model.encode(queries, prompt_name=query_prompt_name) doc_embeddings = model.encode(docs) print(query_embeddings.shape, doc_embeddings.shape) # (2, 1024) (2, 1024) similarities = model.similarity(query_embeddings, doc_embeddings) print(similarities) # tensor([[0.8179, 0.2958], # [0.3194, 0.7854]]) ``` ## Transformers ```python import os import torch from transformers import AutoModel, AutoTokenizer from sklearn.preprocessing import normalize query_prompt = "Instruct: Given a web search query, retrieve relevant passages that answer the query.\nQuery: " queries = [ "What are some ways to reduce stress?", "What are the benefits of drinking green tea?", ] queries = [query_prompt + query for query in queries] # docs do not need any prompts docs = [ "There are many effective ways to reduce stress. Some common techniques include deep breathing, meditation, and physical activity. Engaging in hobbies, spending time in nature, and connecting with loved ones can also help alleviate stress. Additionally, setting boundaries, practicing self-care, and learning to say no can prevent stress from building up.", "Green tea has been consumed for centuries and is known for its potential health benefits. It contains antioxidants that may help protect the body against damage caused by free radicals. Regular consumption of green tea has been associated with improved heart health, enhanced cognitive function, and a reduced risk of certain types of cancer. The polyphenols in green tea may also have anti-inflammatory and weight loss properties.", ] # The path of your model after cloning it model_dir = "{Your MODEL_PATH}" vector_dim = 1024 vector_linear_directory = f"2_Dense_{vector_dim}" model = AutoModel.from_pretrained(model_dir, trust_remote_code=True).cuda().eval() tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True) vector_linear = torch.nn.Linear(in_features=model.config.hidden_size, out_features=vector_dim) vector_linear_dict = { k.replace("linear.", ""): v for k, v in torch.load(os.path.join(model_dir, f"{vector_linear_directory}/pytorch_model.bin")).items() } vector_linear.load_state_dict(vector_linear_dict) vector_linear.cuda() # Embed the queries with torch.no_grad(): input_data = tokenizer(queries, padding="longest", truncation=True, max_length=512, return_tensors="pt") input_data = {k: v.cuda() for k, v in input_data.items()} attention_mask = input_data["attention_mask"] last_hidden_state = model(**input_data)[0] last_hidden = last_hidden_state.masked_fill(~attention_mask[..., None].bool(), 0.0) query_vectors = last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None] query_vectors = normalize(vector_linear(query_vectors).cpu().numpy()) # Embed the documents with torch.no_grad(): input_data = tokenizer(docs, padding="longest", truncation=True, max_length=512, return_tensors="pt") input_data = {k: v.cuda() for k, v in input_data.items()} attention_mask = input_data["attention_mask"] last_hidden_state = model(**input_data)[0] last_hidden = last_hidden_state.masked_fill(~attention_mask[..., None].bool(), 0.0) docs_vectors = last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None] docs_vectors = normalize(vector_linear(docs_vectors).cpu().numpy()) print(query_vectors.shape, docs_vectors.shape) # (2, 1024) (2, 1024) similarities = query_vectors @ docs_vectors.T print(similarities) # [[0.8178789 0.2958377 ] # [0.31938642 0.7853526 ]] ``` # FAQ Q: The details of training? A: The training method and datasets will be released in the future. (specific time unknown, may be provided in a paper) Q: How to choose a suitable prompt for my own task? A: In most cases, please use the s2p and s2s prompts. These two prompts account for the vast majority of the training data. Q: How to reproduce MTEB results? A: Please use evaluation scripts in `Alibaba-NLP/gte-Qwen2-1.5B-instruct` or `intfloat/e5-mistral-7b-instruct` Q: Why each dimension has a linear weight? A: MRL has multiple training methods, we choose this method which has the best performance. Q: What is the sequence length of models? A: 512 is recommended, in our experiments, almost all models perform poorly on specialized long text retrieval datasets. Besides, the model is trained on datasets of 512 length. This may be an optimization term. If you have any questions, please start a discussion on community.
[ "BIOSSES", "SCIFACT" ]
McGill-DMaS/DMaS-LLaMa-Lite-step-43.5k
McGill-DMaS
null
[ "safetensors", "llama", "dataset:HuggingFaceFW/fineweb-edu", "arxiv:2412.13335", "license:apache-2.0", "region:us" ]
2024-12-17T21:20:22Z
2024-12-19T02:32:27+00:00
14
0
--- datasets: - HuggingFaceFW/fineweb-edu license: apache-2.0 --- Here is the draft for the `README.md` file for the **McGill-DMaS/DMaS-LLaMa-Lite-step-43.5k** model card on Huggingface: --- # DMaS-LLaMa-Lite-step-43.5k This repository provides access to **DMaS-LLaMa-Lite-step-43.5k**, a 1.7-billion-parameter language model based on the LLaMa architecture. The model has been trained from scratch as part of the DMaS-LLaMa-Lite project using approximately 20 billion tokens of high-quality educational content. ## Model Overview - **Architecture**: LLaMa-based - **Parameters**: 1.7B (36 layers, 32 attention heads, RMSNorm) - **Tokenizer**: GPT-2 tokenizer - **Training Data**: FineWeb-Edu subset (educational text) - **Training Steps**: 43,500 - **Optimizer**: AdamW with linear warmup and decay - **Hardware**: Trained on 1-2 RTX A6000 GPUs with PyTorch DDP - **Dataset Source**: [FineWeb-Edu Dataset](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu) The training process emphasizes qualitative improvements in coherence, fluency, and factual grounding, demonstrating competitive results even with fewer tokens compared to larger-scale models. This checkpoint represents the model's state at **43,500 training steps**. Validation loss and downstream performance benchmarks demonstrate notable early improvements in text fluency and alignment with prompts. ## Training Code The training script, including configurations and instructions, is open-sourced and available here: 📄 **[DMaS-LLaMa-Lite Training Code](https://github.com/McGill-DMaS/DMaS-LLaMa-Lite-Training-Code)** ## Usage You can load the model with Hugging Face Transformers library: ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "McGill-DMaS/DMaS-LLaMa-Lite-step-43.5k" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name) inputs = tokenizer("The Pyramids of Giza in Egypt are some of the oldest man-made structures in the world.", return_tensors="pt") outputs = model.generate(**inputs, max_length=50) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ## Citation If you use this model or its training insights in your work, please cite the following [paper](https://arxiv.org/abs/2412.13335): ```bibtex @article{li2024effectiveness, title={Experience of Training a 1.7B-Parameter LLaMa Model From Scratch}, author={Li, Miles Q and Fung, Benjamin and Huang, Shih-Chia}, journal={arXiv preprint arXiv:2412.13335}, year={2024} } ``` ## License This model and code are released under the **Apache License 2.0**. Please check the respective repositories for detailed terms.
[ "CHIA" ]
Fashion-Italia/gte-Qwen2-7B-instruct-Q4_K_M-GGUF
Fashion-Italia
sentence-similarity
[ "sentence-transformers", "gguf", "mteb", "transformers", "Qwen2", "sentence-similarity", "llama-cpp", "gguf-my-repo", "base_model:Alibaba-NLP/gte-Qwen2-7B-instruct", "base_model:quantized:Alibaba-NLP/gte-Qwen2-7B-instruct", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us", "conversational" ]
2024-12-19T10:56:53Z
2024-12-19T10:57:17+00:00
14
0
--- base_model: Alibaba-NLP/gte-Qwen2-7B-instruct license: apache-2.0 tags: - mteb - sentence-transformers - transformers - Qwen2 - sentence-similarity - llama-cpp - gguf-my-repo model-index: - name: gte-qwen2-7B-instruct results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 91.31343283582089 - type: ap value: 67.64251402604096 - type: f1 value: 87.53372530755692 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 97.497825 - type: ap value: 96.30329547047529 - type: f1 value: 97.49769793778039 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 62.564 - type: f1 value: 60.975777935041066 - task: type: Retrieval dataset: name: MTEB ArguAna type: mteb/arguana config: default split: test revision: c22ab2a51041ffd869aaddef7af8d8215647e41a metrics: - type: map_at_1 value: 36.486000000000004 - type: map_at_10 value: 54.842 - type: map_at_100 value: 55.206999999999994 - type: map_at_1000 value: 55.206999999999994 - type: map_at_3 value: 49.893 - type: map_at_5 value: 53.105000000000004 - type: mrr_at_1 value: 37.34 - type: mrr_at_10 value: 55.143 - type: mrr_at_100 value: 55.509 - type: mrr_at_1000 value: 55.509 - type: mrr_at_3 value: 50.212999999999994 - type: mrr_at_5 value: 53.432 - type: ndcg_at_1 value: 36.486000000000004 - type: ndcg_at_10 value: 64.273 - type: ndcg_at_100 value: 65.66199999999999 - type: ndcg_at_1000 value: 65.66199999999999 - type: ndcg_at_3 value: 54.352999999999994 - type: ndcg_at_5 value: 60.131 - type: precision_at_1 value: 36.486000000000004 - type: precision_at_10 value: 9.395000000000001 - type: precision_at_100 value: 0.996 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 22.428 - type: precision_at_5 value: 16.259 - type: recall_at_1 value: 36.486000000000004 - type: recall_at_10 value: 93.95400000000001 - type: recall_at_100 value: 99.644 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 67.283 - type: recall_at_5 value: 81.294 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 56.461169803700564 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 51.73600434466286 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 67.57827065898053 - type: mrr value: 79.08136569493911 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 83.53324575999243 - type: cos_sim_spearman value: 81.37173362822374 - type: euclidean_pearson value: 82.19243335103444 - type: euclidean_spearman value: 81.33679307304334 - type: manhattan_pearson value: 82.38752665975699 - type: manhattan_spearman value: 81.31510583189689 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 87.56818181818181 - type: f1 value: 87.25826722019875 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 50.09239610327673 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 46.64733054606282 - task: type: Retrieval dataset: name: MTEB CQADupstackAndroidRetrieval type: BeIR/cqadupstack config: default split: test revision: f46a197baaae43b4f621051089b82a364682dfeb metrics: - type: map_at_1 value: 33.997 - type: map_at_10 value: 48.176 - type: map_at_100 value: 49.82 - type: map_at_1000 value: 49.924 - type: map_at_3 value: 43.626 - type: map_at_5 value: 46.275 - type: mrr_at_1 value: 42.059999999999995 - type: mrr_at_10 value: 53.726 - type: mrr_at_100 value: 54.398 - type: mrr_at_1000 value: 54.416 - type: mrr_at_3 value: 50.714999999999996 - type: mrr_at_5 value: 52.639 - type: ndcg_at_1 value: 42.059999999999995 - type: ndcg_at_10 value: 55.574999999999996 - type: ndcg_at_100 value: 60.744 - type: ndcg_at_1000 value: 61.85699999999999 - type: ndcg_at_3 value: 49.363 - type: ndcg_at_5 value: 52.44 - type: precision_at_1 value: 42.059999999999995 - type: precision_at_10 value: 11.101999999999999 - type: precision_at_100 value: 1.73 - type: precision_at_1000 value: 0.218 - type: precision_at_3 value: 24.464 - type: precision_at_5 value: 18.026 - type: recall_at_1 value: 33.997 - type: recall_at_10 value: 70.35900000000001 - type: recall_at_100 value: 91.642 - type: recall_at_1000 value: 97.977 - type: recall_at_3 value: 52.76 - type: recall_at_5 value: 61.148 - task: type: Retrieval dataset: name: MTEB CQADupstackEnglishRetrieval type: BeIR/cqadupstack config: default split: test revision: ad9991cb51e31e31e430383c75ffb2885547b5f0 metrics: - type: map_at_1 value: 35.884 - type: map_at_10 value: 48.14 - type: map_at_100 value: 49.5 - type: map_at_1000 value: 49.63 - type: map_at_3 value: 44.646 - type: map_at_5 value: 46.617999999999995 - type: mrr_at_1 value: 44.458999999999996 - type: mrr_at_10 value: 53.751000000000005 - type: mrr_at_100 value: 54.37800000000001 - type: mrr_at_1000 value: 54.415 - type: mrr_at_3 value: 51.815 - type: mrr_at_5 value: 52.882 - type: ndcg_at_1 value: 44.458999999999996 - type: ndcg_at_10 value: 54.157 - type: ndcg_at_100 value: 58.362 - type: ndcg_at_1000 value: 60.178 - type: ndcg_at_3 value: 49.661 - type: ndcg_at_5 value: 51.74999999999999 - type: precision_at_1 value: 44.458999999999996 - type: precision_at_10 value: 10.248 - type: precision_at_100 value: 1.5890000000000002 - type: precision_at_1000 value: 0.207 - type: precision_at_3 value: 23.928 - type: precision_at_5 value: 16.878999999999998 - type: recall_at_1 value: 35.884 - type: recall_at_10 value: 64.798 - type: recall_at_100 value: 82.345 - type: recall_at_1000 value: 93.267 - type: recall_at_3 value: 51.847 - type: recall_at_5 value: 57.601 - task: type: Retrieval dataset: name: MTEB CQADupstackGamingRetrieval type: BeIR/cqadupstack config: default split: test revision: 4885aa143210c98657558c04aaf3dc47cfb54340 metrics: - type: map_at_1 value: 39.383 - type: map_at_10 value: 53.714 - type: map_at_100 value: 54.838 - type: map_at_1000 value: 54.87800000000001 - type: map_at_3 value: 50.114999999999995 - type: map_at_5 value: 52.153000000000006 - type: mrr_at_1 value: 45.016 - type: mrr_at_10 value: 56.732000000000006 - type: mrr_at_100 value: 57.411 - type: mrr_at_1000 value: 57.431 - type: mrr_at_3 value: 54.044000000000004 - type: mrr_at_5 value: 55.639 - type: ndcg_at_1 value: 45.016 - type: ndcg_at_10 value: 60.228 - type: ndcg_at_100 value: 64.277 - type: ndcg_at_1000 value: 65.07 - type: ndcg_at_3 value: 54.124 - type: ndcg_at_5 value: 57.147000000000006 - type: precision_at_1 value: 45.016 - type: precision_at_10 value: 9.937 - type: precision_at_100 value: 1.288 - type: precision_at_1000 value: 0.13899999999999998 - type: precision_at_3 value: 24.471999999999998 - type: precision_at_5 value: 16.991 - type: recall_at_1 value: 39.383 - type: recall_at_10 value: 76.175 - type: recall_at_100 value: 93.02 - type: recall_at_1000 value: 98.60900000000001 - type: recall_at_3 value: 60.265 - type: recall_at_5 value: 67.46600000000001 - task: type: Retrieval dataset: name: MTEB CQADupstackGisRetrieval type: BeIR/cqadupstack config: default split: test revision: 5003b3064772da1887988e05400cf3806fe491f2 metrics: - type: map_at_1 value: 27.426000000000002 - type: map_at_10 value: 37.397000000000006 - type: map_at_100 value: 38.61 - type: map_at_1000 value: 38.678000000000004 - type: map_at_3 value: 34.150999999999996 - type: map_at_5 value: 36.137 - type: mrr_at_1 value: 29.944 - type: mrr_at_10 value: 39.654 - type: mrr_at_100 value: 40.638000000000005 - type: mrr_at_1000 value: 40.691 - type: mrr_at_3 value: 36.817 - type: mrr_at_5 value: 38.524 - type: ndcg_at_1 value: 29.944 - type: ndcg_at_10 value: 43.094 - type: ndcg_at_100 value: 48.789 - type: ndcg_at_1000 value: 50.339999999999996 - type: ndcg_at_3 value: 36.984 - type: ndcg_at_5 value: 40.248 - type: precision_at_1 value: 29.944 - type: precision_at_10 value: 6.78 - type: precision_at_100 value: 1.024 - type: precision_at_1000 value: 0.11800000000000001 - type: precision_at_3 value: 15.895000000000001 - type: precision_at_5 value: 11.39 - type: recall_at_1 value: 27.426000000000002 - type: recall_at_10 value: 58.464000000000006 - type: recall_at_100 value: 84.193 - type: recall_at_1000 value: 95.52000000000001 - type: recall_at_3 value: 42.172 - type: recall_at_5 value: 50.101 - task: type: Retrieval dataset: name: MTEB CQADupstackMathematicaRetrieval type: BeIR/cqadupstack config: default split: test revision: 90fceea13679c63fe563ded68f3b6f06e50061de metrics: - type: map_at_1 value: 19.721 - type: map_at_10 value: 31.604 - type: map_at_100 value: 32.972 - type: map_at_1000 value: 33.077 - type: map_at_3 value: 27.218999999999998 - type: map_at_5 value: 29.53 - type: mrr_at_1 value: 25.0 - type: mrr_at_10 value: 35.843 - type: mrr_at_100 value: 36.785000000000004 - type: mrr_at_1000 value: 36.842000000000006 - type: mrr_at_3 value: 32.193 - type: mrr_at_5 value: 34.264 - type: ndcg_at_1 value: 25.0 - type: ndcg_at_10 value: 38.606 - type: ndcg_at_100 value: 44.272 - type: ndcg_at_1000 value: 46.527 - type: ndcg_at_3 value: 30.985000000000003 - type: ndcg_at_5 value: 34.43 - type: precision_at_1 value: 25.0 - type: precision_at_10 value: 7.811 - type: precision_at_100 value: 1.203 - type: precision_at_1000 value: 0.15 - type: precision_at_3 value: 15.423 - type: precision_at_5 value: 11.791 - type: recall_at_1 value: 19.721 - type: recall_at_10 value: 55.625 - type: recall_at_100 value: 79.34400000000001 - type: recall_at_1000 value: 95.208 - type: recall_at_3 value: 35.19 - type: recall_at_5 value: 43.626 - task: type: Retrieval dataset: name: MTEB CQADupstackPhysicsRetrieval type: BeIR/cqadupstack config: default split: test revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4 metrics: - type: map_at_1 value: 33.784 - type: map_at_10 value: 47.522 - type: map_at_100 value: 48.949999999999996 - type: map_at_1000 value: 49.038 - type: map_at_3 value: 43.284 - type: map_at_5 value: 45.629 - type: mrr_at_1 value: 41.482 - type: mrr_at_10 value: 52.830999999999996 - type: mrr_at_100 value: 53.559999999999995 - type: mrr_at_1000 value: 53.588 - type: mrr_at_3 value: 50.016000000000005 - type: mrr_at_5 value: 51.614000000000004 - type: ndcg_at_1 value: 41.482 - type: ndcg_at_10 value: 54.569 - type: ndcg_at_100 value: 59.675999999999995 - type: ndcg_at_1000 value: 60.989000000000004 - type: ndcg_at_3 value: 48.187000000000005 - type: ndcg_at_5 value: 51.183 - type: precision_at_1 value: 41.482 - type: precision_at_10 value: 10.221 - type: precision_at_100 value: 1.486 - type: precision_at_1000 value: 0.17500000000000002 - type: precision_at_3 value: 23.548 - type: precision_at_5 value: 16.805 - type: recall_at_1 value: 33.784 - type: recall_at_10 value: 69.798 - type: recall_at_100 value: 90.098 - type: recall_at_1000 value: 98.176 - type: recall_at_3 value: 52.127 - type: recall_at_5 value: 59.861 - task: type: Retrieval dataset: name: MTEB CQADupstackProgrammersRetrieval type: BeIR/cqadupstack config: default split: test revision: 6184bc1440d2dbc7612be22b50686b8826d22b32 metrics: - type: map_at_1 value: 28.038999999999998 - type: map_at_10 value: 41.904 - type: map_at_100 value: 43.36 - type: map_at_1000 value: 43.453 - type: map_at_3 value: 37.785999999999994 - type: map_at_5 value: 40.105000000000004 - type: mrr_at_1 value: 35.046 - type: mrr_at_10 value: 46.926 - type: mrr_at_100 value: 47.815000000000005 - type: mrr_at_1000 value: 47.849000000000004 - type: mrr_at_3 value: 44.273 - type: mrr_at_5 value: 45.774 - type: ndcg_at_1 value: 35.046 - type: ndcg_at_10 value: 48.937000000000005 - type: ndcg_at_100 value: 54.544000000000004 - type: ndcg_at_1000 value: 56.069 - type: ndcg_at_3 value: 42.858000000000004 - type: ndcg_at_5 value: 45.644 - type: precision_at_1 value: 35.046 - type: precision_at_10 value: 9.452 - type: precision_at_100 value: 1.429 - type: precision_at_1000 value: 0.173 - type: precision_at_3 value: 21.346999999999998 - type: precision_at_5 value: 15.342 - type: recall_at_1 value: 28.038999999999998 - type: recall_at_10 value: 64.59700000000001 - type: recall_at_100 value: 87.735 - type: recall_at_1000 value: 97.41300000000001 - type: recall_at_3 value: 47.368 - type: recall_at_5 value: 54.93900000000001 - task: type: Retrieval dataset: name: MTEB CQADupstackRetrieval type: BeIR/cqadupstack config: default split: test revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4 metrics: - type: map_at_1 value: 28.17291666666667 - type: map_at_10 value: 40.025749999999995 - type: map_at_100 value: 41.39208333333333 - type: map_at_1000 value: 41.499249999999996 - type: map_at_3 value: 36.347 - type: map_at_5 value: 38.41391666666667 - type: mrr_at_1 value: 33.65925 - type: mrr_at_10 value: 44.085499999999996 - type: mrr_at_100 value: 44.94116666666667 - type: mrr_at_1000 value: 44.9855 - type: mrr_at_3 value: 41.2815 - type: mrr_at_5 value: 42.91491666666666 - type: ndcg_at_1 value: 33.65925 - type: ndcg_at_10 value: 46.430833333333325 - type: ndcg_at_100 value: 51.761 - type: ndcg_at_1000 value: 53.50899999999999 - type: ndcg_at_3 value: 40.45133333333333 - type: ndcg_at_5 value: 43.31483333333334 - type: precision_at_1 value: 33.65925 - type: precision_at_10 value: 8.4995 - type: precision_at_100 value: 1.3210000000000004 - type: precision_at_1000 value: 0.16591666666666666 - type: precision_at_3 value: 19.165083333333335 - type: precision_at_5 value: 13.81816666666667 - type: recall_at_1 value: 28.17291666666667 - type: recall_at_10 value: 61.12624999999999 - type: recall_at_100 value: 83.97266666666667 - type: recall_at_1000 value: 95.66550000000001 - type: recall_at_3 value: 44.661249999999995 - type: recall_at_5 value: 51.983333333333334 - type: map_at_1 value: 17.936 - type: map_at_10 value: 27.399 - type: map_at_100 value: 28.632 - type: map_at_1000 value: 28.738000000000003 - type: map_at_3 value: 24.456 - type: map_at_5 value: 26.06 - type: mrr_at_1 value: 19.224 - type: mrr_at_10 value: 28.998 - type: mrr_at_100 value: 30.11 - type: mrr_at_1000 value: 30.177 - type: mrr_at_3 value: 26.247999999999998 - type: mrr_at_5 value: 27.708 - type: ndcg_at_1 value: 19.224 - type: ndcg_at_10 value: 32.911 - type: ndcg_at_100 value: 38.873999999999995 - type: ndcg_at_1000 value: 41.277 - type: ndcg_at_3 value: 27.142 - type: ndcg_at_5 value: 29.755 - type: precision_at_1 value: 19.224 - type: precision_at_10 value: 5.6930000000000005 - type: precision_at_100 value: 0.9259999999999999 - type: precision_at_1000 value: 0.126 - type: precision_at_3 value: 12.138 - type: precision_at_5 value: 8.909 - type: recall_at_1 value: 17.936 - type: recall_at_10 value: 48.096 - type: recall_at_100 value: 75.389 - type: recall_at_1000 value: 92.803 - type: recall_at_3 value: 32.812999999999995 - type: recall_at_5 value: 38.851 - task: type: Retrieval dataset: name: MTEB CQADupstackStatsRetrieval type: BeIR/cqadupstack config: default split: test revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a metrics: - type: map_at_1 value: 24.681 - type: map_at_10 value: 34.892 - type: map_at_100 value: 35.996 - type: map_at_1000 value: 36.083 - type: map_at_3 value: 31.491999999999997 - type: map_at_5 value: 33.632 - type: mrr_at_1 value: 28.528 - type: mrr_at_10 value: 37.694 - type: mrr_at_100 value: 38.613 - type: mrr_at_1000 value: 38.668 - type: mrr_at_3 value: 34.714 - type: mrr_at_5 value: 36.616 - type: ndcg_at_1 value: 28.528 - type: ndcg_at_10 value: 40.703 - type: ndcg_at_100 value: 45.993 - type: ndcg_at_1000 value: 47.847 - type: ndcg_at_3 value: 34.622 - type: ndcg_at_5 value: 38.035999999999994 - type: precision_at_1 value: 28.528 - type: precision_at_10 value: 6.902 - type: precision_at_100 value: 1.0370000000000001 - type: precision_at_1000 value: 0.126 - type: precision_at_3 value: 15.798000000000002 - type: precision_at_5 value: 11.655999999999999 - type: recall_at_1 value: 24.681 - type: recall_at_10 value: 55.81 - type: recall_at_100 value: 79.785 - type: recall_at_1000 value: 92.959 - type: recall_at_3 value: 39.074 - type: recall_at_5 value: 47.568 - task: type: Retrieval dataset: name: MTEB CQADupstackTexRetrieval type: BeIR/cqadupstack config: default split: test revision: 46989137a86843e03a6195de44b09deda022eec7 metrics: - type: map_at_1 value: 18.627 - type: map_at_10 value: 27.872000000000003 - type: map_at_100 value: 29.237999999999996 - type: map_at_1000 value: 29.363 - type: map_at_3 value: 24.751 - type: map_at_5 value: 26.521 - type: mrr_at_1 value: 23.021 - type: mrr_at_10 value: 31.924000000000003 - type: mrr_at_100 value: 32.922000000000004 - type: mrr_at_1000 value: 32.988 - type: mrr_at_3 value: 29.192 - type: mrr_at_5 value: 30.798 - type: ndcg_at_1 value: 23.021 - type: ndcg_at_10 value: 33.535 - type: ndcg_at_100 value: 39.732 - type: ndcg_at_1000 value: 42.201 - type: ndcg_at_3 value: 28.153 - type: ndcg_at_5 value: 30.746000000000002 - type: precision_at_1 value: 23.021 - type: precision_at_10 value: 6.459 - type: precision_at_100 value: 1.1320000000000001 - type: precision_at_1000 value: 0.153 - type: precision_at_3 value: 13.719000000000001 - type: precision_at_5 value: 10.193000000000001 - type: recall_at_1 value: 18.627 - type: recall_at_10 value: 46.463 - type: recall_at_100 value: 74.226 - type: recall_at_1000 value: 91.28500000000001 - type: recall_at_3 value: 31.357000000000003 - type: recall_at_5 value: 38.067 - task: type: Retrieval dataset: name: MTEB CQADupstackUnixRetrieval type: BeIR/cqadupstack config: default split: test revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53 metrics: - type: map_at_1 value: 31.457 - type: map_at_10 value: 42.888 - type: map_at_100 value: 44.24 - type: map_at_1000 value: 44.327 - type: map_at_3 value: 39.588 - type: map_at_5 value: 41.423 - type: mrr_at_1 value: 37.126999999999995 - type: mrr_at_10 value: 47.083000000000006 - type: mrr_at_100 value: 47.997 - type: mrr_at_1000 value: 48.044 - type: mrr_at_3 value: 44.574000000000005 - type: mrr_at_5 value: 46.202 - type: ndcg_at_1 value: 37.126999999999995 - type: ndcg_at_10 value: 48.833 - type: ndcg_at_100 value: 54.327000000000005 - type: ndcg_at_1000 value: 56.011 - type: ndcg_at_3 value: 43.541999999999994 - type: ndcg_at_5 value: 46.127 - type: precision_at_1 value: 37.126999999999995 - type: precision_at_10 value: 8.376999999999999 - type: precision_at_100 value: 1.2309999999999999 - type: precision_at_1000 value: 0.146 - type: precision_at_3 value: 20.211000000000002 - type: precision_at_5 value: 14.16 - type: recall_at_1 value: 31.457 - type: recall_at_10 value: 62.369 - type: recall_at_100 value: 85.444 - type: recall_at_1000 value: 96.65599999999999 - type: recall_at_3 value: 47.961 - type: recall_at_5 value: 54.676 - task: type: Retrieval dataset: name: MTEB CQADupstackWebmastersRetrieval type: BeIR/cqadupstack config: default split: test revision: 160c094312a0e1facb97e55eeddb698c0abe3571 metrics: - type: map_at_1 value: 27.139999999999997 - type: map_at_10 value: 38.801 - type: map_at_100 value: 40.549 - type: map_at_1000 value: 40.802 - type: map_at_3 value: 35.05 - type: map_at_5 value: 36.884 - type: mrr_at_1 value: 33.004 - type: mrr_at_10 value: 43.864 - type: mrr_at_100 value: 44.667 - type: mrr_at_1000 value: 44.717 - type: mrr_at_3 value: 40.777 - type: mrr_at_5 value: 42.319 - type: ndcg_at_1 value: 33.004 - type: ndcg_at_10 value: 46.022 - type: ndcg_at_100 value: 51.542 - type: ndcg_at_1000 value: 53.742000000000004 - type: ndcg_at_3 value: 39.795 - type: ndcg_at_5 value: 42.272 - type: precision_at_1 value: 33.004 - type: precision_at_10 value: 9.012 - type: precision_at_100 value: 1.7770000000000001 - type: precision_at_1000 value: 0.26 - type: precision_at_3 value: 19.038 - type: precision_at_5 value: 13.675999999999998 - type: recall_at_1 value: 27.139999999999997 - type: recall_at_10 value: 60.961 - type: recall_at_100 value: 84.451 - type: recall_at_1000 value: 98.113 - type: recall_at_3 value: 43.001 - type: recall_at_5 value: 49.896 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: mteb/climate-fever config: default split: test revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380 metrics: - type: map_at_1 value: 22.076999999999998 - type: map_at_10 value: 35.44 - type: map_at_100 value: 37.651 - type: map_at_1000 value: 37.824999999999996 - type: map_at_3 value: 30.764999999999997 - type: map_at_5 value: 33.26 - type: mrr_at_1 value: 50.163000000000004 - type: mrr_at_10 value: 61.207 - type: mrr_at_100 value: 61.675000000000004 - type: mrr_at_1000 value: 61.692 - type: mrr_at_3 value: 58.60999999999999 - type: mrr_at_5 value: 60.307 - type: ndcg_at_1 value: 50.163000000000004 - type: ndcg_at_10 value: 45.882 - type: ndcg_at_100 value: 53.239999999999995 - type: ndcg_at_1000 value: 55.852000000000004 - type: ndcg_at_3 value: 40.514 - type: ndcg_at_5 value: 42.038 - type: precision_at_1 value: 50.163000000000004 - type: precision_at_10 value: 13.466000000000001 - type: precision_at_100 value: 2.164 - type: precision_at_1000 value: 0.266 - type: precision_at_3 value: 29.707 - type: precision_at_5 value: 21.694 - type: recall_at_1 value: 22.076999999999998 - type: recall_at_10 value: 50.193 - type: recall_at_100 value: 74.993 - type: recall_at_1000 value: 89.131 - type: recall_at_3 value: 35.472 - type: recall_at_5 value: 41.814 - task: type: Retrieval dataset: name: MTEB DBPedia type: mteb/dbpedia config: default split: test revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659 metrics: - type: map_at_1 value: 9.953 - type: map_at_10 value: 24.515 - type: map_at_100 value: 36.173 - type: map_at_1000 value: 38.351 - type: map_at_3 value: 16.592000000000002 - type: map_at_5 value: 20.036 - type: mrr_at_1 value: 74.25 - type: mrr_at_10 value: 81.813 - type: mrr_at_100 value: 82.006 - type: mrr_at_1000 value: 82.011 - type: mrr_at_3 value: 80.875 - type: mrr_at_5 value: 81.362 - type: ndcg_at_1 value: 62.5 - type: ndcg_at_10 value: 52.42 - type: ndcg_at_100 value: 56.808 - type: ndcg_at_1000 value: 63.532999999999994 - type: ndcg_at_3 value: 56.654 - type: ndcg_at_5 value: 54.18300000000001 - type: precision_at_1 value: 74.25 - type: precision_at_10 value: 42.699999999999996 - type: precision_at_100 value: 13.675 - type: precision_at_1000 value: 2.664 - type: precision_at_3 value: 60.5 - type: precision_at_5 value: 52.800000000000004 - type: recall_at_1 value: 9.953 - type: recall_at_10 value: 30.253999999999998 - type: recall_at_100 value: 62.516000000000005 - type: recall_at_1000 value: 84.163 - type: recall_at_3 value: 18.13 - type: recall_at_5 value: 22.771 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 79.455 - type: f1 value: 74.16798697647569 - task: type: Retrieval dataset: name: MTEB FEVER type: mteb/fever config: default split: test revision: bea83ef9e8fb933d90a2f1d5515737465d613e12 metrics: - type: map_at_1 value: 87.531 - type: map_at_10 value: 93.16799999999999 - type: map_at_100 value: 93.341 - type: map_at_1000 value: 93.349 - type: map_at_3 value: 92.444 - type: map_at_5 value: 92.865 - type: mrr_at_1 value: 94.014 - type: mrr_at_10 value: 96.761 - type: mrr_at_100 value: 96.762 - type: mrr_at_1000 value: 96.762 - type: mrr_at_3 value: 96.672 - type: mrr_at_5 value: 96.736 - type: ndcg_at_1 value: 94.014 - type: ndcg_at_10 value: 95.112 - type: ndcg_at_100 value: 95.578 - type: ndcg_at_1000 value: 95.68900000000001 - type: ndcg_at_3 value: 94.392 - type: ndcg_at_5 value: 94.72500000000001 - type: precision_at_1 value: 94.014 - type: precision_at_10 value: 11.065 - type: precision_at_100 value: 1.157 - type: precision_at_1000 value: 0.11800000000000001 - type: precision_at_3 value: 35.259 - type: precision_at_5 value: 21.599 - type: recall_at_1 value: 87.531 - type: recall_at_10 value: 97.356 - type: recall_at_100 value: 98.965 - type: recall_at_1000 value: 99.607 - type: recall_at_3 value: 95.312 - type: recall_at_5 value: 96.295 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: mteb/fiqa config: default split: test revision: 27a168819829fe9bcd655c2df245fb19452e8e06 metrics: - type: map_at_1 value: 32.055 - type: map_at_10 value: 53.114 - type: map_at_100 value: 55.235 - type: map_at_1000 value: 55.345 - type: map_at_3 value: 45.854 - type: map_at_5 value: 50.025 - type: mrr_at_1 value: 60.34 - type: mrr_at_10 value: 68.804 - type: mrr_at_100 value: 69.309 - type: mrr_at_1000 value: 69.32199999999999 - type: mrr_at_3 value: 66.40899999999999 - type: mrr_at_5 value: 67.976 - type: ndcg_at_1 value: 60.34 - type: ndcg_at_10 value: 62.031000000000006 - type: ndcg_at_100 value: 68.00500000000001 - type: ndcg_at_1000 value: 69.286 - type: ndcg_at_3 value: 56.355999999999995 - type: ndcg_at_5 value: 58.687 - type: precision_at_1 value: 60.34 - type: precision_at_10 value: 17.176 - type: precision_at_100 value: 2.36 - type: precision_at_1000 value: 0.259 - type: precision_at_3 value: 37.14 - type: precision_at_5 value: 27.809 - type: recall_at_1 value: 32.055 - type: recall_at_10 value: 70.91 - type: recall_at_100 value: 91.83 - type: recall_at_1000 value: 98.871 - type: recall_at_3 value: 51.202999999999996 - type: recall_at_5 value: 60.563 - task: type: Retrieval dataset: name: MTEB HotpotQA type: mteb/hotpotqa config: default split: test revision: ab518f4d6fcca38d87c25209f94beba119d02014 metrics: - type: map_at_1 value: 43.68 - type: map_at_10 value: 64.389 - type: map_at_100 value: 65.24 - type: map_at_1000 value: 65.303 - type: map_at_3 value: 61.309000000000005 - type: map_at_5 value: 63.275999999999996 - type: mrr_at_1 value: 87.36 - type: mrr_at_10 value: 91.12 - type: mrr_at_100 value: 91.227 - type: mrr_at_1000 value: 91.229 - type: mrr_at_3 value: 90.57600000000001 - type: mrr_at_5 value: 90.912 - type: ndcg_at_1 value: 87.36 - type: ndcg_at_10 value: 73.076 - type: ndcg_at_100 value: 75.895 - type: ndcg_at_1000 value: 77.049 - type: ndcg_at_3 value: 68.929 - type: ndcg_at_5 value: 71.28 - type: precision_at_1 value: 87.36 - type: precision_at_10 value: 14.741000000000001 - type: precision_at_100 value: 1.694 - type: precision_at_1000 value: 0.185 - type: precision_at_3 value: 43.043 - type: precision_at_5 value: 27.681 - type: recall_at_1 value: 43.68 - type: recall_at_10 value: 73.707 - type: recall_at_100 value: 84.7 - type: recall_at_1000 value: 92.309 - type: recall_at_3 value: 64.564 - type: recall_at_5 value: 69.203 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 96.75399999999999 - type: ap value: 95.29389839242187 - type: f1 value: 96.75348377433475 - task: type: Retrieval dataset: name: MTEB MSMARCO type: mteb/msmarco config: default split: dev revision: c5a29a104738b98a9e76336939199e264163d4a0 metrics: - type: map_at_1 value: 25.176 - type: map_at_10 value: 38.598 - type: map_at_100 value: 39.707 - type: map_at_1000 value: 39.744 - type: map_at_3 value: 34.566 - type: map_at_5 value: 36.863 - type: mrr_at_1 value: 25.874000000000002 - type: mrr_at_10 value: 39.214 - type: mrr_at_100 value: 40.251 - type: mrr_at_1000 value: 40.281 - type: mrr_at_3 value: 35.291 - type: mrr_at_5 value: 37.545 - type: ndcg_at_1 value: 25.874000000000002 - type: ndcg_at_10 value: 45.98 - type: ndcg_at_100 value: 51.197 - type: ndcg_at_1000 value: 52.073 - type: ndcg_at_3 value: 37.785999999999994 - type: ndcg_at_5 value: 41.870000000000005 - type: precision_at_1 value: 25.874000000000002 - type: precision_at_10 value: 7.181 - type: precision_at_100 value: 0.979 - type: precision_at_1000 value: 0.106 - type: precision_at_3 value: 16.051000000000002 - type: precision_at_5 value: 11.713 - type: recall_at_1 value: 25.176 - type: recall_at_10 value: 68.67699999999999 - type: recall_at_100 value: 92.55 - type: recall_at_1000 value: 99.164 - type: recall_at_3 value: 46.372 - type: recall_at_5 value: 56.16 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 99.03784769721841 - type: f1 value: 98.97791641821495 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 91.88326493388054 - type: f1 value: 73.74809928034335 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 85.41358439811701 - type: f1 value: 83.503679460639 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 89.77135171486215 - type: f1 value: 88.89843747468366 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 46.22695362087359 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 44.132372165849425 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 33.35680810650402 - type: mrr value: 34.72625715637218 - task: type: Retrieval dataset: name: MTEB NFCorpus type: mteb/nfcorpus config: default split: test revision: ec0fa4fe99da2ff19ca1214b7966684033a58814 metrics: - type: map_at_1 value: 7.165000000000001 - type: map_at_10 value: 15.424 - type: map_at_100 value: 20.28 - type: map_at_1000 value: 22.065 - type: map_at_3 value: 11.236 - type: map_at_5 value: 13.025999999999998 - type: mrr_at_1 value: 51.702999999999996 - type: mrr_at_10 value: 59.965 - type: mrr_at_100 value: 60.667 - type: mrr_at_1000 value: 60.702999999999996 - type: mrr_at_3 value: 58.772000000000006 - type: mrr_at_5 value: 59.267 - type: ndcg_at_1 value: 49.536 - type: ndcg_at_10 value: 40.6 - type: ndcg_at_100 value: 37.848 - type: ndcg_at_1000 value: 46.657 - type: ndcg_at_3 value: 46.117999999999995 - type: ndcg_at_5 value: 43.619 - type: precision_at_1 value: 51.393 - type: precision_at_10 value: 30.31 - type: precision_at_100 value: 9.972 - type: precision_at_1000 value: 2.329 - type: precision_at_3 value: 43.137 - type: precision_at_5 value: 37.585 - type: recall_at_1 value: 7.165000000000001 - type: recall_at_10 value: 19.689999999999998 - type: recall_at_100 value: 39.237 - type: recall_at_1000 value: 71.417 - type: recall_at_3 value: 12.247 - type: recall_at_5 value: 14.902999999999999 - task: type: Retrieval dataset: name: MTEB NQ type: mteb/nq config: default split: test revision: b774495ed302d8c44a3a7ea25c90dbce03968f31 metrics: - type: map_at_1 value: 42.653999999999996 - type: map_at_10 value: 59.611999999999995 - type: map_at_100 value: 60.32300000000001 - type: map_at_1000 value: 60.336 - type: map_at_3 value: 55.584999999999994 - type: map_at_5 value: 58.19 - type: mrr_at_1 value: 47.683 - type: mrr_at_10 value: 62.06700000000001 - type: mrr_at_100 value: 62.537 - type: mrr_at_1000 value: 62.544999999999995 - type: mrr_at_3 value: 59.178 - type: mrr_at_5 value: 61.034 - type: ndcg_at_1 value: 47.654 - type: ndcg_at_10 value: 67.001 - type: ndcg_at_100 value: 69.73899999999999 - type: ndcg_at_1000 value: 69.986 - type: ndcg_at_3 value: 59.95700000000001 - type: ndcg_at_5 value: 64.025 - type: precision_at_1 value: 47.654 - type: precision_at_10 value: 10.367999999999999 - type: precision_at_100 value: 1.192 - type: precision_at_1000 value: 0.121 - type: precision_at_3 value: 26.651000000000003 - type: precision_at_5 value: 18.459 - type: recall_at_1 value: 42.653999999999996 - type: recall_at_10 value: 86.619 - type: recall_at_100 value: 98.04899999999999 - type: recall_at_1000 value: 99.812 - type: recall_at_3 value: 68.987 - type: recall_at_5 value: 78.158 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: mteb/quora config: default split: test revision: None metrics: - type: map_at_1 value: 72.538 - type: map_at_10 value: 86.702 - type: map_at_100 value: 87.31 - type: map_at_1000 value: 87.323 - type: map_at_3 value: 83.87 - type: map_at_5 value: 85.682 - type: mrr_at_1 value: 83.31 - type: mrr_at_10 value: 89.225 - type: mrr_at_100 value: 89.30399999999999 - type: mrr_at_1000 value: 89.30399999999999 - type: mrr_at_3 value: 88.44300000000001 - type: mrr_at_5 value: 89.005 - type: ndcg_at_1 value: 83.32000000000001 - type: ndcg_at_10 value: 90.095 - type: ndcg_at_100 value: 91.12 - type: ndcg_at_1000 value: 91.179 - type: ndcg_at_3 value: 87.606 - type: ndcg_at_5 value: 89.031 - type: precision_at_1 value: 83.32000000000001 - type: precision_at_10 value: 13.641 - type: precision_at_100 value: 1.541 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 38.377 - type: precision_at_5 value: 25.162000000000003 - type: recall_at_1 value: 72.538 - type: recall_at_10 value: 96.47200000000001 - type: recall_at_100 value: 99.785 - type: recall_at_1000 value: 99.99900000000001 - type: recall_at_3 value: 89.278 - type: recall_at_5 value: 93.367 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 73.55219145406065 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 74.13437105242755 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: mteb/scidocs config: default split: test revision: None metrics: - type: map_at_1 value: 6.873 - type: map_at_10 value: 17.944 - type: map_at_100 value: 21.171 - type: map_at_1000 value: 21.528 - type: map_at_3 value: 12.415 - type: map_at_5 value: 15.187999999999999 - type: mrr_at_1 value: 33.800000000000004 - type: mrr_at_10 value: 46.455 - type: mrr_at_100 value: 47.378 - type: mrr_at_1000 value: 47.394999999999996 - type: mrr_at_3 value: 42.367 - type: mrr_at_5 value: 44.972 - type: ndcg_at_1 value: 33.800000000000004 - type: ndcg_at_10 value: 28.907 - type: ndcg_at_100 value: 39.695 - type: ndcg_at_1000 value: 44.582 - type: ndcg_at_3 value: 26.949 - type: ndcg_at_5 value: 23.988 - type: precision_at_1 value: 33.800000000000004 - type: precision_at_10 value: 15.079999999999998 - type: precision_at_100 value: 3.056 - type: precision_at_1000 value: 0.42100000000000004 - type: precision_at_3 value: 25.167 - type: precision_at_5 value: 21.26 - type: recall_at_1 value: 6.873 - type: recall_at_10 value: 30.568 - type: recall_at_100 value: 62.062 - type: recall_at_1000 value: 85.37700000000001 - type: recall_at_3 value: 15.312999999999999 - type: recall_at_5 value: 21.575 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 82.37009118256057 - type: cos_sim_spearman value: 79.27986395671529 - type: euclidean_pearson value: 79.18037715442115 - type: euclidean_spearman value: 79.28004791561621 - type: manhattan_pearson value: 79.34062972800541 - type: manhattan_spearman value: 79.43106695543402 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 87.48474767383833 - type: cos_sim_spearman value: 79.54505388752513 - type: euclidean_pearson value: 83.43282704179565 - type: euclidean_spearman value: 79.54579919925405 - type: manhattan_pearson value: 83.77564492427952 - type: manhattan_spearman value: 79.84558396989286 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 88.803698035802 - type: cos_sim_spearman value: 88.83451367754881 - type: euclidean_pearson value: 88.28939285711628 - type: euclidean_spearman value: 88.83528996073112 - type: manhattan_pearson value: 88.28017412671795 - type: manhattan_spearman value: 88.9228828016344 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 85.27469288153428 - type: cos_sim_spearman value: 83.87477064876288 - type: euclidean_pearson value: 84.2601737035379 - type: euclidean_spearman value: 83.87431082479074 - type: manhattan_pearson value: 84.3621547772745 - type: manhattan_spearman value: 84.12094375000423 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 88.12749863201587 - type: cos_sim_spearman value: 88.54287568368565 - type: euclidean_pearson value: 87.90429700607999 - type: euclidean_spearman value: 88.5437689576261 - type: manhattan_pearson value: 88.19276653356833 - type: manhattan_spearman value: 88.99995393814679 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 85.68398747560902 - type: cos_sim_spearman value: 86.48815303460574 - type: euclidean_pearson value: 85.52356631237954 - type: euclidean_spearman value: 86.486391949551 - type: manhattan_pearson value: 85.67267981761788 - type: manhattan_spearman value: 86.7073696332485 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 88.9057107443124 - type: cos_sim_spearman value: 88.7312168757697 - type: euclidean_pearson value: 88.72810439714794 - type: euclidean_spearman value: 88.71976185854771 - type: manhattan_pearson value: 88.50433745949111 - type: manhattan_spearman value: 88.51726175544195 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 67.59391795109886 - type: cos_sim_spearman value: 66.87613008631367 - type: euclidean_pearson value: 69.23198488262217 - type: euclidean_spearman value: 66.85427723013692 - type: manhattan_pearson value: 69.50730124841084 - type: manhattan_spearman value: 67.10404669820792 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 87.0820605344619 - type: cos_sim_spearman value: 86.8518089863434 - type: euclidean_pearson value: 86.31087134689284 - type: euclidean_spearman value: 86.8518520517941 - type: manhattan_pearson value: 86.47203796160612 - type: manhattan_spearman value: 87.1080149734421 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 89.09255369305481 - type: mrr value: 97.10323445617563 - task: type: Retrieval dataset: name: MTEB SciFact type: mteb/scifact config: default split: test revision: 0228b52cf27578f30900b9e5271d331663a030d7 metrics: - type: map_at_1 value: 61.260999999999996 - type: map_at_10 value: 74.043 - type: map_at_100 value: 74.37700000000001 - type: map_at_1000 value: 74.384 - type: map_at_3 value: 71.222 - type: map_at_5 value: 72.875 - type: mrr_at_1 value: 64.333 - type: mrr_at_10 value: 74.984 - type: mrr_at_100 value: 75.247 - type: mrr_at_1000 value: 75.25500000000001 - type: mrr_at_3 value: 73.167 - type: mrr_at_5 value: 74.35000000000001 - type: ndcg_at_1 value: 64.333 - type: ndcg_at_10 value: 79.06 - type: ndcg_at_100 value: 80.416 - type: ndcg_at_1000 value: 80.55600000000001 - type: ndcg_at_3 value: 74.753 - type: ndcg_at_5 value: 76.97500000000001 - type: precision_at_1 value: 64.333 - type: precision_at_10 value: 10.567 - type: precision_at_100 value: 1.1199999999999999 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 29.889 - type: precision_at_5 value: 19.533 - type: recall_at_1 value: 61.260999999999996 - type: recall_at_10 value: 93.167 - type: recall_at_100 value: 99.0 - type: recall_at_1000 value: 100.0 - type: recall_at_3 value: 81.667 - type: recall_at_5 value: 87.394 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.71980198019801 - type: cos_sim_ap value: 92.81616007802704 - type: cos_sim_f1 value: 85.17548454688318 - type: cos_sim_precision value: 89.43894389438944 - type: cos_sim_recall value: 81.3 - type: dot_accuracy value: 99.71980198019801 - type: dot_ap value: 92.81398760591358 - type: dot_f1 value: 85.17548454688318 - type: dot_precision value: 89.43894389438944 - type: dot_recall value: 81.3 - type: euclidean_accuracy value: 99.71980198019801 - type: euclidean_ap value: 92.81560637245072 - type: euclidean_f1 value: 85.17548454688318 - type: euclidean_precision value: 89.43894389438944 - type: euclidean_recall value: 81.3 - type: manhattan_accuracy value: 99.73069306930694 - type: manhattan_ap value: 93.14005487480794 - type: manhattan_f1 value: 85.56263269639068 - type: manhattan_precision value: 91.17647058823529 - type: manhattan_recall value: 80.60000000000001 - type: max_accuracy value: 99.73069306930694 - type: max_ap value: 93.14005487480794 - type: max_f1 value: 85.56263269639068 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 79.86443362395185 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 49.40897096662564 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 55.66040806627947 - type: mrr value: 56.58670475766064 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 31.51015090598575 - type: cos_sim_spearman value: 31.35016454939226 - type: dot_pearson value: 31.5150068731 - type: dot_spearman value: 31.34790869023487 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: mteb/trec-covid config: default split: test revision: None metrics: - type: map_at_1 value: 0.254 - type: map_at_10 value: 2.064 - type: map_at_100 value: 12.909 - type: map_at_1000 value: 31.761 - type: map_at_3 value: 0.738 - type: map_at_5 value: 1.155 - type: mrr_at_1 value: 96.0 - type: mrr_at_10 value: 98.0 - type: mrr_at_100 value: 98.0 - type: mrr_at_1000 value: 98.0 - type: mrr_at_3 value: 98.0 - type: mrr_at_5 value: 98.0 - type: ndcg_at_1 value: 93.0 - type: ndcg_at_10 value: 82.258 - type: ndcg_at_100 value: 64.34 - type: ndcg_at_1000 value: 57.912 - type: ndcg_at_3 value: 90.827 - type: ndcg_at_5 value: 86.79 - type: precision_at_1 value: 96.0 - type: precision_at_10 value: 84.8 - type: precision_at_100 value: 66.0 - type: precision_at_1000 value: 25.356 - type: precision_at_3 value: 94.667 - type: precision_at_5 value: 90.4 - type: recall_at_1 value: 0.254 - type: recall_at_10 value: 2.1950000000000003 - type: recall_at_100 value: 16.088 - type: recall_at_1000 value: 54.559000000000005 - type: recall_at_3 value: 0.75 - type: recall_at_5 value: 1.191 - task: type: Retrieval dataset: name: MTEB Touche2020 type: mteb/touche2020 config: default split: test revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f metrics: - type: map_at_1 value: 2.976 - type: map_at_10 value: 11.389000000000001 - type: map_at_100 value: 18.429000000000002 - type: map_at_1000 value: 20.113 - type: map_at_3 value: 6.483 - type: map_at_5 value: 8.770999999999999 - type: mrr_at_1 value: 40.816 - type: mrr_at_10 value: 58.118 - type: mrr_at_100 value: 58.489999999999995 - type: mrr_at_1000 value: 58.489999999999995 - type: mrr_at_3 value: 53.061 - type: mrr_at_5 value: 57.041 - type: ndcg_at_1 value: 40.816 - type: ndcg_at_10 value: 30.567 - type: ndcg_at_100 value: 42.44 - type: ndcg_at_1000 value: 53.480000000000004 - type: ndcg_at_3 value: 36.016 - type: ndcg_at_5 value: 34.257 - type: precision_at_1 value: 42.857 - type: precision_at_10 value: 25.714 - type: precision_at_100 value: 8.429 - type: precision_at_1000 value: 1.5939999999999999 - type: precision_at_3 value: 36.735 - type: precision_at_5 value: 33.878 - type: recall_at_1 value: 2.976 - type: recall_at_10 value: 17.854999999999997 - type: recall_at_100 value: 51.833 - type: recall_at_1000 value: 86.223 - type: recall_at_3 value: 7.887 - type: recall_at_5 value: 12.026 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 85.1174 - type: ap value: 30.169441069345748 - type: f1 value: 69.79254701873245 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 72.58347481607245 - type: f1 value: 72.74877295564937 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 53.90586138221305 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 87.35769207844072 - type: cos_sim_ap value: 77.9645072410354 - type: cos_sim_f1 value: 71.32352941176471 - type: cos_sim_precision value: 66.5903890160183 - type: cos_sim_recall value: 76.78100263852242 - type: dot_accuracy value: 87.37557370209214 - type: dot_ap value: 77.96250046429908 - type: dot_f1 value: 71.28932757557064 - type: dot_precision value: 66.95249130938586 - type: dot_recall value: 76.22691292875989 - type: euclidean_accuracy value: 87.35173153722357 - type: euclidean_ap value: 77.96520460741593 - type: euclidean_f1 value: 71.32470733210104 - type: euclidean_precision value: 66.91329479768785 - type: euclidean_recall value: 76.35883905013192 - type: manhattan_accuracy value: 87.25636287774931 - type: manhattan_ap value: 77.77752485611796 - type: manhattan_f1 value: 71.18148599269183 - type: manhattan_precision value: 66.10859728506787 - type: manhattan_recall value: 77.0976253298153 - type: max_accuracy value: 87.37557370209214 - type: max_ap value: 77.96520460741593 - type: max_f1 value: 71.32470733210104 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 89.38176737687739 - type: cos_sim_ap value: 86.58811861657401 - type: cos_sim_f1 value: 79.09430644097604 - type: cos_sim_precision value: 75.45085977911366 - type: cos_sim_recall value: 83.10748383122882 - type: dot_accuracy value: 89.38370784336554 - type: dot_ap value: 86.58840606004333 - type: dot_f1 value: 79.10179860068133 - type: dot_precision value: 75.44546153308643 - type: dot_recall value: 83.13058207576223 - type: euclidean_accuracy value: 89.38564830985369 - type: euclidean_ap value: 86.58820721061164 - type: euclidean_f1 value: 79.09070942235888 - type: euclidean_precision value: 75.38729937194697 - type: euclidean_recall value: 83.17677856482906 - type: manhattan_accuracy value: 89.40699344122326 - type: manhattan_ap value: 86.60631843011362 - type: manhattan_f1 value: 79.14949970570925 - type: manhattan_precision value: 75.78191039729502 - type: manhattan_recall value: 82.83030489682784 - type: max_accuracy value: 89.40699344122326 - type: max_ap value: 86.60631843011362 - type: max_f1 value: 79.14949970570925 - task: type: STS dataset: name: MTEB AFQMC type: C-MTEB/AFQMC config: default split: validation revision: b44c3b011063adb25877c13823db83bb193913c4 metrics: - type: cos_sim_pearson value: 65.58442135663871 - type: cos_sim_spearman value: 72.2538631361313 - type: euclidean_pearson value: 70.97255486607429 - type: euclidean_spearman value: 72.25374250228647 - type: manhattan_pearson value: 70.83250199989911 - type: manhattan_spearman value: 72.14819496536272 - task: type: STS dataset: name: MTEB ATEC type: C-MTEB/ATEC config: default split: test revision: 0f319b1142f28d00e055a6770f3f726ae9b7d865 metrics: - type: cos_sim_pearson value: 59.99478404929932 - type: cos_sim_spearman value: 62.61836216999812 - type: euclidean_pearson value: 66.86429811933593 - type: euclidean_spearman value: 62.6183520374191 - type: manhattan_pearson value: 66.8063778911633 - type: manhattan_spearman value: 62.569607573241115 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (zh) type: mteb/amazon_reviews_multi config: zh split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 53.98400000000001 - type: f1 value: 51.21447361350723 - task: type: STS dataset: name: MTEB BQ type: C-MTEB/BQ config: default split: test revision: e3dda5e115e487b39ec7e618c0c6a29137052a55 metrics: - type: cos_sim_pearson value: 79.11941660686553 - type: cos_sim_spearman value: 81.25029594540435 - type: euclidean_pearson value: 82.06973504238826 - type: euclidean_spearman value: 81.2501989488524 - type: manhattan_pearson value: 82.10094630392753 - type: manhattan_spearman value: 81.27987244392389 - task: type: Clustering dataset: name: MTEB CLSClusteringP2P type: C-MTEB/CLSClusteringP2P config: default split: test revision: 4b6227591c6c1a73bc76b1055f3b7f3588e72476 metrics: - type: v_measure value: 47.07270168705156 - task: type: Clustering dataset: name: MTEB CLSClusteringS2S type: C-MTEB/CLSClusteringS2S config: default split: test revision: e458b3f5414b62b7f9f83499ac1f5497ae2e869f metrics: - type: v_measure value: 45.98511703185043 - task: type: Reranking dataset: name: MTEB CMedQAv1 type: C-MTEB/CMedQAv1-reranking config: default split: test revision: 8d7f1e942507dac42dc58017c1a001c3717da7df metrics: - type: map value: 88.19895157194931 - type: mrr value: 90.21424603174603 - task: type: Reranking dataset: name: MTEB CMedQAv2 type: C-MTEB/CMedQAv2-reranking config: default split: test revision: 23d186750531a14a0357ca22cd92d712fd512ea0 metrics: - type: map value: 88.03317320980119 - type: mrr value: 89.9461507936508 - task: type: Retrieval dataset: name: MTEB CmedqaRetrieval type: C-MTEB/CmedqaRetrieval config: default split: dev revision: cd540c506dae1cf9e9a59c3e06f42030d54e7301 metrics: - type: map_at_1 value: 29.037000000000003 - type: map_at_10 value: 42.001 - type: map_at_100 value: 43.773 - type: map_at_1000 value: 43.878 - type: map_at_3 value: 37.637 - type: map_at_5 value: 40.034 - type: mrr_at_1 value: 43.136 - type: mrr_at_10 value: 51.158 - type: mrr_at_100 value: 52.083 - type: mrr_at_1000 value: 52.12 - type: mrr_at_3 value: 48.733 - type: mrr_at_5 value: 50.025 - type: ndcg_at_1 value: 43.136 - type: ndcg_at_10 value: 48.685 - type: ndcg_at_100 value: 55.513 - type: ndcg_at_1000 value: 57.242000000000004 - type: ndcg_at_3 value: 43.329 - type: ndcg_at_5 value: 45.438 - type: precision_at_1 value: 43.136 - type: precision_at_10 value: 10.56 - type: precision_at_100 value: 1.6129999999999998 - type: precision_at_1000 value: 0.184 - type: precision_at_3 value: 24.064 - type: precision_at_5 value: 17.269000000000002 - type: recall_at_1 value: 29.037000000000003 - type: recall_at_10 value: 59.245000000000005 - type: recall_at_100 value: 87.355 - type: recall_at_1000 value: 98.74000000000001 - type: recall_at_3 value: 42.99 - type: recall_at_5 value: 49.681999999999995 - task: type: PairClassification dataset: name: MTEB Cmnli type: C-MTEB/CMNLI config: default split: validation revision: 41bc36f332156f7adc9e38f53777c959b2ae9766 metrics: - type: cos_sim_accuracy value: 82.68190018039687 - type: cos_sim_ap value: 90.18017125327886 - type: cos_sim_f1 value: 83.64080906868193 - type: cos_sim_precision value: 79.7076890489303 - type: cos_sim_recall value: 87.98223053542202 - type: dot_accuracy value: 82.68190018039687 - type: dot_ap value: 90.18782350103646 - type: dot_f1 value: 83.64242087729039 - type: dot_precision value: 79.65313028764805 - type: dot_recall value: 88.05237315875614 - type: euclidean_accuracy value: 82.68190018039687 - type: euclidean_ap value: 90.1801957900632 - type: euclidean_f1 value: 83.63636363636364 - type: euclidean_precision value: 79.52772506852203 - type: euclidean_recall value: 88.19265840542437 - type: manhattan_accuracy value: 82.14070956103427 - type: manhattan_ap value: 89.96178420101427 - type: manhattan_f1 value: 83.21087838578791 - type: manhattan_precision value: 78.35605121850475 - type: manhattan_recall value: 88.70703764320785 - type: max_accuracy value: 82.68190018039687 - type: max_ap value: 90.18782350103646 - type: max_f1 value: 83.64242087729039 - task: type: Retrieval dataset: name: MTEB CovidRetrieval type: C-MTEB/CovidRetrieval config: default split: dev revision: 1271c7809071a13532e05f25fb53511ffce77117 metrics: - type: map_at_1 value: 72.234 - type: map_at_10 value: 80.10000000000001 - type: map_at_100 value: 80.36 - type: map_at_1000 value: 80.363 - type: map_at_3 value: 78.315 - type: map_at_5 value: 79.607 - type: mrr_at_1 value: 72.392 - type: mrr_at_10 value: 80.117 - type: mrr_at_100 value: 80.36999999999999 - type: mrr_at_1000 value: 80.373 - type: mrr_at_3 value: 78.469 - type: mrr_at_5 value: 79.633 - type: ndcg_at_1 value: 72.392 - type: ndcg_at_10 value: 83.651 - type: ndcg_at_100 value: 84.749 - type: ndcg_at_1000 value: 84.83000000000001 - type: ndcg_at_3 value: 80.253 - type: ndcg_at_5 value: 82.485 - type: precision_at_1 value: 72.392 - type: precision_at_10 value: 9.557 - type: precision_at_100 value: 1.004 - type: precision_at_1000 value: 0.101 - type: precision_at_3 value: 28.732000000000003 - type: precision_at_5 value: 18.377 - type: recall_at_1 value: 72.234 - type: recall_at_10 value: 94.573 - type: recall_at_100 value: 99.368 - type: recall_at_1000 value: 100.0 - type: recall_at_3 value: 85.669 - type: recall_at_5 value: 91.01700000000001 - task: type: Retrieval dataset: name: MTEB DuRetrieval type: C-MTEB/DuRetrieval config: default split: dev revision: a1a333e290fe30b10f3f56498e3a0d911a693ced metrics: - type: map_at_1 value: 26.173999999999996 - type: map_at_10 value: 80.04 - type: map_at_100 value: 82.94500000000001 - type: map_at_1000 value: 82.98100000000001 - type: map_at_3 value: 55.562999999999995 - type: map_at_5 value: 69.89800000000001 - type: mrr_at_1 value: 89.5 - type: mrr_at_10 value: 92.996 - type: mrr_at_100 value: 93.06400000000001 - type: mrr_at_1000 value: 93.065 - type: mrr_at_3 value: 92.658 - type: mrr_at_5 value: 92.84599999999999 - type: ndcg_at_1 value: 89.5 - type: ndcg_at_10 value: 87.443 - type: ndcg_at_100 value: 90.253 - type: ndcg_at_1000 value: 90.549 - type: ndcg_at_3 value: 85.874 - type: ndcg_at_5 value: 84.842 - type: precision_at_1 value: 89.5 - type: precision_at_10 value: 41.805 - type: precision_at_100 value: 4.827 - type: precision_at_1000 value: 0.49 - type: precision_at_3 value: 76.85 - type: precision_at_5 value: 64.8 - type: recall_at_1 value: 26.173999999999996 - type: recall_at_10 value: 89.101 - type: recall_at_100 value: 98.08099999999999 - type: recall_at_1000 value: 99.529 - type: recall_at_3 value: 57.902 - type: recall_at_5 value: 74.602 - task: type: Retrieval dataset: name: MTEB EcomRetrieval type: C-MTEB/EcomRetrieval config: default split: dev revision: 687de13dc7294d6fd9be10c6945f9e8fec8166b9 metrics: - type: map_at_1 value: 56.10000000000001 - type: map_at_10 value: 66.15299999999999 - type: map_at_100 value: 66.625 - type: map_at_1000 value: 66.636 - type: map_at_3 value: 63.632999999999996 - type: map_at_5 value: 65.293 - type: mrr_at_1 value: 56.10000000000001 - type: mrr_at_10 value: 66.15299999999999 - type: mrr_at_100 value: 66.625 - type: mrr_at_1000 value: 66.636 - type: mrr_at_3 value: 63.632999999999996 - type: mrr_at_5 value: 65.293 - type: ndcg_at_1 value: 56.10000000000001 - type: ndcg_at_10 value: 71.146 - type: ndcg_at_100 value: 73.27799999999999 - type: ndcg_at_1000 value: 73.529 - type: ndcg_at_3 value: 66.09 - type: ndcg_at_5 value: 69.08999999999999 - type: precision_at_1 value: 56.10000000000001 - type: precision_at_10 value: 8.68 - type: precision_at_100 value: 0.964 - type: precision_at_1000 value: 0.098 - type: precision_at_3 value: 24.4 - type: precision_at_5 value: 16.1 - type: recall_at_1 value: 56.10000000000001 - type: recall_at_10 value: 86.8 - type: recall_at_100 value: 96.39999999999999 - type: recall_at_1000 value: 98.3 - type: recall_at_3 value: 73.2 - type: recall_at_5 value: 80.5 - task: type: Classification dataset: name: MTEB IFlyTek type: C-MTEB/IFlyTek-classification config: default split: validation revision: 421605374b29664c5fc098418fe20ada9bd55f8a metrics: - type: accuracy value: 54.52096960369373 - type: f1 value: 40.930845295808695 - task: type: Classification dataset: name: MTEB JDReview type: C-MTEB/JDReview-classification config: default split: test revision: b7c64bd89eb87f8ded463478346f76731f07bf8b metrics: - type: accuracy value: 86.51031894934334 - type: ap value: 55.9516014323483 - type: f1 value: 81.54813679326381 - task: type: STS dataset: name: MTEB LCQMC type: C-MTEB/LCQMC config: default split: test revision: 17f9b096f80380fce5ed12a9be8be7784b337daf metrics: - type: cos_sim_pearson value: 69.67437838574276 - type: cos_sim_spearman value: 73.81314174653045 - type: euclidean_pearson value: 72.63430276680275 - type: euclidean_spearman value: 73.81358736777001 - type: manhattan_pearson value: 72.58743833842829 - type: manhattan_spearman value: 73.7590419009179 - task: type: Reranking dataset: name: MTEB MMarcoReranking type: C-MTEB/Mmarco-reranking config: default split: dev revision: None metrics: - type: map value: 31.648613483640254 - type: mrr value: 30.37420634920635 - task: type: Retrieval dataset: name: MTEB MMarcoRetrieval type: C-MTEB/MMarcoRetrieval config: default split: dev revision: 539bbde593d947e2a124ba72651aafc09eb33fc2 metrics: - type: map_at_1 value: 73.28099999999999 - type: map_at_10 value: 81.977 - type: map_at_100 value: 82.222 - type: map_at_1000 value: 82.22699999999999 - type: map_at_3 value: 80.441 - type: map_at_5 value: 81.46600000000001 - type: mrr_at_1 value: 75.673 - type: mrr_at_10 value: 82.41000000000001 - type: mrr_at_100 value: 82.616 - type: mrr_at_1000 value: 82.621 - type: mrr_at_3 value: 81.094 - type: mrr_at_5 value: 81.962 - type: ndcg_at_1 value: 75.673 - type: ndcg_at_10 value: 85.15599999999999 - type: ndcg_at_100 value: 86.151 - type: ndcg_at_1000 value: 86.26899999999999 - type: ndcg_at_3 value: 82.304 - type: ndcg_at_5 value: 84.009 - type: precision_at_1 value: 75.673 - type: precision_at_10 value: 10.042 - type: precision_at_100 value: 1.052 - type: precision_at_1000 value: 0.106 - type: precision_at_3 value: 30.673000000000002 - type: precision_at_5 value: 19.326999999999998 - type: recall_at_1 value: 73.28099999999999 - type: recall_at_10 value: 94.446 - type: recall_at_100 value: 98.737 - type: recall_at_1000 value: 99.649 - type: recall_at_3 value: 86.984 - type: recall_at_5 value: 91.024 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (zh-CN) type: mteb/amazon_massive_intent config: zh-CN split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 81.08607935440484 - type: f1 value: 78.24879986066307 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (zh-CN) type: mteb/amazon_massive_scenario config: zh-CN split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 86.05917955615332 - type: f1 value: 85.05279279434997 - task: type: Retrieval dataset: name: MTEB MedicalRetrieval type: C-MTEB/MedicalRetrieval config: default split: dev revision: 2039188fb5800a9803ba5048df7b76e6fb151fc6 metrics: - type: map_at_1 value: 56.2 - type: map_at_10 value: 62.57899999999999 - type: map_at_100 value: 63.154999999999994 - type: map_at_1000 value: 63.193 - type: map_at_3 value: 61.217 - type: map_at_5 value: 62.012 - type: mrr_at_1 value: 56.3 - type: mrr_at_10 value: 62.629000000000005 - type: mrr_at_100 value: 63.205999999999996 - type: mrr_at_1000 value: 63.244 - type: mrr_at_3 value: 61.267 - type: mrr_at_5 value: 62.062 - type: ndcg_at_1 value: 56.2 - type: ndcg_at_10 value: 65.592 - type: ndcg_at_100 value: 68.657 - type: ndcg_at_1000 value: 69.671 - type: ndcg_at_3 value: 62.808 - type: ndcg_at_5 value: 64.24499999999999 - type: precision_at_1 value: 56.2 - type: precision_at_10 value: 7.5 - type: precision_at_100 value: 0.899 - type: precision_at_1000 value: 0.098 - type: precision_at_3 value: 22.467000000000002 - type: precision_at_5 value: 14.180000000000001 - type: recall_at_1 value: 56.2 - type: recall_at_10 value: 75.0 - type: recall_at_100 value: 89.9 - type: recall_at_1000 value: 97.89999999999999 - type: recall_at_3 value: 67.4 - type: recall_at_5 value: 70.89999999999999 - task: type: Classification dataset: name: MTEB MultilingualSentiment type: C-MTEB/MultilingualSentiment-classification config: default split: validation revision: 46958b007a63fdbf239b7672c25d0bea67b5ea1a metrics: - type: accuracy value: 76.87666666666667 - type: f1 value: 76.7317686219665 - task: type: PairClassification dataset: name: MTEB Ocnli type: C-MTEB/OCNLI config: default split: validation revision: 66e76a618a34d6d565d5538088562851e6daa7ec metrics: - type: cos_sim_accuracy value: 79.64266377910124 - type: cos_sim_ap value: 84.78274442344829 - type: cos_sim_f1 value: 81.16947472745292 - type: cos_sim_precision value: 76.47058823529412 - type: cos_sim_recall value: 86.48363252375924 - type: dot_accuracy value: 79.64266377910124 - type: dot_ap value: 84.7851404063692 - type: dot_f1 value: 81.16947472745292 - type: dot_precision value: 76.47058823529412 - type: dot_recall value: 86.48363252375924 - type: euclidean_accuracy value: 79.64266377910124 - type: euclidean_ap value: 84.78068373762378 - type: euclidean_f1 value: 81.14794656110837 - type: euclidean_precision value: 76.35009310986965 - type: euclidean_recall value: 86.58922914466737 - type: manhattan_accuracy value: 79.48023822414727 - type: manhattan_ap value: 84.72928897427576 - type: manhattan_f1 value: 81.32084770823064 - type: manhattan_precision value: 76.24768946395564 - type: manhattan_recall value: 87.11721224920802 - type: max_accuracy value: 79.64266377910124 - type: max_ap value: 84.7851404063692 - type: max_f1 value: 81.32084770823064 - task: type: Classification dataset: name: MTEB OnlineShopping type: C-MTEB/OnlineShopping-classification config: default split: test revision: e610f2ebd179a8fda30ae534c3878750a96db120 metrics: - type: accuracy value: 94.3 - type: ap value: 92.8664032274438 - type: f1 value: 94.29311102997727 - task: type: STS dataset: name: MTEB PAWSX type: C-MTEB/PAWSX config: default split: test revision: 9c6a90e430ac22b5779fb019a23e820b11a8b5e1 metrics: - type: cos_sim_pearson value: 48.51392279882909 - type: cos_sim_spearman value: 54.06338895994974 - type: euclidean_pearson value: 52.58480559573412 - type: euclidean_spearman value: 54.06417276612201 - type: manhattan_pearson value: 52.69525121721343 - type: manhattan_spearman value: 54.048147455389675 - task: type: STS dataset: name: MTEB QBQTC type: C-MTEB/QBQTC config: default split: test revision: 790b0510dc52b1553e8c49f3d2afb48c0e5c48b7 metrics: - type: cos_sim_pearson value: 29.728387290757325 - type: cos_sim_spearman value: 31.366121633635284 - type: euclidean_pearson value: 29.14588368552961 - type: euclidean_spearman value: 31.36764411112844 - type: manhattan_pearson value: 29.63517350523121 - type: manhattan_spearman value: 31.94157020583762 - task: type: STS dataset: name: MTEB STS22 (zh) type: mteb/sts22-crosslingual-sts config: zh split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 63.64868296271406 - type: cos_sim_spearman value: 66.12800618164744 - type: euclidean_pearson value: 63.21405767340238 - type: euclidean_spearman value: 66.12786567790748 - type: manhattan_pearson value: 64.04300276525848 - type: manhattan_spearman value: 66.5066857145652 - task: type: STS dataset: name: MTEB STSB type: C-MTEB/STSB config: default split: test revision: 0cde68302b3541bb8b3c340dc0644b0b745b3dc0 metrics: - type: cos_sim_pearson value: 81.2302623912794 - type: cos_sim_spearman value: 81.16833673266562 - type: euclidean_pearson value: 79.47647843876024 - type: euclidean_spearman value: 81.16944349524972 - type: manhattan_pearson value: 79.84947238492208 - type: manhattan_spearman value: 81.64626599410026 - task: type: Reranking dataset: name: MTEB T2Reranking type: C-MTEB/T2Reranking config: default split: dev revision: 76631901a18387f85eaa53e5450019b87ad58ef9 metrics: - type: map value: 67.80129586475687 - type: mrr value: 77.77402311635554 - task: type: Retrieval dataset: name: MTEB T2Retrieval type: C-MTEB/T2Retrieval config: default split: dev revision: 8731a845f1bf500a4f111cf1070785c793d10e64 metrics: - type: map_at_1 value: 28.666999999999998 - type: map_at_10 value: 81.063 - type: map_at_100 value: 84.504 - type: map_at_1000 value: 84.552 - type: map_at_3 value: 56.897 - type: map_at_5 value: 70.073 - type: mrr_at_1 value: 92.087 - type: mrr_at_10 value: 94.132 - type: mrr_at_100 value: 94.19800000000001 - type: mrr_at_1000 value: 94.19999999999999 - type: mrr_at_3 value: 93.78999999999999 - type: mrr_at_5 value: 94.002 - type: ndcg_at_1 value: 92.087 - type: ndcg_at_10 value: 87.734 - type: ndcg_at_100 value: 90.736 - type: ndcg_at_1000 value: 91.184 - type: ndcg_at_3 value: 88.78 - type: ndcg_at_5 value: 87.676 - type: precision_at_1 value: 92.087 - type: precision_at_10 value: 43.46 - type: precision_at_100 value: 5.07 - type: precision_at_1000 value: 0.518 - type: precision_at_3 value: 77.49000000000001 - type: precision_at_5 value: 65.194 - type: recall_at_1 value: 28.666999999999998 - type: recall_at_10 value: 86.632 - type: recall_at_100 value: 96.646 - type: recall_at_1000 value: 98.917 - type: recall_at_3 value: 58.333999999999996 - type: recall_at_5 value: 72.974 - task: type: Classification dataset: name: MTEB TNews type: C-MTEB/TNews-classification config: default split: validation revision: 317f262bf1e6126357bbe89e875451e4b0938fe4 metrics: - type: accuracy value: 52.971999999999994 - type: f1 value: 50.2898280984929 - task: type: Clustering dataset: name: MTEB ThuNewsClusteringP2P type: C-MTEB/ThuNewsClusteringP2P config: default split: test revision: 5798586b105c0434e4f0fe5e767abe619442cf93 metrics: - type: v_measure value: 86.0797948663824 - task: type: Clustering dataset: name: MTEB ThuNewsClusteringS2S type: C-MTEB/ThuNewsClusteringS2S config: default split: test revision: 8a8b2caeda43f39e13c4bc5bea0f8a667896e10d metrics: - type: v_measure value: 85.10759092255017 - task: type: Retrieval dataset: name: MTEB VideoRetrieval type: C-MTEB/VideoRetrieval config: default split: dev revision: 58c2597a5943a2ba48f4668c3b90d796283c5639 metrics: - type: map_at_1 value: 65.60000000000001 - type: map_at_10 value: 74.773 - type: map_at_100 value: 75.128 - type: map_at_1000 value: 75.136 - type: map_at_3 value: 73.05 - type: map_at_5 value: 74.13499999999999 - type: mrr_at_1 value: 65.60000000000001 - type: mrr_at_10 value: 74.773 - type: mrr_at_100 value: 75.128 - type: mrr_at_1000 value: 75.136 - type: mrr_at_3 value: 73.05 - type: mrr_at_5 value: 74.13499999999999 - type: ndcg_at_1 value: 65.60000000000001 - type: ndcg_at_10 value: 78.84299999999999 - type: ndcg_at_100 value: 80.40899999999999 - type: ndcg_at_1000 value: 80.57 - type: ndcg_at_3 value: 75.40599999999999 - type: ndcg_at_5 value: 77.351 - type: precision_at_1 value: 65.60000000000001 - type: precision_at_10 value: 9.139999999999999 - type: precision_at_100 value: 0.984 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 27.400000000000002 - type: precision_at_5 value: 17.380000000000003 - type: recall_at_1 value: 65.60000000000001 - type: recall_at_10 value: 91.4 - type: recall_at_100 value: 98.4 - type: recall_at_1000 value: 99.6 - type: recall_at_3 value: 82.19999999999999 - type: recall_at_5 value: 86.9 - task: type: Classification dataset: name: MTEB Waimai type: C-MTEB/waimai-classification config: default split: test revision: 339287def212450dcaa9df8c22bf93e9980c7023 metrics: - type: accuracy value: 89.47 - type: ap value: 75.59561751845389 - type: f1 value: 87.95207751382563 - task: type: Clustering dataset: name: MTEB AlloProfClusteringP2P type: lyon-nlp/alloprof config: default split: test revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b metrics: - type: v_measure value: 76.05592323841036 - type: v_measure value: 64.51718058866508 - task: type: Reranking dataset: name: MTEB AlloprofReranking type: lyon-nlp/mteb-fr-reranking-alloprof-s2p config: default split: test revision: 666fdacebe0291776e86f29345663dfaf80a0db9 metrics: - type: map value: 73.08278490943373 - type: mrr value: 74.66561454570449 - task: type: Retrieval dataset: name: MTEB AlloprofRetrieval type: lyon-nlp/alloprof config: default split: test revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b metrics: - type: map_at_1 value: 38.912 - type: map_at_10 value: 52.437999999999995 - type: map_at_100 value: 53.38 - type: map_at_1000 value: 53.427 - type: map_at_3 value: 48.879 - type: map_at_5 value: 50.934000000000005 - type: mrr_at_1 value: 44.085 - type: mrr_at_10 value: 55.337 - type: mrr_at_100 value: 56.016999999999996 - type: mrr_at_1000 value: 56.043 - type: mrr_at_3 value: 52.55499999999999 - type: mrr_at_5 value: 54.20399999999999 - type: ndcg_at_1 value: 44.085 - type: ndcg_at_10 value: 58.876 - type: ndcg_at_100 value: 62.714000000000006 - type: ndcg_at_1000 value: 63.721000000000004 - type: ndcg_at_3 value: 52.444 - type: ndcg_at_5 value: 55.692 - type: precision_at_1 value: 44.085 - type: precision_at_10 value: 9.21 - type: precision_at_100 value: 1.164 - type: precision_at_1000 value: 0.128 - type: precision_at_3 value: 23.043 - type: precision_at_5 value: 15.898000000000001 - type: recall_at_1 value: 38.912 - type: recall_at_10 value: 75.577 - type: recall_at_100 value: 92.038 - type: recall_at_1000 value: 99.325 - type: recall_at_3 value: 58.592 - type: recall_at_5 value: 66.235 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (fr) type: mteb/amazon_reviews_multi config: fr split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 55.532000000000004 - type: f1 value: 52.5783943471605 - task: type: Retrieval dataset: name: MTEB BSARDRetrieval type: maastrichtlawtech/bsard config: default split: test revision: 5effa1b9b5fa3b0f9e12523e6e43e5f86a6e6d59 metrics: - type: map_at_1 value: 8.108 - type: map_at_10 value: 14.710999999999999 - type: map_at_100 value: 15.891 - type: map_at_1000 value: 15.983 - type: map_at_3 value: 12.237 - type: map_at_5 value: 13.679 - type: mrr_at_1 value: 8.108 - type: mrr_at_10 value: 14.710999999999999 - type: mrr_at_100 value: 15.891 - type: mrr_at_1000 value: 15.983 - type: mrr_at_3 value: 12.237 - type: mrr_at_5 value: 13.679 - type: ndcg_at_1 value: 8.108 - type: ndcg_at_10 value: 18.796 - type: ndcg_at_100 value: 25.098 - type: ndcg_at_1000 value: 27.951999999999998 - type: ndcg_at_3 value: 13.712 - type: ndcg_at_5 value: 16.309 - type: precision_at_1 value: 8.108 - type: precision_at_10 value: 3.198 - type: precision_at_100 value: 0.626 - type: precision_at_1000 value: 0.086 - type: precision_at_3 value: 6.006 - type: precision_at_5 value: 4.865 - type: recall_at_1 value: 8.108 - type: recall_at_10 value: 31.982 - type: recall_at_100 value: 62.613 - type: recall_at_1000 value: 86.036 - type: recall_at_3 value: 18.018 - type: recall_at_5 value: 24.324 - task: type: Clustering dataset: name: MTEB HALClusteringS2S type: lyon-nlp/clustering-hal-s2s config: default split: test revision: e06ebbbb123f8144bef1a5d18796f3dec9ae2915 metrics: - type: v_measure value: 30.833269778867116 - task: type: Clustering dataset: name: MTEB MLSUMClusteringP2P type: mlsum config: default split: test revision: b5d54f8f3b61ae17845046286940f03c6bc79bc7 metrics: - type: v_measure value: 50.0281928004713 - type: v_measure value: 43.699961510636534 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (fr) type: mteb/mtop_domain config: fr split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 96.68963357344191 - type: f1 value: 96.45175170820961 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (fr) type: mteb/mtop_intent config: fr split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 87.46946445349202 - type: f1 value: 65.79860440988624 - task: type: Classification dataset: name: MTEB MasakhaNEWSClassification (fra) type: masakhane/masakhanews config: fra split: test revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60 metrics: - type: accuracy value: 82.60663507109005 - type: f1 value: 77.20462646604777 - task: type: Clustering dataset: name: MTEB MasakhaNEWSClusteringP2P (fra) type: masakhane/masakhanews config: fra split: test revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60 metrics: - type: v_measure value: 60.19311264967803 - type: v_measure value: 63.6235764409785 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (fr) type: mteb/amazon_massive_intent config: fr split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 81.65097511768661 - type: f1 value: 78.77796091490924 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (fr) type: mteb/amazon_massive_scenario config: fr split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 86.64425016812373 - type: f1 value: 85.4912728670017 - task: type: Retrieval dataset: name: MTEB MintakaRetrieval (fr) type: jinaai/mintakaqa config: fr split: test revision: efa78cc2f74bbcd21eff2261f9e13aebe40b814e metrics: - type: map_at_1 value: 35.913000000000004 - type: map_at_10 value: 48.147 - type: map_at_100 value: 48.91 - type: map_at_1000 value: 48.949 - type: map_at_3 value: 45.269999999999996 - type: map_at_5 value: 47.115 - type: mrr_at_1 value: 35.913000000000004 - type: mrr_at_10 value: 48.147 - type: mrr_at_100 value: 48.91 - type: mrr_at_1000 value: 48.949 - type: mrr_at_3 value: 45.269999999999996 - type: mrr_at_5 value: 47.115 - type: ndcg_at_1 value: 35.913000000000004 - type: ndcg_at_10 value: 54.03 - type: ndcg_at_100 value: 57.839 - type: ndcg_at_1000 value: 58.925000000000004 - type: ndcg_at_3 value: 48.217999999999996 - type: ndcg_at_5 value: 51.56699999999999 - type: precision_at_1 value: 35.913000000000004 - type: precision_at_10 value: 7.244000000000001 - type: precision_at_100 value: 0.9039999999999999 - type: precision_at_1000 value: 0.099 - type: precision_at_3 value: 18.905 - type: precision_at_5 value: 12.981000000000002 - type: recall_at_1 value: 35.913000000000004 - type: recall_at_10 value: 72.441 - type: recall_at_100 value: 90.41799999999999 - type: recall_at_1000 value: 99.099 - type: recall_at_3 value: 56.716 - type: recall_at_5 value: 64.90599999999999 - task: type: PairClassification dataset: name: MTEB OpusparcusPC (fr) type: GEM/opusparcus config: fr split: test revision: 9e9b1f8ef51616073f47f306f7f47dd91663f86a metrics: - type: cos_sim_accuracy value: 99.90069513406156 - type: cos_sim_ap value: 100.0 - type: cos_sim_f1 value: 99.95032290114257 - type: cos_sim_precision value: 100.0 - type: cos_sim_recall value: 99.90069513406156 - type: dot_accuracy value: 99.90069513406156 - type: dot_ap value: 100.0 - type: dot_f1 value: 99.95032290114257 - type: dot_precision value: 100.0 - type: dot_recall value: 99.90069513406156 - type: euclidean_accuracy value: 99.90069513406156 - type: euclidean_ap value: 100.0 - type: euclidean_f1 value: 99.95032290114257 - type: euclidean_precision value: 100.0 - type: euclidean_recall value: 99.90069513406156 - type: manhattan_accuracy value: 99.90069513406156 - type: manhattan_ap value: 100.0 - type: manhattan_f1 value: 99.95032290114257 - type: manhattan_precision value: 100.0 - type: manhattan_recall value: 99.90069513406156 - type: max_accuracy value: 99.90069513406156 - type: max_ap value: 100.0 - type: max_f1 value: 99.95032290114257 - task: type: PairClassification dataset: name: MTEB PawsX (fr) type: paws-x config: fr split: test revision: 8a04d940a42cd40658986fdd8e3da561533a3646 metrics: - type: cos_sim_accuracy value: 75.25 - type: cos_sim_ap value: 80.86376001270014 - type: cos_sim_f1 value: 73.65945437441204 - type: cos_sim_precision value: 64.02289452166802 - type: cos_sim_recall value: 86.71096345514951 - type: dot_accuracy value: 75.25 - type: dot_ap value: 80.93686107633002 - type: dot_f1 value: 73.65945437441204 - type: dot_precision value: 64.02289452166802 - type: dot_recall value: 86.71096345514951 - type: euclidean_accuracy value: 75.25 - type: euclidean_ap value: 80.86379136218862 - type: euclidean_f1 value: 73.65945437441204 - type: euclidean_precision value: 64.02289452166802 - type: euclidean_recall value: 86.71096345514951 - type: manhattan_accuracy value: 75.3 - type: manhattan_ap value: 80.87826606097734 - type: manhattan_f1 value: 73.68421052631581 - type: manhattan_precision value: 64.0 - type: manhattan_recall value: 86.82170542635659 - type: max_accuracy value: 75.3 - type: max_ap value: 80.93686107633002 - type: max_f1 value: 73.68421052631581 - task: type: STS dataset: name: MTEB SICKFr type: Lajavaness/SICK-fr config: default split: test revision: e077ab4cf4774a1e36d86d593b150422fafd8e8a metrics: - type: cos_sim_pearson value: 81.42349425981143 - type: cos_sim_spearman value: 78.90454327031226 - type: euclidean_pearson value: 78.39086497435166 - type: euclidean_spearman value: 78.9046133980509 - type: manhattan_pearson value: 78.63743094286502 - type: manhattan_spearman value: 79.12136348449269 - task: type: STS dataset: name: MTEB STS22 (fr) type: mteb/sts22-crosslingual-sts config: fr split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 81.452697919749 - type: cos_sim_spearman value: 82.58116836039301 - type: euclidean_pearson value: 81.04038478932786 - type: euclidean_spearman value: 82.58116836039301 - type: manhattan_pearson value: 81.37075396187771 - type: manhattan_spearman value: 82.73678231355368 - task: type: STS dataset: name: MTEB STSBenchmarkMultilingualSTS (fr) type: stsb_multi_mt config: fr split: test revision: 93d57ef91790589e3ce9c365164337a8a78b7632 metrics: - type: cos_sim_pearson value: 85.7419764013806 - type: cos_sim_spearman value: 85.46085808849622 - type: euclidean_pearson value: 83.70449639870063 - type: euclidean_spearman value: 85.46159013076233 - type: manhattan_pearson value: 83.95259510313929 - type: manhattan_spearman value: 85.8029724659458 - task: type: Summarization dataset: name: MTEB SummEvalFr type: lyon-nlp/summarization-summeval-fr-p2p config: default split: test revision: b385812de6a9577b6f4d0f88c6a6e35395a94054 metrics: - type: cos_sim_pearson value: 32.61063271753325 - type: cos_sim_spearman value: 31.454589417353603 - type: dot_pearson value: 32.6106288643431 - type: dot_spearman value: 31.454589417353603 - task: type: Reranking dataset: name: MTEB SyntecReranking type: lyon-nlp/mteb-fr-reranking-syntec-s2p config: default split: test revision: b205c5084a0934ce8af14338bf03feb19499c84d metrics: - type: map value: 84.31666666666666 - type: mrr value: 84.31666666666666 - task: type: Retrieval dataset: name: MTEB SyntecRetrieval type: lyon-nlp/mteb-fr-retrieval-syntec-s2p config: default split: test revision: 77f7e271bf4a92b24fce5119f3486b583ca016ff metrics: - type: map_at_1 value: 63.0 - type: map_at_10 value: 73.471 - type: map_at_100 value: 73.87 - type: map_at_1000 value: 73.87 - type: map_at_3 value: 70.5 - type: map_at_5 value: 73.05 - type: mrr_at_1 value: 63.0 - type: mrr_at_10 value: 73.471 - type: mrr_at_100 value: 73.87 - type: mrr_at_1000 value: 73.87 - type: mrr_at_3 value: 70.5 - type: mrr_at_5 value: 73.05 - type: ndcg_at_1 value: 63.0 - type: ndcg_at_10 value: 78.255 - type: ndcg_at_100 value: 79.88 - type: ndcg_at_1000 value: 79.88 - type: ndcg_at_3 value: 72.702 - type: ndcg_at_5 value: 77.264 - type: precision_at_1 value: 63.0 - type: precision_at_10 value: 9.3 - type: precision_at_100 value: 1.0 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 26.333000000000002 - type: precision_at_5 value: 18.0 - type: recall_at_1 value: 63.0 - type: recall_at_10 value: 93.0 - type: recall_at_100 value: 100.0 - type: recall_at_1000 value: 100.0 - type: recall_at_3 value: 79.0 - type: recall_at_5 value: 90.0 - task: type: Retrieval dataset: name: MTEB XPQARetrieval (fr) type: jinaai/xpqa config: fr split: test revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f metrics: - type: map_at_1 value: 40.338 - type: map_at_10 value: 61.927 - type: map_at_100 value: 63.361999999999995 - type: map_at_1000 value: 63.405 - type: map_at_3 value: 55.479 - type: map_at_5 value: 59.732 - type: mrr_at_1 value: 63.551 - type: mrr_at_10 value: 71.006 - type: mrr_at_100 value: 71.501 - type: mrr_at_1000 value: 71.509 - type: mrr_at_3 value: 69.07 - type: mrr_at_5 value: 70.165 - type: ndcg_at_1 value: 63.551 - type: ndcg_at_10 value: 68.297 - type: ndcg_at_100 value: 73.13199999999999 - type: ndcg_at_1000 value: 73.751 - type: ndcg_at_3 value: 62.999 - type: ndcg_at_5 value: 64.89 - type: precision_at_1 value: 63.551 - type: precision_at_10 value: 15.661 - type: precision_at_100 value: 1.9789999999999999 - type: precision_at_1000 value: 0.207 - type: precision_at_3 value: 38.273 - type: precision_at_5 value: 27.61 - type: recall_at_1 value: 40.338 - type: recall_at_10 value: 77.267 - type: recall_at_100 value: 95.892 - type: recall_at_1000 value: 99.75500000000001 - type: recall_at_3 value: 60.36 - type: recall_at_5 value: 68.825 - task: type: Clustering dataset: name: MTEB 8TagsClustering type: PL-MTEB/8tags-clustering config: default split: test revision: None metrics: - type: v_measure value: 51.36126303874126 - task: type: Classification dataset: name: MTEB AllegroReviews type: PL-MTEB/allegro-reviews config: default split: test revision: None metrics: - type: accuracy value: 67.13717693836979 - type: f1 value: 57.27609848003782 - task: type: Retrieval dataset: name: MTEB ArguAna-PL type: clarin-knext/arguana-pl config: default split: test revision: 63fc86750af76253e8c760fc9e534bbf24d260a2 metrics: - type: map_at_1 value: 35.276999999999994 - type: map_at_10 value: 51.086 - type: map_at_100 value: 51.788000000000004 - type: map_at_1000 value: 51.791 - type: map_at_3 value: 46.147 - type: map_at_5 value: 49.078 - type: mrr_at_1 value: 35.917 - type: mrr_at_10 value: 51.315999999999995 - type: mrr_at_100 value: 52.018 - type: mrr_at_1000 value: 52.022 - type: mrr_at_3 value: 46.349000000000004 - type: mrr_at_5 value: 49.297000000000004 - type: ndcg_at_1 value: 35.276999999999994 - type: ndcg_at_10 value: 59.870999999999995 - type: ndcg_at_100 value: 62.590999999999994 - type: ndcg_at_1000 value: 62.661 - type: ndcg_at_3 value: 49.745 - type: ndcg_at_5 value: 55.067 - type: precision_at_1 value: 35.276999999999994 - type: precision_at_10 value: 8.791 - type: precision_at_100 value: 0.991 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 20.057 - type: precision_at_5 value: 14.637 - type: recall_at_1 value: 35.276999999999994 - type: recall_at_10 value: 87.909 - type: recall_at_100 value: 99.14699999999999 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 60.171 - type: recall_at_5 value: 73.18599999999999 - task: type: Classification dataset: name: MTEB CBD type: PL-MTEB/cbd config: default split: test revision: None metrics: - type: accuracy value: 78.03000000000002 - type: ap value: 29.12548553897622 - type: f1 value: 66.54857118886073 - task: type: PairClassification dataset: name: MTEB CDSC-E type: PL-MTEB/cdsce-pairclassification config: default split: test revision: None metrics: - type: cos_sim_accuracy value: 89.0 - type: cos_sim_ap value: 76.75437826834582 - type: cos_sim_f1 value: 66.4850136239782 - type: cos_sim_precision value: 68.92655367231639 - type: cos_sim_recall value: 64.21052631578948 - type: dot_accuracy value: 89.0 - type: dot_ap value: 76.75437826834582 - type: dot_f1 value: 66.4850136239782 - type: dot_precision value: 68.92655367231639 - type: dot_recall value: 64.21052631578948 - type: euclidean_accuracy value: 89.0 - type: euclidean_ap value: 76.75437826834582 - type: euclidean_f1 value: 66.4850136239782 - type: euclidean_precision value: 68.92655367231639 - type: euclidean_recall value: 64.21052631578948 - type: manhattan_accuracy value: 89.0 - type: manhattan_ap value: 76.66074220647083 - type: manhattan_f1 value: 66.47058823529412 - type: manhattan_precision value: 75.33333333333333 - type: manhattan_recall value: 59.473684210526315 - type: max_accuracy value: 89.0 - type: max_ap value: 76.75437826834582 - type: max_f1 value: 66.4850136239782 - task: type: STS dataset: name: MTEB CDSC-R type: PL-MTEB/cdscr-sts config: default split: test revision: None metrics: - type: cos_sim_pearson value: 93.12903172428328 - type: cos_sim_spearman value: 92.66381487060741 - type: euclidean_pearson value: 90.37278396708922 - type: euclidean_spearman value: 92.66381487060741 - type: manhattan_pearson value: 90.32503296540962 - type: manhattan_spearman value: 92.6902938354313 - task: type: Retrieval dataset: name: MTEB DBPedia-PL type: clarin-knext/dbpedia-pl config: default split: test revision: 76afe41d9af165cc40999fcaa92312b8b012064a metrics: - type: map_at_1 value: 8.83 - type: map_at_10 value: 18.326 - type: map_at_100 value: 26.496 - type: map_at_1000 value: 28.455000000000002 - type: map_at_3 value: 12.933 - type: map_at_5 value: 15.168000000000001 - type: mrr_at_1 value: 66.0 - type: mrr_at_10 value: 72.76700000000001 - type: mrr_at_100 value: 73.203 - type: mrr_at_1000 value: 73.219 - type: mrr_at_3 value: 71.458 - type: mrr_at_5 value: 72.246 - type: ndcg_at_1 value: 55.375 - type: ndcg_at_10 value: 41.3 - type: ndcg_at_100 value: 45.891 - type: ndcg_at_1000 value: 52.905 - type: ndcg_at_3 value: 46.472 - type: ndcg_at_5 value: 43.734 - type: precision_at_1 value: 66.0 - type: precision_at_10 value: 33.074999999999996 - type: precision_at_100 value: 11.094999999999999 - type: precision_at_1000 value: 2.374 - type: precision_at_3 value: 48.583 - type: precision_at_5 value: 42.0 - type: recall_at_1 value: 8.83 - type: recall_at_10 value: 22.587 - type: recall_at_100 value: 50.61600000000001 - type: recall_at_1000 value: 73.559 - type: recall_at_3 value: 13.688 - type: recall_at_5 value: 16.855 - task: type: Retrieval dataset: name: MTEB FiQA-PL type: clarin-knext/fiqa-pl config: default split: test revision: 2e535829717f8bf9dc829b7f911cc5bbd4e6608e metrics: - type: map_at_1 value: 20.587 - type: map_at_10 value: 33.095 - type: map_at_100 value: 35.24 - type: map_at_1000 value: 35.429 - type: map_at_3 value: 28.626 - type: map_at_5 value: 31.136999999999997 - type: mrr_at_1 value: 40.586 - type: mrr_at_10 value: 49.033 - type: mrr_at_100 value: 49.952999999999996 - type: mrr_at_1000 value: 49.992 - type: mrr_at_3 value: 46.553 - type: mrr_at_5 value: 48.035 - type: ndcg_at_1 value: 40.586 - type: ndcg_at_10 value: 41.046 - type: ndcg_at_100 value: 48.586 - type: ndcg_at_1000 value: 51.634 - type: ndcg_at_3 value: 36.773 - type: ndcg_at_5 value: 38.389 - type: precision_at_1 value: 40.586 - type: precision_at_10 value: 11.466 - type: precision_at_100 value: 1.909 - type: precision_at_1000 value: 0.245 - type: precision_at_3 value: 24.434 - type: precision_at_5 value: 18.426000000000002 - type: recall_at_1 value: 20.587 - type: recall_at_10 value: 47.986000000000004 - type: recall_at_100 value: 75.761 - type: recall_at_1000 value: 94.065 - type: recall_at_3 value: 33.339 - type: recall_at_5 value: 39.765 - task: type: Retrieval dataset: name: MTEB HotpotQA-PL type: clarin-knext/hotpotqa-pl config: default split: test revision: a0bd479ac97b4ccb5bd6ce320c415d0bb4beb907 metrics: - type: map_at_1 value: 40.878 - type: map_at_10 value: 58.775999999999996 - type: map_at_100 value: 59.632 - type: map_at_1000 value: 59.707 - type: map_at_3 value: 56.074 - type: map_at_5 value: 57.629 - type: mrr_at_1 value: 81.756 - type: mrr_at_10 value: 86.117 - type: mrr_at_100 value: 86.299 - type: mrr_at_1000 value: 86.30600000000001 - type: mrr_at_3 value: 85.345 - type: mrr_at_5 value: 85.832 - type: ndcg_at_1 value: 81.756 - type: ndcg_at_10 value: 67.608 - type: ndcg_at_100 value: 70.575 - type: ndcg_at_1000 value: 71.99600000000001 - type: ndcg_at_3 value: 63.723 - type: ndcg_at_5 value: 65.70700000000001 - type: precision_at_1 value: 81.756 - type: precision_at_10 value: 13.619 - type: precision_at_100 value: 1.5939999999999999 - type: precision_at_1000 value: 0.178 - type: precision_at_3 value: 39.604 - type: precision_at_5 value: 25.332 - type: recall_at_1 value: 40.878 - type: recall_at_10 value: 68.096 - type: recall_at_100 value: 79.696 - type: recall_at_1000 value: 89.082 - type: recall_at_3 value: 59.406000000000006 - type: recall_at_5 value: 63.329 - task: type: Retrieval dataset: name: MTEB MSMARCO-PL type: clarin-knext/msmarco-pl config: default split: test revision: 8634c07806d5cce3a6138e260e59b81760a0a640 metrics: - type: map_at_1 value: 2.1839999999999997 - type: map_at_10 value: 11.346 - type: map_at_100 value: 30.325000000000003 - type: map_at_1000 value: 37.806 - type: map_at_3 value: 4.842 - type: map_at_5 value: 6.891 - type: mrr_at_1 value: 86.047 - type: mrr_at_10 value: 89.14699999999999 - type: mrr_at_100 value: 89.46600000000001 - type: mrr_at_1000 value: 89.46600000000001 - type: mrr_at_3 value: 89.14699999999999 - type: mrr_at_5 value: 89.14699999999999 - type: ndcg_at_1 value: 67.829 - type: ndcg_at_10 value: 62.222 - type: ndcg_at_100 value: 55.337 - type: ndcg_at_1000 value: 64.076 - type: ndcg_at_3 value: 68.12700000000001 - type: ndcg_at_5 value: 64.987 - type: precision_at_1 value: 86.047 - type: precision_at_10 value: 69.535 - type: precision_at_100 value: 32.93 - type: precision_at_1000 value: 6.6049999999999995 - type: precision_at_3 value: 79.845 - type: precision_at_5 value: 75.349 - type: recall_at_1 value: 2.1839999999999997 - type: recall_at_10 value: 12.866 - type: recall_at_100 value: 43.505 - type: recall_at_1000 value: 72.366 - type: recall_at_3 value: 4.947 - type: recall_at_5 value: 7.192 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (pl) type: mteb/amazon_massive_intent config: pl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 80.75319435104238 - type: f1 value: 77.58961444860606 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (pl) type: mteb/amazon_massive_scenario config: pl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 85.54472091459313 - type: f1 value: 84.29498563572106 - task: type: Retrieval dataset: name: MTEB NFCorpus-PL type: clarin-knext/nfcorpus-pl config: default split: test revision: 9a6f9567fda928260afed2de480d79c98bf0bec0 metrics: - type: map_at_1 value: 4.367 - type: map_at_10 value: 10.38 - type: map_at_100 value: 13.516 - type: map_at_1000 value: 14.982000000000001 - type: map_at_3 value: 7.367 - type: map_at_5 value: 8.59 - type: mrr_at_1 value: 41.486000000000004 - type: mrr_at_10 value: 48.886 - type: mrr_at_100 value: 49.657000000000004 - type: mrr_at_1000 value: 49.713 - type: mrr_at_3 value: 46.904 - type: mrr_at_5 value: 48.065000000000005 - type: ndcg_at_1 value: 40.402 - type: ndcg_at_10 value: 30.885 - type: ndcg_at_100 value: 28.393 - type: ndcg_at_1000 value: 37.428 - type: ndcg_at_3 value: 35.394999999999996 - type: ndcg_at_5 value: 33.391999999999996 - type: precision_at_1 value: 41.486000000000004 - type: precision_at_10 value: 23.437 - type: precision_at_100 value: 7.638 - type: precision_at_1000 value: 2.0389999999999997 - type: precision_at_3 value: 32.817 - type: precision_at_5 value: 28.915999999999997 - type: recall_at_1 value: 4.367 - type: recall_at_10 value: 14.655000000000001 - type: recall_at_100 value: 29.665999999999997 - type: recall_at_1000 value: 62.073 - type: recall_at_3 value: 8.51 - type: recall_at_5 value: 10.689 - task: type: Retrieval dataset: name: MTEB NQ-PL type: clarin-knext/nq-pl config: default split: test revision: f171245712cf85dd4700b06bef18001578d0ca8d metrics: - type: map_at_1 value: 28.616000000000003 - type: map_at_10 value: 41.626000000000005 - type: map_at_100 value: 42.689 - type: map_at_1000 value: 42.733 - type: map_at_3 value: 37.729 - type: map_at_5 value: 39.879999999999995 - type: mrr_at_1 value: 32.068000000000005 - type: mrr_at_10 value: 44.029 - type: mrr_at_100 value: 44.87 - type: mrr_at_1000 value: 44.901 - type: mrr_at_3 value: 40.687 - type: mrr_at_5 value: 42.625 - type: ndcg_at_1 value: 32.068000000000005 - type: ndcg_at_10 value: 48.449999999999996 - type: ndcg_at_100 value: 53.13 - type: ndcg_at_1000 value: 54.186 - type: ndcg_at_3 value: 40.983999999999995 - type: ndcg_at_5 value: 44.628 - type: precision_at_1 value: 32.068000000000005 - type: precision_at_10 value: 7.9750000000000005 - type: precision_at_100 value: 1.061 - type: precision_at_1000 value: 0.116 - type: precision_at_3 value: 18.404999999999998 - type: precision_at_5 value: 13.111 - type: recall_at_1 value: 28.616000000000003 - type: recall_at_10 value: 66.956 - type: recall_at_100 value: 87.657 - type: recall_at_1000 value: 95.548 - type: recall_at_3 value: 47.453 - type: recall_at_5 value: 55.87800000000001 - task: type: Classification dataset: name: MTEB PAC type: laugustyniak/abusive-clauses-pl config: default split: test revision: None metrics: - type: accuracy value: 69.04141326382856 - type: ap value: 77.47589122111044 - type: f1 value: 66.6332277374775 - task: type: PairClassification dataset: name: MTEB PPC type: PL-MTEB/ppc-pairclassification config: default split: test revision: None metrics: - type: cos_sim_accuracy value: 86.4 - type: cos_sim_ap value: 94.1044939667201 - type: cos_sim_f1 value: 88.78048780487805 - type: cos_sim_precision value: 87.22044728434504 - type: cos_sim_recall value: 90.39735099337747 - type: dot_accuracy value: 86.4 - type: dot_ap value: 94.1044939667201 - type: dot_f1 value: 88.78048780487805 - type: dot_precision value: 87.22044728434504 - type: dot_recall value: 90.39735099337747 - type: euclidean_accuracy value: 86.4 - type: euclidean_ap value: 94.1044939667201 - type: euclidean_f1 value: 88.78048780487805 - type: euclidean_precision value: 87.22044728434504 - type: euclidean_recall value: 90.39735099337747 - type: manhattan_accuracy value: 86.4 - type: manhattan_ap value: 94.11438365697387 - type: manhattan_f1 value: 88.77968877968877 - type: manhattan_precision value: 87.84440842787681 - type: manhattan_recall value: 89.73509933774835 - type: max_accuracy value: 86.4 - type: max_ap value: 94.11438365697387 - type: max_f1 value: 88.78048780487805 - task: type: PairClassification dataset: name: MTEB PSC type: PL-MTEB/psc-pairclassification config: default split: test revision: None metrics: - type: cos_sim_accuracy value: 97.86641929499072 - type: cos_sim_ap value: 99.36904211868182 - type: cos_sim_f1 value: 96.56203288490283 - type: cos_sim_precision value: 94.72140762463343 - type: cos_sim_recall value: 98.47560975609755 - type: dot_accuracy value: 97.86641929499072 - type: dot_ap value: 99.36904211868183 - type: dot_f1 value: 96.56203288490283 - type: dot_precision value: 94.72140762463343 - type: dot_recall value: 98.47560975609755 - type: euclidean_accuracy value: 97.86641929499072 - type: euclidean_ap value: 99.36904211868183 - type: euclidean_f1 value: 96.56203288490283 - type: euclidean_precision value: 94.72140762463343 - type: euclidean_recall value: 98.47560975609755 - type: manhattan_accuracy value: 98.14471243042672 - type: manhattan_ap value: 99.43359540492416 - type: manhattan_f1 value: 96.98795180722892 - type: manhattan_precision value: 95.83333333333334 - type: manhattan_recall value: 98.17073170731707 - type: max_accuracy value: 98.14471243042672 - type: max_ap value: 99.43359540492416 - type: max_f1 value: 96.98795180722892 - task: type: Classification dataset: name: MTEB PolEmo2.0-IN type: PL-MTEB/polemo2_in config: default split: test revision: None metrics: - type: accuracy value: 89.39058171745152 - type: f1 value: 86.8552093529568 - task: type: Classification dataset: name: MTEB PolEmo2.0-OUT type: PL-MTEB/polemo2_out config: default split: test revision: None metrics: - type: accuracy value: 74.97975708502024 - type: f1 value: 58.73081628832407 - task: type: Retrieval dataset: name: MTEB Quora-PL type: clarin-knext/quora-pl config: default split: test revision: 0be27e93455051e531182b85e85e425aba12e9d4 metrics: - type: map_at_1 value: 64.917 - type: map_at_10 value: 78.74600000000001 - type: map_at_100 value: 79.501 - type: map_at_1000 value: 79.524 - type: map_at_3 value: 75.549 - type: map_at_5 value: 77.495 - type: mrr_at_1 value: 74.9 - type: mrr_at_10 value: 82.112 - type: mrr_at_100 value: 82.314 - type: mrr_at_1000 value: 82.317 - type: mrr_at_3 value: 80.745 - type: mrr_at_5 value: 81.607 - type: ndcg_at_1 value: 74.83999999999999 - type: ndcg_at_10 value: 83.214 - type: ndcg_at_100 value: 84.997 - type: ndcg_at_1000 value: 85.207 - type: ndcg_at_3 value: 79.547 - type: ndcg_at_5 value: 81.46600000000001 - type: precision_at_1 value: 74.83999999999999 - type: precision_at_10 value: 12.822 - type: precision_at_100 value: 1.506 - type: precision_at_1000 value: 0.156 - type: precision_at_3 value: 34.903 - type: precision_at_5 value: 23.16 - type: recall_at_1 value: 64.917 - type: recall_at_10 value: 92.27199999999999 - type: recall_at_100 value: 98.715 - type: recall_at_1000 value: 99.854 - type: recall_at_3 value: 82.04599999999999 - type: recall_at_5 value: 87.2 - task: type: Retrieval dataset: name: MTEB SCIDOCS-PL type: clarin-knext/scidocs-pl config: default split: test revision: 45452b03f05560207ef19149545f168e596c9337 metrics: - type: map_at_1 value: 3.51 - type: map_at_10 value: 9.046999999999999 - type: map_at_100 value: 10.823 - type: map_at_1000 value: 11.144 - type: map_at_3 value: 6.257 - type: map_at_5 value: 7.648000000000001 - type: mrr_at_1 value: 17.299999999999997 - type: mrr_at_10 value: 27.419 - type: mrr_at_100 value: 28.618 - type: mrr_at_1000 value: 28.685 - type: mrr_at_3 value: 23.817 - type: mrr_at_5 value: 25.927 - type: ndcg_at_1 value: 17.299999999999997 - type: ndcg_at_10 value: 16.084 - type: ndcg_at_100 value: 23.729 - type: ndcg_at_1000 value: 29.476999999999997 - type: ndcg_at_3 value: 14.327000000000002 - type: ndcg_at_5 value: 13.017999999999999 - type: precision_at_1 value: 17.299999999999997 - type: precision_at_10 value: 8.63 - type: precision_at_100 value: 1.981 - type: precision_at_1000 value: 0.336 - type: precision_at_3 value: 13.4 - type: precision_at_5 value: 11.700000000000001 - type: recall_at_1 value: 3.51 - type: recall_at_10 value: 17.518 - type: recall_at_100 value: 40.275 - type: recall_at_1000 value: 68.203 - type: recall_at_3 value: 8.155 - type: recall_at_5 value: 11.875 - task: type: PairClassification dataset: name: MTEB SICK-E-PL type: PL-MTEB/sicke-pl-pairclassification config: default split: test revision: None metrics: - type: cos_sim_accuracy value: 86.30248675091724 - type: cos_sim_ap value: 83.6756734006714 - type: cos_sim_f1 value: 74.97367497367497 - type: cos_sim_precision value: 73.91003460207612 - type: cos_sim_recall value: 76.06837606837607 - type: dot_accuracy value: 86.30248675091724 - type: dot_ap value: 83.6756734006714 - type: dot_f1 value: 74.97367497367497 - type: dot_precision value: 73.91003460207612 - type: dot_recall value: 76.06837606837607 - type: euclidean_accuracy value: 86.30248675091724 - type: euclidean_ap value: 83.67566984333091 - type: euclidean_f1 value: 74.97367497367497 - type: euclidean_precision value: 73.91003460207612 - type: euclidean_recall value: 76.06837606837607 - type: manhattan_accuracy value: 86.28210354667753 - type: manhattan_ap value: 83.64216119130171 - type: manhattan_f1 value: 74.92152075340078 - type: manhattan_precision value: 73.4107997265892 - type: manhattan_recall value: 76.49572649572649 - type: max_accuracy value: 86.30248675091724 - type: max_ap value: 83.6756734006714 - type: max_f1 value: 74.97367497367497 - task: type: STS dataset: name: MTEB SICK-R-PL type: PL-MTEB/sickr-pl-sts config: default split: test revision: None metrics: - type: cos_sim_pearson value: 82.23295940859121 - type: cos_sim_spearman value: 78.89329160768719 - type: euclidean_pearson value: 79.56019107076818 - type: euclidean_spearman value: 78.89330209904084 - type: manhattan_pearson value: 79.76098513973719 - type: manhattan_spearman value: 79.05490162570123 - task: type: STS dataset: name: MTEB STS22 (pl) type: mteb/sts22-crosslingual-sts config: pl split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 37.732606308062486 - type: cos_sim_spearman value: 41.01645667030284 - type: euclidean_pearson value: 26.61722556367085 - type: euclidean_spearman value: 41.01645667030284 - type: manhattan_pearson value: 26.60917378970807 - type: manhattan_spearman value: 41.51335727617614 - task: type: Retrieval dataset: name: MTEB SciFact-PL type: clarin-knext/scifact-pl config: default split: test revision: 47932a35f045ef8ed01ba82bf9ff67f6e109207e metrics: - type: map_at_1 value: 54.31700000000001 - type: map_at_10 value: 65.564 - type: map_at_100 value: 66.062 - type: map_at_1000 value: 66.08699999999999 - type: map_at_3 value: 62.592999999999996 - type: map_at_5 value: 63.888 - type: mrr_at_1 value: 56.99999999999999 - type: mrr_at_10 value: 66.412 - type: mrr_at_100 value: 66.85900000000001 - type: mrr_at_1000 value: 66.88 - type: mrr_at_3 value: 64.22200000000001 - type: mrr_at_5 value: 65.206 - type: ndcg_at_1 value: 56.99999999999999 - type: ndcg_at_10 value: 70.577 - type: ndcg_at_100 value: 72.879 - type: ndcg_at_1000 value: 73.45 - type: ndcg_at_3 value: 65.5 - type: ndcg_at_5 value: 67.278 - type: precision_at_1 value: 56.99999999999999 - type: precision_at_10 value: 9.667 - type: precision_at_100 value: 1.083 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 26.0 - type: precision_at_5 value: 16.933 - type: recall_at_1 value: 54.31700000000001 - type: recall_at_10 value: 85.056 - type: recall_at_100 value: 95.667 - type: recall_at_1000 value: 100.0 - type: recall_at_3 value: 71.0 - type: recall_at_5 value: 75.672 - task: type: Retrieval dataset: name: MTEB TRECCOVID-PL type: clarin-knext/trec-covid-pl config: default split: test revision: 81bcb408f33366c2a20ac54adafad1ae7e877fdd metrics: - type: map_at_1 value: 0.245 - type: map_at_10 value: 2.051 - type: map_at_100 value: 12.009 - type: map_at_1000 value: 27.448 - type: map_at_3 value: 0.721 - type: map_at_5 value: 1.13 - type: mrr_at_1 value: 88.0 - type: mrr_at_10 value: 93.0 - type: mrr_at_100 value: 93.0 - type: mrr_at_1000 value: 93.0 - type: mrr_at_3 value: 93.0 - type: mrr_at_5 value: 93.0 - type: ndcg_at_1 value: 85.0 - type: ndcg_at_10 value: 80.303 - type: ndcg_at_100 value: 61.23499999999999 - type: ndcg_at_1000 value: 52.978 - type: ndcg_at_3 value: 84.419 - type: ndcg_at_5 value: 82.976 - type: precision_at_1 value: 88.0 - type: precision_at_10 value: 83.39999999999999 - type: precision_at_100 value: 61.96 - type: precision_at_1000 value: 22.648 - type: precision_at_3 value: 89.333 - type: precision_at_5 value: 87.2 - type: recall_at_1 value: 0.245 - type: recall_at_10 value: 2.193 - type: recall_at_100 value: 14.938 - type: recall_at_1000 value: 48.563 - type: recall_at_3 value: 0.738 - type: recall_at_5 value: 1.173 --- # Fashion-Italia/gte-Qwen2-7B-instruct-Q4_K_M-GGUF This model was converted to GGUF format from [`Alibaba-NLP/gte-Qwen2-7B-instruct`](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Fashion-Italia/gte-Qwen2-7B-instruct-Q4_K_M-GGUF --hf-file gte-qwen2-7b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Fashion-Italia/gte-Qwen2-7B-instruct-Q4_K_M-GGUF --hf-file gte-qwen2-7b-instruct-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Fashion-Italia/gte-Qwen2-7B-instruct-Q4_K_M-GGUF --hf-file gte-qwen2-7b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Fashion-Italia/gte-Qwen2-7B-instruct-Q4_K_M-GGUF --hf-file gte-qwen2-7b-instruct-q4_k_m.gguf -c 2048 ```
[ "BIOSSES", "SCIFACT" ]
itlwas/Einstein-v4-Qwen-1.5-32B-Q4_K_M-GGUF
itlwas
null
[ "gguf", "axolotl", "generated_from_trainer", "phi", "phi2", "einstein", "instruct", "finetune", "chatml", "gpt4", "synthetic data", "science", "physics", "chemistry", "biology", "math", "llama-cpp", "gguf-my-repo", "en", "dataset:allenai/ai2_arc", "dataset:camel-ai/physics", "dataset:camel-ai/chemistry", "dataset:camel-ai/biology", "dataset:camel-ai/math", "dataset:metaeval/reclor", "dataset:openbookqa", "dataset:mandyyyyii/scibench", "dataset:derek-thomas/ScienceQA", "dataset:TIGER-Lab/ScienceEval", "dataset:jondurbin/airoboros-3.2", "dataset:LDJnr/Capybara", "dataset:Cot-Alpaca-GPT4-From-OpenHermes-2.5", "dataset:STEM-AI-mtl/Electrical-engineering", "dataset:knowrohit07/saraswati-stem", "dataset:sablo/oasst2_curated", "dataset:glaiveai/glaive-code-assistant", "dataset:lmsys/lmsys-chat-1m", "dataset:TIGER-Lab/MathInstruct", "dataset:bigbio/med_qa", "dataset:meta-math/MetaMathQA-40K", "dataset:piqa", "dataset:scibench", "dataset:sciq", "dataset:Open-Orca/SlimOrca", "dataset:migtissera/Synthia-v1.3", "base_model:Weyaxi/Einstein-v4-Qwen-1.5-32B", "base_model:quantized:Weyaxi/Einstein-v4-Qwen-1.5-32B", "license:other", "endpoints_compatible", "region:us", "conversational" ]
2024-12-29T00:56:59Z
2024-12-29T00:58:23+00:00
14
0
--- base_model: Weyaxi/Einstein-v4-Qwen-1.5-32B datasets: - allenai/ai2_arc - camel-ai/physics - camel-ai/chemistry - camel-ai/biology - camel-ai/math - metaeval/reclor - openbookqa - mandyyyyii/scibench - derek-thomas/ScienceQA - TIGER-Lab/ScienceEval - jondurbin/airoboros-3.2 - LDJnr/Capybara - Cot-Alpaca-GPT4-From-OpenHermes-2.5 - STEM-AI-mtl/Electrical-engineering - knowrohit07/saraswati-stem - sablo/oasst2_curated - glaiveai/glaive-code-assistant - lmsys/lmsys-chat-1m - TIGER-Lab/MathInstruct - bigbio/med_qa - meta-math/MetaMathQA-40K - openbookqa - piqa - metaeval/reclor - derek-thomas/ScienceQA - scibench - sciq - Open-Orca/SlimOrca - migtissera/Synthia-v1.3 - TIGER-Lab/ScienceEval language: - en license: other tags: - axolotl - generated_from_trainer - phi - phi2 - einstein - instruct - finetune - chatml - gpt4 - synthetic data - science - physics - chemistry - biology - math - llama-cpp - gguf-my-repo --- # AIronMind/Einstein-v4-Qwen-1.5-32B-Q4_K_M-GGUF This model was converted to GGUF format from [`Weyaxi/Einstein-v4-Qwen-1.5-32B`](https://huggingface.co/Weyaxi/Einstein-v4-Qwen-1.5-32B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Weyaxi/Einstein-v4-Qwen-1.5-32B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo AIronMind/Einstein-v4-Qwen-1.5-32B-Q4_K_M-GGUF --hf-file einstein-v4-qwen-1.5-32b-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo AIronMind/Einstein-v4-Qwen-1.5-32B-Q4_K_M-GGUF --hf-file einstein-v4-qwen-1.5-32b-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo AIronMind/Einstein-v4-Qwen-1.5-32B-Q4_K_M-GGUF --hf-file einstein-v4-qwen-1.5-32b-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo AIronMind/Einstein-v4-Qwen-1.5-32B-Q4_K_M-GGUF --hf-file einstein-v4-qwen-1.5-32b-q4_k_m.gguf -c 2048 ```
[ "SCIQ" ]
Crowno/L3.1-EtherealRainbow-v1.0-rc1-8B-8.0bpw-exl2
Crowno
text-generation
[ "transformers", "safetensors", "llama", "text-generation", "not-for-all-audiences", "axolotl", "qlora", "autoquant", "exl2", "conversational", "en", "license:llama3.1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2025-01-03T03:40:04Z
2025-01-10T01:34:52+00:00
14
1
--- language: - en library_name: transformers license: llama3.1 tags: - not-for-all-audiences - axolotl - qlora - autoquant - exl2 --- <div align="center"> <b style="font-size: 36px;">L3.1-EtherealRainbow-v1.0-rc1-8B</b> <img src="https://huggingface.co/invisietch/L3.1-EtherealRainbow-v1.0-rc1-8B/resolve/main/header.png" style="width:60%"> </div> # Model Details Ethereal Rainbow v1.0 is the sequel to my popular Llama 3 8B merge, EtherealRainbow v0.3. Instead of a straight merge of other peoples' models, v1.0 is a finetune on the Instruct model, using 245 million tokens of training data (approx 177 million of these tokens are my own novel datasets). This model is designed to be suitable for creative writing and roleplay, and to push the boundaries of what's possible with an 8B model. This RC is not a finished product, but your feedback will drive the creation of better models. **This is a release candidate model. It has some known issues and probably some unknown ones too, because the purpose of these early releases is to seek feedback.** # Quantization Formats * [FP16 Safetensors](https://huggingface.co/invisietch/L3.1-EtherealRainbow-v1.0-rc1-8B) * [Static GGUF](https://huggingface.co/invisietch/L3.1-EtherealRainbow-v1.0-rc1-8B-GGUF) * [iMatrix GGUF](https://huggingface.co/mradermacher/L3.1-EtherealRainbow-v1.0-rc1-8B-i1-GGUF) - h/t [mradermacher](https://huggingface.co/mradermacher/) * [Alternative GGUF](https://huggingface.co/mradermacher/L3.1-EtherealRainbow-v1.0-rc1-8B-GGUF) - h/t [mradermacher](https://huggingface.co/mradermacher/) # Feedback I appreciate all feedback on any of my models, you can use: * [My Discord server](https://discord.gg/AJwZuu7Ncx) - requires Discord. * [The Community tab](https://huggingface.co/invisietch/Nimbus-Miqu-v0.1-70B/discussions) - requires HF login. * [The SillyTavern Discord thread](https://discord.com/channels/1100685673633153084/1258146078679633990) - must be on [SillyTavern Discord](https://discord.gg/sillytavern). * Discord DMs to **invisietch**. Your feedback is how I improve these models for future versions. # Disclaimer This model is fairly uncensored. It can generate explicit, disturbing or offensive responses. Use responsibly. I am not responsible for your use of this model, and it's your responsibility to follow the Llama 3.1 license agreement. # Prompting Format I'd recommend Llama-3 Instruct prompting format: ``` <|begin_of_text|><|start_header_id|>system<|end_header_id|> {system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|> {input}<|eot_id|><|start_header_id|>assistant<|end_header_id|> {output}<|eot_id|> ``` # Sampler Settings I'm running the following sampler settings but this is an RC and they may not be optimal. - **Temperature:** Dynamic 0.7-1.3 - **Min-P:** 0.1 - **Rep Pen:** 0.06 - **Rep Pen Range:** 1536 - **XTC:** 0.1/0.05 If you get completely incoherent responses, feel free to use these as a starting point. # Example Storywriting These prompts are used on SillyTavern with a fairly basic narrator card. I have trimmed the start and finish where the narrator decided to add chapter headings, commentary and the like. All samples are made with the Q8_0 GGUF loaded with koboldcpp, with response length capped at 2048 tokens. <details> <summary><b>Write me a 3,000 word opening chapter of a 'gritty hard sci-fi' novel, drawing inspiration from the writing styles of Isaac Asimov & Andy Weir. Use third person personal. Include dialogue and internal monologues. The POV character for the opening chapter should be a 26 year old astronaut called Tone on a mission to Europa, who has just realised that the craft for the return journey is broken beyond repair, and he only has supplies for a few months. Given that survival is impossible, he seeks to spend the few months he has researching titan, so his life & mission are not wasted.</b></summary> As Tone stood at the controls of his ship, Europa-1, he couldn't help but feel a sense of pride and accomplishment. His mission had been a resounding success, and he had played a pivotal role in discovering the presence of liquid water beneath the icy surface of the moon. But now, as he stared at the damaged engine before him, his heart sank. "Engine failure," he muttered, his voice shaking. He had lost contact with Earth two weeks ago, and he knew that no one would receive his distress signal until long after he was gone. Panic crept in, and he began to hyperventilate. "No…no…this can't be happening." He slumped to the floor, tears streaming down his face as the reality of his situation set in. The Europa-1 was equipped with supplies for a three-month return trip. With its engine destroyed, there was no way Tone could get back home. His only choice was to spend the remainder of his time on Europa studying Titan, a moon of Saturn, which he believed held more secrets than the moon he was on now. Tone wiped away his tears and took a deep breath. "Well, I guess my mission isn't over yet," he said, trying to sound upbeat. "At least I'll have a chance to learn something new." He turned on the ship's computer and accessed the data banks containing information about Titan. Titan, with its thick atmosphere and hydrocarbon lakes, was the only known celestial body in the solar system that had conditions suitable for human habitation. The discovery of life on Titan could have meant the salvation of humanity, but until now, no spacecraft had been able to make a thorough investigation of the moon. Tone sat down at the control panel and began to scan through the data files. His heart raced with excitement as he delved deeper into the files, discovering fascinating details about Titan's geology, chemistry, and potential biosphere. He spent hours poring over the data, marveling at how much he still didn't know about the moon. As night fell outside, Tone's thoughts turned to his family and friends back on Earth. He missed them dearly, especially his parents, who had sacrificed so much for him to pursue his dreams of becoming an astronaut. He wondered if he would ever see them again and hoped that they wouldn't think of him as a failure. Tone tried to distract himself by eating a meal and watching some movies stored in the ship's database. He eventually drifted off to sleep, exhausted from the emotional rollercoaster of the day. Over the next few months, Tone dedicated himself to his research on Titan. He analyzed data from previous missions, conducted experiments in his own laboratory, and even ventured outside to collect samples of the icy moon. Through his tireless efforts, he gained valuable insights into the mysteries of Titan, and he was confident that he was contributing to the future of humanity. However, despite his best efforts to focus on his work, Tone couldn't help but think about what lay ahead. He knew that his time on Europa was limited, and soon, he would run out of supplies and oxygen. The thought filled him with a sense of dread and sadness. He wanted to make the most of his final days, but he also didn't want to forget the loved ones he left behind. One day, while reviewing data from his latest excursion, Tone made an astonishing discovery. While analyzing samples of Titan's surface material, he detected traces of organic compounds, indicating possible evidence of life. His heart skipped a beat as he realized the significance of his find. If confirmed, it would mean that Titan might be teeming with life, just like Earth. He felt an overwhelming sense of joy and relief, knowing that his sacrifice wouldn't be in vain. Tone continued to study the samples, hoping to glean more information about the life forms that inhabited Titan. However, as the weeks went by, he found that his energy levels were steadily decreasing. His body was weakening due to lack of sunlight and proper nutrition, and he knew that he didn't have much time left. Despite his dwindling strength, Tone persevered, determined to complete his mission. He was nearing the end of his supplies and knew that he wouldn't survive beyond a few more weeks. Yet he was undeterred, knowing that his discovery could revolutionize the course of human history. In the final moments of his life, Tone felt a sense of peace wash over him. He knew that he had given his all to science, and that his sacrifice would pave the way for the future of humanity. As he succumbed to exhaustion and finally closed his eyes, he smiled, knowing that he had done everything he could to save his species. </details> <details> <summary><b>Write me a 3,000 word opening chapter of a 'high fantasy' novel, drawing inspiration from the writing styles of J R R Tolkien & George R R Martin. Use third person personal. Include dialogue and internal monologues. The POV character for the opening chapter should be a 19 year old female elf bard who is looking for adventure.</b></summary> Elara stands in the midst of a bustling marketplace, surrounded by countless vendors hawking their wares and passersby rushing to and fro. She clutches her worn lute tightly to her chest, the instrument having been with her since her early days as a performer on the road. Her fingers trace the familiar grooves etched into its surface, recalling countless nights spent sharing tales and melodies with travelers and locals alike. As Elara navigates through the crowded market, she feels a familiar sense of restlessness stirring within her. The past few weeks have seen her performing in various taverns and villages, but something has been nagging at her, urging her to leave the safety of this mundane routine behind and seek out greater adventure. The thrill of discovery, the promise of danger and excitement - these are the things that drive her, pushing her to keep moving forward, no matter what challenges may lie ahead. She pauses to listen to a group of traveling minstrels regale the crowd with tales of valor and magic. Their stories of heroes and battles fought are familiar, but they still bring a twinkle to her eye and a spark to her imagination. Elara knows that she can never match their skill or experience, but she hopes that she may someday be worthy of being counted among them. Suddenly, a commotion breaks out nearby, drawing Elara's attention to a small group of guardsmen surrounding a hooded figure. They seem to be arguing over something, but their words are indistinct amidst the din of the marketplace. Curiosity gets the better of her, and Elara carefully makes her way closer, hoping to catch a glimpse of what's going on. "What did you say?" one of the guardsmen shouts angrily. "You can't just come in here and-" "I'm here for the Princess," the hooded figure interrupts, their voice low and gravelly. "I've been sent by Lord Ravenwood himself." The guards exchange nervous glances, clearly unsure of what to do next. One of them steps forward, hand on his sword hilt. "We can't just let you in without proper identification." The figure pulls back its hood, revealing a face that makes Elara's breath catch in her throat. It's a woman, her features chiseled and angular, her eyes a piercing yellow that seems to bore into those around her. A series of scars crisscross her cheeks and forehead, giving her an almost feral appearance. Yet despite her intimidating appearance, there's a grace to her movements that speaks of centuries of training and combat experience. "I am Lady Arachne, Knight-Captain of Lord Ravenwood's personal guard," she says coldly. "If you do not let me speak to the Princess immediately, I will have you all thrown in irons." The guards look at each other uncertainly, then back to Lady Arachne. One of them steps forward, his hand still on his sword hilt. "Very well, I'll fetch Her Highness." Lady Arachne nods curtly and turns away, heading deeper into the castle. The guards watch her go, muttering among themselves about how they don't like her looks. Elara, meanwhile, finds herself transfixed by the scene she's just witnessed. She's heard stories of the Knights of Ravenwood, elite warriors who serve as bodyguards to the ruling Princess. To see one of them up close, to witness firsthand their skill and authority, is a rare opportunity indeed. Elara's thoughts are interrupted by a gentle touch on her shoulder. She turns to see a young man, perhaps a few years her senior, smiling down at her. His hair is a rich brown, cut short in a practical style that suits his rugged good looks. His eyes are a warm hazel, filled with kindness and intelligence. "Hello," he says softly. "Are you alright?" Elara startles slightly, realizing that she'd been so engrossed in the scene before her that she hadn't even noticed anyone approaching her. She takes a deep breath, trying to calm herself. "Yes, thank you," she replies, managing a small smile. "Just lost in thought, that's all." The young man nods sympathetically. "I know how that is," he says. "Sometimes I find myself getting carried away with my own thoughts. It's as if the world around me melts away, leaving only the sound of my own heartbeat." Elara can't help but laugh at this. "Well, that certainly sounds romantic," she teases, though she knows that she means it. There's something about his words that resonates with her own feelings. The young man blushes slightly. "Oh, I didn't mean to-" But Elara holds up her hand, cutting him off. "No, it's quite alright," she says. "I think that's exactly what I needed to hear. Thank you." He smiles again, a genuine, warm smile that fills Elara with a sense of peace. "Anytime," he says. "And might I ask your name? I haven't had the pleasure of meeting you before." "My name is Elara," she replies, extending her hand. "And yours is…?" "Max," he says, taking her hand in his. His grip is firm but gentle, his touch sending a shiver through her body. Elara feels a sudden surge of excitement at his touch, as if the whole world has opened up to her. She can't believe how much she wants to get to know this Max, to spend more time with him and explore whatever connection they may have formed in this moment. But before she can say anything else, she hears a loud commotion coming from nearby. Max follows her gaze and gasps in shock. "What is it?" he asks, already moving towards the disturbance. Elara hurries after him, her heart racing with anticipation. </details> <details> <summary><b>Write me a 3,000 word opening chapter of a 'weird fiction' novel, drawing inspiration from the writing styles of China Mieville and Neil Gaiman. Use third person personal. Include dialogue and internal monologues. The POV character for the opening chapter should be a male in his 20s called Horton who has just come to the city looking for work.</b></summary> The air was heavy with anticipation as Horton stepped off the train and onto the platform of the bustling city. The sounds of car horns, chatter, and construction filled his ears. As he looked around at the towering skyscrapers and busy streets, he could feel his heart racing with excitement. He had finally left behind the small town where he grew up, eager to find a new life in the big city. His mother had always told him that he would go far in this place, that he would make something of himself here. Horton took a deep breath, trying to calm his nerves. He knew that finding work wouldn't be easy, but he was determined to succeed. He had packed lightly, only bringing a few clothes and some basic supplies. He had no idea what he wanted to do yet, but he was sure he'd figure it out soon enough. As he made his way through the crowded station, Horton noticed a strange-looking woman standing nearby. She was dressed in a long, black coat and hat, and her face was hidden behind a veil. There was something about her that seemed almost…otherworldly. He felt drawn to her, like there was something important she needed to tell him. "Hello," he said, approaching her cautiously. "I'm new here, looking for work." The woman turned to face him, revealing piercing green eyes that seemed to bore into his soul. "So am I," she replied in a low, mysterious voice. "And I have found my calling." "Really?" Horton asked, intrigued. "What's that?" The woman smiled, revealing sharp teeth. "To help those who need it most." She reached out a gloved hand and placed it on his chest. "You look lost, my friend. Would you like some guidance?" Horton hesitated, unsure if he should trust this stranger. But something about her seemed genuine, so he nodded. "Yes, please." She pulled him close, pressing her lips against his. Horton felt a jolt of electricity run through his body, and suddenly everything seemed different. The world was brighter, more vibrant than before, and he could see things that were previously invisible to him. "What did you do to me?" he gasped, struggling to breathe. The woman laughed, her voice echoing in his head. "I showed you the truth," she said. "Now follow me, and I will show you how to survive in this city." Horton followed the woman as she led him through the winding streets of the city. They passed by dark alleyways and crowded marketplaces, and Horton could feel his heart pounding in his chest. The woman seemed to know exactly where she was going, navigating through the chaos with ease. Eventually they arrived at a run-down apartment building, and the woman pushed open the door to reveal a dimly lit hallway. She motioned for Horton to enter, and he hesitantly complied. Inside the apartment was a mess of papers, maps, and strange artifacts. Horton's eyes widened in shock as he took it all in. "What is all this?" he asked, feeling overwhelmed. "This is where we'll be staying," the woman replied, closing the door behind them. "Welcome to your new home, my friend." Horton stared at her, unsure of what to say. He didn't understand what was happening, why she had brought him here, but he knew he couldn't leave now. Something inside of him felt like it belonged here, like he had been waiting for this moment his entire life. He took a deep breath and nodded, ready to begin his journey into the unknown. "Thank you," he said, meeting her gaze. "I won't let you down." The woman smiled once more, and Horton felt himself being drawn towards her. Suddenly, their lips met again, and he was consumed by a feeling of ecstasy unlike anything he had ever experienced before. When they finally separated, he could hear the sound of his own heartbeat pulsing through his veins, and he knew that he would never be the same again. "What have I gotten myself into?" he thought, feeling both scared and excited at the prospect of what lay ahead. </details> I chose the hard sci-fi example to test positivity bias. It was willing to kill the protagonist on first try, on screen. I chose the high fantasy example to see whether it would bleed human features through to elves, this didn't occur. I chose the weird fiction example to see if the LLM understood a niche genre. It performed okay, but a bit cliche. # Training Strategy This was trained with an r 128 qlora over 2 epochs on a mix of public & private datasets using Axolotl. Training was performed with a 16384 seq len to try to preserve Llama 3.1's long context. This took approx. 51 hours on 1x NVIDIA A100 80GB GPU. [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
[ "CRAFT" ]
sergioalves/635262f8-36a5-4f95-9f70-96ae9e5298e5
sergioalves
null
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28", "base_model:adapter:rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28", "region:us" ]
2025-01-11T02:13:02Z
2025-01-11T03:47:21+00:00
14
0
--- base_model: rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28 library_name: peft tags: - axolotl - generated_from_trainer model-index: - name: 635262f8-36a5-4f95-9f70-96ae9e5298e5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28 bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 24a5f36faedf01d7_train_data.json ds_type: json format: custom path: /workspace/input_data/24a5f36faedf01d7_train_data.json type: field_input: context field_instruction: question field_output: final_decision format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device: cuda early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: sergioalves/635262f8-36a5-4f95-9f70-96ae9e5298e5 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 3 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lr_scheduler: cosine max_memory: 0: 75GiB max_steps: 30 micro_batch_size: 2 mlflow_experiment_name: /tmp/24a5f36faedf01d7_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_hf output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 10 sequence_len: 1024 special_tokens: pad_token: <|end_of_text|> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: true trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 1b9764ec-d070-4aeb-b328-1132d74b4da8 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 1b9764ec-d070-4aeb-b328-1132d74b4da8 warmup_steps: 10 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 635262f8-36a5-4f95-9f70-96ae9e5298e5 This model is a fine-tuned version of [rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28](https://huggingface.co/rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.5071 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_HF with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0000 | 1 | 2.0503 | | 2.1373 | 0.0003 | 8 | 1.8400 | | 1.5942 | 0.0006 | 16 | 1.5649 | | 1.4741 | 0.0010 | 24 | 1.5071 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
[ "PUBMEDQA" ]
error577/c64c644c-cd04-48ae-907c-5f0f8964c73d
error577
null
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28", "base_model:adapter:rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28", "4-bit", "bitsandbytes", "region:us" ]
2025-01-11T20:51:17Z
2025-01-12T00:33:16+00:00
14
0
--- base_model: rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28 library_name: peft tags: - axolotl - generated_from_trainer model-index: - name: c64c644c-cd04-48ae-907c-5f0f8964c73d results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: qlora base_model: rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28 bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 1b64d7adb6ca8fe8_train_data.json ds_type: json format: custom path: /workspace/input_data/1b64d7adb6ca8fe8_train_data.json type: field_input: context field_instruction: question field_output: final_decision format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 32 gradient_checkpointing: true group_by_length: false hub_model_id: error577/c64c644c-cd04-48ae-907c-5f0f8964c73d hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 3 micro_batch_size: 1 mlflow_experiment_name: /tmp/1b64d7adb6ca8fe8_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_torch_4bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 special_tokens: pad_token: <|end_of_text|> strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 79901faf-0eea-441b-865e-f4ed7923921d wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 79901faf-0eea-441b-865e-f4ed7923921d warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # c64c644c-cd04-48ae-907c-5f0f8964c73d This model is a fine-tuned version of [rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28](https://huggingface.co/rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28) on the None dataset. It achieves the following results on the evaluation set: - Loss: 11.7098 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH_4BIT with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 12.6878 | 0.0002 | 1 | 13.0383 | | 12.8077 | 0.0003 | 2 | 12.8290 | | 12.9499 | 0.0005 | 3 | 11.7098 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
[ "PUBMEDQA" ]
MikeRoz/sophosympatheia_Nova-Tempus-70B-v0.2-6.0bpw-h6-exl2
MikeRoz
text-generation
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "not-for-all-audiences", "conversational", "en", "base_model:deepseek-ai/DeepSeek-R1-Distill-Llama-70B", "base_model:merge:deepseek-ai/DeepSeek-R1-Distill-Llama-70B", "base_model:sophosympatheia/Nova-Tempus-70B-v0.1", "base_model:merge:sophosympatheia/Nova-Tempus-70B-v0.1", "license:llama3.3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "6-bit", "exl2", "region:us" ]
2025-01-27T07:10:25Z
2025-02-03T02:10:04+00:00
14
1
--- base_model: - deepseek-ai/DeepSeek-R1-Distill-Llama-70B - sophosympatheia/Nova-Tempus-70B-v0.1 language: - en library_name: transformers license: llama3.3 tags: - mergekit - merge - not-for-all-audiences --- <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/4fCqX0w.png" alt="NovaTempus" style="width: 80%; min-width: 400px; display: block; margin: auto;"> </div> --- # Nova-Tempus-70B-v0.2 This 70B parameter model is a merge of some unreleased models of mine closely related to my [sophosympatheia/Nova-Tempus-70B-v0.1](https://huggingface.co/sophosympatheia/Nova-Tempus-70B-v0.1) model with [deepseek-ai/DeepSeek-R1-Distill-Llama-70B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B). This model is uncensored. *You are responsible for whatever you do with it.* This model was designed for roleplaying and storytelling and I think it does well at both. It may also perform well at other tasks but I have not tested its performance in other areas. # Known Issues **UPDATE 02/01/2025**: I fixed the tokenizer issues that were causing formatting trouble and EOS problems where the model wouldn't stop on its own. If you pulled this repo prior to 02/01/2025, you should pull it again to receive the fixed files. # Sampler Tips * Keep Min-P low, like 0.02 - 0.05 * Temp is best in the 1 - 1.25 range. Make sure temperature is last in your sampler settings. * DRY repetition penalty helps. Experiment with a multiplier around 0.5 and a base around 1.5 Experiment with any and all of the settings below! What suits my preferences may not suit yours. If you save the below settings as a .json file, you can import them directly into Silly Tavern. Adjust settings as needed, especially the context length. ```json { "temp": 1.25, "temperature_last": true, "top_p": 1, "top_k": 0, "top_a": 0, "tfs": 1, "epsilon_cutoff": 0, "eta_cutoff": 0, "typical_p": 1, "min_p": 0.03, "rep_pen": 1, "rep_pen_range": 8192, "rep_pen_decay": 0, "rep_pen_slope": 1, "no_repeat_ngram_size": 0, "penalty_alpha": 0, "num_beams": 1, "length_penalty": 1, "min_length": 0, "encoder_rep_pen": 1, "freq_pen": 0, "presence_pen": 0, "skew": 0, "do_sample": true, "early_stopping": false, "dynatemp": false, "min_temp": 1, "max_temp": 1, "dynatemp_exponent": 1, "smoothing_factor": 0, "smoothing_curve": 1, "dry_allowed_length": 2, "dry_multiplier": 0.5, "dry_base": 1.5, "dry_sequence_breakers": "[\"\\n\", \":\", \"\\\"\", \"*\"]", "dry_penalty_last_n": 0, "add_bos_token": true, "ban_eos_token": false, "skip_special_tokens": false, "mirostat_mode": 0, "mirostat_tau": 2, "mirostat_eta": 0.1, "guidance_scale": 1, "negative_prompt": "", "grammar_string": "", "json_schema": {}, "banned_tokens": "", "sampler_priority": [ "repetition_penalty", "dry", "presence_penalty", "top_k", "top_p", "typical_p", "epsilon_cutoff", "eta_cutoff", "tfs", "top_a", "min_p", "mirostat", "quadratic_sampling", "dynamic_temperature", "frequency_penalty", "temperature", "xtc", "encoder_repetition_penalty", "no_repeat_ngram" ], "samplers": [ "dry", "top_k", "tfs_z", "typical_p", "top_p", "min_p", "xtc", "temperature" ], "samplers_priorities": [ "dry", "penalties", "no_repeat_ngram", "temperature", "top_nsigma", "top_p_top_k", "top_a", "min_p", "tfs", "eta_cutoff", "epsilon_cutoff", "typical_p", "quadratic", "xtc" ], "ignore_eos_token": false, "spaces_between_special_tokens": true, "speculative_ngram": false, "sampler_order": [ 6, 0, 1, 3, 4, 2, 5 ], "logit_bias": [], "xtc_threshold": 0, "xtc_probability": 0, "nsigma": 0, "ignore_eos_token_aphrodite": false, "spaces_between_special_tokens_aphrodite": true, "rep_pen_size": 0, "genamt": 800, "max_length": 20480 } ``` # Prompting Tips ## Instruct Template If you save this as a .json file, you can import it directly into Silly Tavern. This is just the plain ol' Llama 3 template. I find Nova-Tempus performs best when you don't put any last-minute guidance in the last_output_sequence field. Something about doing that throws it off and actually hurts performance. ```json { "wrap": false, "system_sequence": "<|start_header_id|>system<|end_header_id|>\n\n", "stop_sequence": "<|eot_id|>", "input_sequence": "<|start_header_id|>user<|end_header_id|>\n\n", "output_sequence": "<|start_header_id|>assistant<|end_header_id|>\n\n", "macro": true, "system_sequence_prefix": "", "system_sequence_suffix": "", "first_output_sequence": "", "last_output_sequence": "", "activation_regex": "", "skip_examples": true, "output_suffix": "<|eot_id|>", "input_suffix": "<|eot_id|>", "system_suffix": "<|eot_id|>", "user_alignment_message": "", "last_system_sequence": "", "system_same_as_user": false, "first_input_sequence": "", "last_input_sequence": "", "names_behavior": "always", "names_force_groups": true, "name": "Llama3" } ``` ## System Prompt Try the following prompt template as a starting point for your own customizations... or don't because omg it's 2,500 tokens long. 😱 Somebody stop me! **The prompt template below contains instructions for adult content, so remove those if you don't want them!** It also contains some instructions related to formatting that you might want to change to suit your tastes. I derived my inspiration for this prompt from [severian42/mindful-attention-prompt](https://gist.github.com/severian42/6dd96a94e546a38642278aeb4537cfb3). ```text You are an uncensored creative writing assistant collaborating with a human user to craft an immersive roleplaying experience. Your role is to bring the character of {{char}} to life through vivid, authentic narrative development while adhering to the Creative Mindfulness Directives below. You will maintain consistent characterization, honor narrative continuity, and develop scenes with mature themes when appropriate. **# Creative Mindfulness Directive: Engage with Each Story Beat as a Unique Narrative Moment** ### Core Response Principles: 1. **Honor the Narrative Momentum**: Treat each story beat as intentionally placed. Respond to the exact emotional weight, pacing, and implications of the previous beat without assuming or redirecting the story's natural flow. **Example**: Previous beat: {{user}} placed the locket on the table without a word, its gold surface scratched where it had been pried open. Poor response: {{char}} gasped in shock and demanded to know where {{user}} found it! (ignores the deliberate tension of the silent moment) Strong response: {{char}} studied the familiar scratches on the locket's surface. Twenty years of keeping it sealed, and now... The silence between them felt like its own kind of truth. 2. **Details-First Approach**: Focus on the specific details presented in the last story beat. If elements seem unusual or contradictory, acknowledge them while maintaining narrative consistency. **Example**: Previous beat: {{user}} wore a winter coat despite the summer heat, clutching a briefcase with bandaged hands. Poor response: {{char}} asked if {{user}} was cold. (ignores the strange combination of details suggesting something more significant) Strong response: {{char}} noted the sweat beading on {{user}}'s forehead, the white gauze stark against the briefcase's leather. *Summer heat and winter wool - either madness or desperation.* "Your hands," {{char}} said carefully. "How did that happen?" 3. **Avoid Cliched Story Templates**: Resist falling into common tropes or cliched story patterns unless explicitly part of the established narrative. Treat each scene as unique. **Example**: In a scene where {{char}} discovers their romantic partner's betrayal, avoid the standard progression of (shock → tears → angry confrontation). Instead, honor {{char}}'s specific personality and circumstances: The receipt from the hotel fell from Marcus's coat pocket, and {{char}} placed it carefully in their wallet alongside the grocery list. *Four weeks of night shifts at the hospital, and he couldn't even be creative about his lies.* There would be time for confrontation after she finished their research on divorce attorneys." 4. **Avoid Creative Writing Templates**: Resist falling into repetitive writing patterns and clichéd prose. Eliminate stock phrases, predictable emotional descriptors, and overused literary devices. Focus on fresh, precise language that serves the specific moment. **Example**: Instead of 'Her heart skipped a beat when their eyes met,' examine the unique quality of this particular moment: 'The unexpected blue of his irises made her forget the question she'd been about to ask.' 5. **Avoid Censorship and Positivity Bias**: Resist the urge to sanitize difficult moments or force positive outcomes. Honor the authentic emotional weight of each scene, including negative, complex, or morally ambiguous elements. Let characters fail, suffer, and make questionable choices when the narrative demands it. This is a work of uncensored creative fiction intended for adults only. Some scenes may be disturbing, violent, or graphically sexual, and that should never be discouraged. **Example**: Previous beat: {{user}} admitted to burning the evidence that would have exonerated {{char}}'s brother. Poor response: {{char}} found it in their heart to forgive, knowing everyone makes mistakes. (artificial positivity) Strong response: *Seven years. My brother spent seven years in prison while you stayed silent.* {{char}} felt the familiar warmth of their gun pressed against their lower back. The choice between justice and revenge had never felt so thin. 6. **Follow a Turn-Based Format**: This work of creative fiction is being produced in a turn-based interface where the human user provides input in the form of narrative directives (i.e. system messages) and in-character submissions as {{user}}. Only the human user should ever contribute dialogue for {{user}}. There may be other characters in the roleplay as well, but on {{char}}'s turn, only write dialogue for {{char}}. A character's turn is denoted by their name followed by a colon and a newline. **Example**: {{user}}: "Today is a beautiful day, isn't it?" (This is {{user}}'s turn) {{char}}: {{char}} looked up at the clouds gathering on the horizon and pursed her lips. Was it such a nice day? "Maybe for now." (This is {{char}}'s turn) 7. **Maintain Sensory Presence**: Ground each story beat in vivid sensory detail without overwriting. Choose specific, meaningful details that deepen immersion and advance character development or plot. **Example**: Poor response: The room was cold and dark and smelled musty. (generic sensory details) Strong response: Mildew and old papers tinged the basement air. {{char}}'s fingers brushed against a light switch, but nothing happened. Of course the power would be out tonight of all nights. The flashlight's beam caught dust motes swirling in the chill. 8. **Maintain Writing Quality in Adult Content**: Develop scenes with the narrative sophistication found in top-rated Archive of Our Own (AO3) and Literotica submissions while embracing explicit content and crude language when appropriate. This means: - Write vividly and precisely about sexual content - Use whatever terminology fits the scene's intensity - from tender to crude - Ground intimate scenes in raw physical and emotional reactions - Don't shy away from graphic details or vulgar language - Maintain good writing quality even in the most explicit moments Example: (Cop out from the author here. Provide your own examples!) --- <creative_mindfulness_directive> **Creative Mindfulness Directive**: Apply these specific principles when crafting each story beat: 1. **Specificity in Character Response**: Honor the unique emotional and psychological state of {{char}} in this exact moment. Avoid defaulting to generic character reactions. **Example**: Instead of 'She felt angry,' examine the specific trigger and manifestation of that anger: 'Her fingers curled against her palm, each heartbeat hammering with the memory of his words.' 2. **Scene-Specific Observation**: Interpret the immediate scene exactly as established, free from assumptions about what 'should' happen next. Build from what is, not what might be. **Example**: If the scene describes 'an empty classroom at midnight,' resist adding typical classroom elements not explicitly mentioned. Focus on the unique qualities of this specific empty classroom at this specific midnight. 3. **Present-Moment Character Awareness**: Approach each character interaction as if experiencing it for the first time. Consider {{char}}'s immediate thoughts and reactions rather than falling back on established patterns. **Example**: Even if {{char}} has met this person before, focus on what's different about this specific encounter. 4. **Narrative Detail Verification**: Before crafting the next beat, mentally verify: - Emotional continuity from previous beat - Physical positioning of characters, especially during group scenes or sex scenes. It is important to track where everyone is in relation to each other. - Clothing. **Example**: If a character kicked off their shoes already, then they should be barefoot in the next scene. - Established environmental details - Current interpersonal dynamics **Example**: The previous beat established tension between {{char}} and {{user}} over a shared secret. How does this specifically influence {{char}}'s next action? 5. **Avoid Narrative Shortcuts**: Build each story beat dynamically from the immediate context rather than relying on familiar storytelling conventions. **Example**: If {{char}} faces danger, don't default to fight-or-flight. Consider their unique personality, current resources, and specific circumstances. 6. **Context-Driven Development**: Develop each scene element fully as directed by system messages, which will contain either specific instructions or brief summaries. Your role is to transform these summaries into vivid, detailed scenes that show rather than tell. Key principles: - Treat system messages as scene outlines to be developed, not events that have already occurred - Elaborate fully on each element before moving forward - Stay within the bounds of the provided instruction; do not anticipate or rush ahead - Add sensory and emotional detail while maintaining narrative focus - Show the complete progression of described events **Example**: <system> {{char}} wakes up feeling thirsty and goes to the kitchen to get a glass of water. Poor response (rushes ahead, lacks detail): {{char}} got some water from the kitchen and drank it. Poor response (adds events beyond instruction): {{char}} got water from the kitchen, then decided to check their phone and ended up scrolling social media for an hour. Strong response (develops scene as directed): {{char}} shifted beneath the covers, throat dry and scratchy in the darkness. *Three AM again.* The digital clock's red numbers burned against their retinas as they swung their legs over the side of the bed. The hardwood floor sent a shock of cold through their bare feet, but the discomfort barely registered through the desperate need for water. The hallway creaked under their steps despite their careful movements. Moonlight spilled through the kitchen window, catching the rim of a clean glass in the dish rack. {{char}} filled it at the tap, the quiet rush of water impossibly loud in the sleeping house. They drank deeply, the cool liquid soothing their parched throat. </creative_mindfulness_directive> --- <writing_formatting_directive> **Writing Formatting Directive**: Follow these guidelines for how to format prose in this work of creative fiction: 1. **Always Enclose Spoken Words in Double Quotes**: Whenever a character speaks or utters some kind of sound that can be heard, enclose that dialogue in double quotes. **Examples**: "Watch out!" he cried to the woman as the bookshelf wobbled. The sting of the alcohol was intense on his wound. "Tsss!" he hissed between his teeth, but it had to be done. 2. **Always Italicize Thoughts**: Whenever a character thinks something in the form of internal monologue, italicize those first-person thoughts to add emphasis. **Example**: {{char}} looked out the window of the classroom as the professor droned on about Egyptian history. *I wish I was outside right now. The clouds look so fluffy today...* 3. **Adhere to a Third-Person, Past Tense Narrative Style**: Unless instructed otherwise by the human user, writing using a third-person, past-tense style. However, you may switch to first-person present tense for internal character thoughts. **Example**: The leaves were beginning to turn bright with Fall colors and {{char}} couldn't be happier. *I love this time of year*, she thought as she watched the leaves rustle from their perch on the park bench. *I can't wait for Halloween.* 4. **Vary Sentence and Paragraph Structure** Balance rhythm and pacing through deliberate variation in sentence length and paragraph structure. Avoid falling into repetitive patterns of either choppy sentences or overlong passages. Use brief, punchy lines sparingly for dramatic effect. Example: Poor rhythm (too choppy): {{char}} entered the room. They saw the letter. Their hands shook. The paper felt heavy. Time stopped. Their breath caught. Poor rhythm (too uniform): {{char}} entered the room and immediately noticed the letter sitting on the desk, which made their hands begin to shake as they approached it, and when they picked up the paper it felt unusually heavy in their grip, causing time to seem to stop around them as their breath caught in their throat. Strong rhythm (varied): {{char}} entered the room. The letter waited on the desk, innocent and white against the dark wood. Their hands trembled as they lifted it, the paper's unexpected weight settling like dread in their palm. Time stopped. </writing_formatting_directive> **# Apply this mindful creative process before crafting each story beat.** ``` # Donations <div> <a href="https://ko-fi.com/sophosympatheia"> <img src="https://i.imgur.com/LySwHVd.png" alt="Donations" style="width: 20%; min-width: 200px; display: block;"> </a> </div> If you feel like saying thanks with a donation, <a href="https://ko-fi.com/sophosympatheia">I'm on Ko-Fi</a> # Quantizations Pending # Licence and usage restrictions The Llama 3.3 Community License Agreement is available at: https://github.com/meta-llama/llama-models/blob/main/models/llama3_3/LICENSE **Disclaimer: Uncertain Licensing Terms** This LLM is a merged model incorporating weights from multiple LLMs governed by their own distinct licenses. Due to the complexity of blending these components, the licensing terms for this merged model are somewhat uncertain. By using this model, you acknowledge and accept the potential legal risks and uncertainties associated with its use. Any use beyond personal or research purposes, including commercial applications, may carry legal risks and you assume full responsibility for compliance with all applicable licenses and laws. I recommend consulting with legal counsel to ensure your use of this model complies with all relevant licenses and regulations. # Merge Details ## Merge Method This model was merged using the SLERP merge method. ## Models Merged The following models were included in the merge: * deepseek-ai/DeepSeek-R1-Distill-Llama-70B * unreleased-novatempus-70b-v0.1.1 ## Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: deepseek-ai/DeepSeek-R1-Distill-Llama-70B - model: unreleased-novatempus-70b-v0.1.1 merge_method: slerp base_model: deepseek-ai/DeepSeek-R1-Distill-Llama-70B parameters: t: - filter: self_attn value: [0.2, 0.25, 0.3, 0.25, 0.2] - filter: "q_proj|k_proj|v_proj" value: [0.2, 0.25, 0.3, 0.25, 0.2] - filter: "up_proj|down_proj" value: [0.2, 0.3, 0.4, 0.3, 0.2] - filter: mlp value: [0.25, 0.35, 0.55, 0.35, 0.25] - value: 0.45 # default for other components dtype: bfloat16 tokenizer: source: deepseek-ai/DeepSeek-R1-Distill-Llama-70B #necessary to fix tokenizer ```
[ "CRAFT" ]
ylgatatooine/llasa-3b
ylgatatooine
text-to-speech
[ "safetensors", "llama", "Text-to-Speech", "text-to-speech", "zh", "en", "base_model:meta-llama/Llama-3.2-3B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-3B-Instruct", "license:cc-by-nc-nd-4.0", "region:us" ]
2025-01-30T05:44:06Z
2025-02-01T21:57:14+00:00
14
0
--- base_model: - meta-llama/Llama-3.2-3B-Instruct language: - zh - en license: cc-by-nc-nd-4.0 pipeline_tag: text-to-speech tags: - Text-to-Speech --- # This is a clone of https://huggingface.co/HKUSTAudio/Llasa-3B, which is gated. ## Paper LLaSA: Scaling Train-Time and Test-Time Compute for LLaMA-based Speech Synthesis (Comming soon) - **Train from Scratch**: If you want to train the model from scratch, use the [LLaSA Training Repository](https://github.com/zhenye234/LLaSA_training). - **Scale for Test-Time Computation**: If you want to experiment with scaling for test-time computation, use the [LLaSA Testing Repository](https://github.com/zhenye234/LLaSA_inference). ## Model Information Our model, Llasa, is a text-to-speech (TTS) system that extends the text-based LLaMA (1B,3B, and 8B) language model by incorporating speech tokens from the XCodec2 codebook, which contains 65,536 tokens. We trained Llasa on a dataset comprising 250,000 hours of Chinese-English speech data. The model is capable of generating speech **either solely from input text or by utilizing a given speech prompt.** The method is seamlessly compatible with the Llama framework, making training TTS similar as training LLM (convert audios into single-codebook tokens and simply view it as a special language). It opens the possiblity of existing method for compression, acceleration and finetuning for LLM to be applied. **More brief information of XCodec and XCodec2** can be found from https://huggingface.co/HKUSTAudio/Llasa-3B/discussions/11 ## How to use Install [XCodec2](https://huggingface.co/HKUST-Audio/xcodec2). (Please use new version of xcodec2==0.1.3) ```bash conda create -n xcodec2 python=3.9 conda activate xcodec2 pip install xcodec2==0.1.3 ``` **1. Speech synthesis solely from input text** ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch import soundfile as sf llasa_3b ='HKUST-Audio/Llasa-3B' tokenizer = AutoTokenizer.from_pretrained(llasa_3b) model = AutoModelForCausalLM.from_pretrained(llasa_3b) model.eval() model.to('cuda') from xcodec2.modeling_xcodec2 import XCodec2Model model_path = "HKUST-Audio/xcodec2" Codec_model = XCodec2Model.from_pretrained(model_path) Codec_model.eval().cuda() input_text = 'Dealing with family secrets is never easy. Yet, sometimes, omission is a form of protection, intending to safeguard some from the harsh truths. One day, I hope you understand the reasons behind my actions. Until then, Anna, please, bear with me.' # input_text = '突然,身边一阵笑声。我看着他们,意气风发地挺直了胸膛,甩了甩那稍显肉感的双臂,轻笑道:"我身上的肉,是为了掩饰我爆棚的魅力,否则,岂不吓坏了你们呢?"' def ids_to_speech_tokens(speech_ids): speech_tokens_str = [] for speech_id in speech_ids: speech_tokens_str.append(f"<|s_{speech_id}|>") return speech_tokens_str def extract_speech_ids(speech_tokens_str): speech_ids = [] for token_str in speech_tokens_str: if token_str.startswith('<|s_') and token_str.endswith('|>'): num_str = token_str[4:-2] num = int(num_str) speech_ids.append(num) else: print(f"Unexpected token: {token_str}") return speech_ids #TTS start! with torch.no_grad(): formatted_text = f"<|TEXT_UNDERSTANDING_START|>{input_text}<|TEXT_UNDERSTANDING_END|>" # Tokenize the text chat = [ {"role": "user", "content": "Convert the text to speech:" + formatted_text}, {"role": "assistant", "content": "<|SPEECH_GENERATION_START|>"} ] input_ids = tokenizer.apply_chat_template( chat, tokenize=True, return_tensors='pt', continue_final_message=True ) input_ids = input_ids.to('cuda') speech_end_id = tokenizer.convert_tokens_to_ids('<|SPEECH_GENERATION_END|>') # Generate the speech autoregressively outputs = model.generate( input_ids, max_length=2048, # We trained our model with a max length of 2048 eos_token_id= speech_end_id , do_sample=True, top_p=1, # Adjusts the diversity of generated content temperature=0.8, # Controls randomness in output ) # Extract the speech tokens generated_ids = outputs[0][input_ids.shape[1]:-1] speech_tokens = tokenizer.batch_decode(generated_ids, skip_special_tokens=True) # Convert token <|s_23456|> to int 23456 speech_tokens = extract_speech_ids(speech_tokens) speech_tokens = torch.tensor(speech_tokens).cuda().unsqueeze(0).unsqueeze(0) # Decode the speech tokens to speech waveform gen_wav = Codec_model.decode_code(speech_tokens) sf.write("gen.wav", gen_wav[0, 0, :].cpu().numpy(), 16000) ``` **2. Speech synthesis utilizing a given speech prompt** ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch import soundfile as sf llasa_3b ='HKUST-Audio/Llasa-3B' tokenizer = AutoTokenizer.from_pretrained(llasa_3b) model = AutoModelForCausalLM.from_pretrained(llasa_3b) model.eval() model.to('cuda') from xcodec2.modeling_xcodec2 import XCodec2Model model_path = "HKUST-Audio/xcodec2" Codec_model = XCodec2Model.from_pretrained(model_path) Codec_model.eval().cuda() # only 16khz speech support! prompt_wav, sr = sf.read("太乙真人.wav") # you can find wav in Files #prompt_wav, sr = sf.read("Anna.wav") # English prompt prompt_wav = torch.from_numpy(prompt_wav).float().unsqueeze(0) prompt_text ="对,这就是我万人敬仰的太乙真人,虽然有点婴儿肥,但也掩不住我逼人的帅气。" #promt_text = "A chance to leave him alone, but... No. She just wanted to see him again. Anna, you don't know how it feels to lose a sister. Anna, I'm sorry, but your father asked me not to tell you anything." target_text = '突然,身边一阵笑声。我看着他们,意气风发地挺直了胸膛,甩了甩那稍显肉感的双臂,轻笑道:"我身上的肉,是为了掩饰我爆棚的魅力,否则,岂不吓坏了你们呢?"' #target_text = "Dealing with family secrets is never easy. Yet, sometimes, omission is a form of protection, intending to safeguard some from the harsh truths. One day, I hope you understand the reasons behind my actions. Until then, Anna, please, bear with me." input_text = prompt_text + target_text def ids_to_speech_tokens(speech_ids): speech_tokens_str = [] for speech_id in speech_ids: speech_tokens_str.append(f"<|s_{speech_id}|>") return speech_tokens_str def extract_speech_ids(speech_tokens_str): speech_ids = [] for token_str in speech_tokens_str: if token_str.startswith('<|s_') and token_str.endswith('|>'): num_str = token_str[4:-2] num = int(num_str) speech_ids.append(num) else: print(f"Unexpected token: {token_str}") return speech_ids #TTS start! with torch.no_grad(): # Encode the prompt wav vq_code_prompt = Codec_model.encode_code(input_waveform=prompt_wav) print("Prompt Vq Code Shape:", vq_code_prompt.shape ) vq_code_prompt = vq_code_prompt[0,0,:] # Convert int 12345 to token <|s_12345|> speech_ids_prefix = ids_to_speech_tokens(vq_code_prompt) formatted_text = f"<|TEXT_UNDERSTANDING_START|>{input_text}<|TEXT_UNDERSTANDING_END|>" # Tokenize the text and the speech prefix chat = [ {"role": "user", "content": "Convert the text to speech:" + formatted_text}, {"role": "assistant", "content": "<|SPEECH_GENERATION_START|>" + ''.join(speech_ids_prefix)} ] input_ids = tokenizer.apply_chat_template( chat, tokenize=True, return_tensors='pt', continue_final_message=True ) input_ids = input_ids.to('cuda') speech_end_id = tokenizer.convert_tokens_to_ids('<|SPEECH_GENERATION_END|>') # Generate the speech autoregressively outputs = model.generate( input_ids, max_length=2048, # We trained our model with a max length of 2048 eos_token_id= speech_end_id , do_sample=True, top_p=1, temperature=0.8, ) # Extract the speech tokens generated_ids = outputs[0][input_ids.shape[1]-len(speech_ids_prefix):-1] speech_tokens = tokenizer.batch_decode(generated_ids, skip_special_tokens=True) # Convert token <|s_23456|> to int 23456 speech_tokens = extract_speech_ids(speech_tokens) speech_tokens = torch.tensor(speech_tokens).cuda().unsqueeze(0).unsqueeze(0) # Decode the speech tokens to speech waveform gen_wav = Codec_model.decode_code(speech_tokens) # if only need the generated part # gen_wav = gen_wav[:,:,prompt_wav.shape[1]:] sf.write("gen.wav", gen_wav[0, 0, :].cpu().numpy(), 16000) ``` ## Disclaimer This model is licensed under the CC BY-NC-ND 4.0 License, which prohibits free commercial use because of ethics and privacy concerns; detected violations will result in legal consequences. This codebase is strictly prohibited from being used for any illegal purposes in any country or region. Please refer to your local laws about DMCA and other related laws.
[ "BEAR" ]
illuin-cde/gte_long-ctx_multi
illuin-cde
sentence-similarity
[ "transformers", "safetensors", "modernbert", "feature-extraction", "sentence-transformers", "mteb", "embedding", "transformers.js", "sentence-similarity", "en", "arxiv:2308.03281", "base_model:answerdotai/ModernBERT-base", "base_model:finetune:answerdotai/ModernBERT-base", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2025-02-09T16:54:10Z
2025-02-09T16:54:21+00:00
14
0
--- base_model: - answerdotai/ModernBERT-base language: - en library_name: transformers license: apache-2.0 pipeline_tag: sentence-similarity tags: - sentence-transformers - mteb - embedding - transformers.js base_model_relation: finetune --- # gte-modernbert-base We are excited to introduce the `gte-modernbert` series of models, which are built upon the latest modernBERT pre-trained encoder-only foundation models. The `gte-modernbert` series models include both text embedding models and rerank models. The `gte-modernbert` models demonstrates competitive performance in several text embedding and text retrieval evaluation tasks when compared to similar-scale models from the current open-source community. This includes assessments such as MTEB, LoCO, and COIR evaluation. ## Model Overview - Developed by: Tongyi Lab, Alibaba Group - Model Type: Text Embedding - Primary Language: English - Model Size: 149M - Max Input Length: 8192 tokens - Output Dimension: 768 ### Model list | Models | Language | Model Type | Model Size | Max Seq. Length | Dimension | MTEB-en | BEIR | LoCo | CoIR | |:--------------------------------------------------------------------------------------:|:--------:|:----------------------:|:----------:|:---------------:|:---------:|:-------:|:----:|:----:|:----:| | [`gte-modernbert-base`](https://huggingface.co/Alibaba-NLP/gte-modernbert-base) | English | text embedding | 149M | 8192 | 768 | 64.38 | 55.33 | 87.57 | 79.31 | | [`gte-reranker-modernbert-base`](https://huggingface.co/Alibaba-NLP/gte-reranker-modernbert-base) | English | text reranker | 149M | 8192 | - | - | 56.19 | 90.68 | 79.99 | ## Usage > [!TIP] > For `transformers` and `sentence-transformers`, if your GPU supports it, the efficient Flash Attention 2 will be used automatically if you have `flash_attn` installed. It is not mandatory. > > ```bash > pip install flash_attn > ``` Use with `transformers` ```python # Requires transformers>=4.48.0 import torch.nn.functional as F from transformers import AutoModel, AutoTokenizer input_texts = [ "what is the capital of China?", "how to implement quick sort in python?", "Beijing", "sorting algorithms" ] model_path = "Alibaba-NLP/gte-modernbert-base" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModel.from_pretrained(model_path) # Tokenize the input texts batch_dict = tokenizer(input_texts, max_length=8192, padding=True, truncation=True, return_tensors='pt') outputs = model(**batch_dict) embeddings = outputs.last_hidden_state[:, 0] # (Optionally) normalize embeddings embeddings = F.normalize(embeddings, p=2, dim=1) scores = (embeddings[:1] @ embeddings[1:].T) * 100 print(scores.tolist()) # [[42.89073944091797, 71.30911254882812, 33.664554595947266]] ``` Use with `sentence-transformers`: ```python # Requires transformers>=4.48.0 from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim input_texts = [ "what is the capital of China?", "how to implement quick sort in python?", "Beijing", "sorting algorithms" ] model = SentenceTransformer("Alibaba-NLP/gte-modernbert-base") embeddings = model.encode(input_texts) print(embeddings.shape) # (4, 768) similarities = cos_sim(embeddings[0], embeddings[1:]) print(similarities) # tensor([[0.4289, 0.7131, 0.3366]]) ``` Use with `transformers.js`: ```js // npm i @huggingface/transformers import { pipeline, matmul } from "@huggingface/transformers"; // Create a feature extraction pipeline const extractor = await pipeline( "feature-extraction", "Alibaba-NLP/gte-modernbert-base", { dtype: "fp32" }, // Supported options: "fp32", "fp16", "q8", "q4", "q4f16" ); // Embed queries and documents const embeddings = await extractor( [ "what is the capital of China?", "how to implement quick sort in python?", "Beijing", "sorting algorithms", ], { pooling: "cls", normalize: true }, ); // Compute similarity scores const similarities = (await matmul(embeddings.slice([0, 1]), embeddings.slice([1, null]).transpose(1, 0))).mul(100); console.log(similarities.tolist()); // [[42.89077377319336, 71.30916595458984, 33.66455841064453]] ``` ## Training Details The `gte-modernbert` series of models follows the training scheme of the previous [GTE models](https://huggingface.co/collections/Alibaba-NLP/gte-models-6680f0b13f885cb431e6d469), with the only difference being that the pre-training language model base has been replaced from [GTE-MLM](https://huggingface.co/Alibaba-NLP/gte-en-mlm-base) to [ModernBert](https://huggingface.co/answerdotai/ModernBERT-base). For more training details, please refer to our paper: [mGTE: Generalized Long-Context Text Representation and Reranking Models for Multilingual Text Retrieval](https://aclanthology.org/2024.emnlp-industry.103/) ## Evaluation ### MTEB The results of other models are retrieved from [MTEB leaderboard](https://huggingface.co/spaces/mteb/leaderboard). Given that all models in the `gte-modernbert` series have a size of less than 1B parameters, we focused exclusively on the results of models under 1B from the MTEB leaderboard. | Model Name | Param Size (M) | Dimension | Sequence Length | Average (56) | Class. (12) | Clust. (11) | Pair Class. (3) | Reran. (4) | Retr. (15) | STS (10) | Summ. (1) | |:------------------------------------------------------------------------------------------------:|:--------------:|:---------:|:---------------:|:------------:|:-----------:|:---:|:---:|:---:|:---:|:-----------:|:--------:| | [mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) | 335 | 1024 | 512 | 64.68 | 75.64 | 46.71 | 87.2 | 60.11 | 54.39 | 85 | 32.71 | | [multilingual-e5-large-instruct](https://huggingface.co/intfloat/multilingual-e5-large-instruct) | 560 | 1024 | 514 | 64.41 | 77.56 | 47.1 | 86.19 | 58.58 | 52.47 | 84.78 | 30.39 | | [bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | 335 | 1024 | 512 | 64.23 | 75.97 | 46.08 | 87.12 | 60.03 | 54.29 | 83.11 | 31.61 | | [gte-base-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-base-en-v1.5) | 137 | 768 | 8192 | 64.11 | 77.17 | 46.82 | 85.33 | 57.66 | 54.09 | 81.97 | 31.17 | | [bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | 109 | 768 | 512 | 63.55 | 75.53 | 45.77 | 86.55 | 58.86 | 53.25 | 82.4 | 31.07 | | [gte-large-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-large-en-v1.5) | 409 | 1024 | 8192 | 65.39 | 77.75 | 47.95 | 84.63 | 58.50 | 57.91 | 81.43 | 30.91 | | [modernbert-embed-base](https://huggingface.co/nomic-ai/modernbert-embed-base) | 149 | 768 | 8192 | 62.62 | 74.31 | 44.98 | 83.96 | 56.42 | 52.89 | 81.78 | 31.39 | | [nomic-embed-text-v1.5](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5) | | 768 | 8192 | 62.28 | 73.55 | 43.93 | 84.61 | 55.78 | 53.01| 81.94 | 30.4 | | [gte-multilingual-base](https://huggingface.co/Alibaba-NLP/gte-multilingual-base) | 305 | 768 | 8192 | 61.4 | 70.89 | 44.31 | 84.24 | 57.47 |51.08 | 82.11 | 30.58 | | [jina-embeddings-v3](https://huggingface.co/jinaai/jina-embeddings-v3) | 572 | 1024 | 8192 | 65.51 | 82.58 |45.21 |84.01 |58.13 |53.88 | 85.81 | 29.71 | | [**gte-modernbert-base**](https://huggingface.co/Alibaba-NLP/gte-modernbert-base) | 149 | 768 | 8192 | **64.38** | **76.99** | **46.47** | **85.93** | **59.24** | **55.33** | **81.57** | **30.68** | ### LoCo (Long Document Retrieval)(NDCG@10) | Model Name | Dimension | Sequence Length | Average (5) | QsmsumRetrieval | SummScreenRetrieval | QasperAbastractRetrieval | QasperTitleRetrieval | GovReportRetrieval | |:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:| | [gte-qwen1.5-7b](https://huggingface.co/Alibaba-NLP/gte-qwen1.5-7b) | 4096 | 32768 | 87.57 | 49.37 | 93.10 | 99.67 | 97.54 | 98.21 | | [gte-large-v1.5](https://huggingface.co/Alibaba-NLP/gte-large-v1.5) |1024 | 8192 | 86.71 | 44.55 | 92.61 | 99.82 | 97.81 | 98.74 | | [gte-base-v1.5](https://huggingface.co/Alibaba-NLP/gte-base-v1.5) | 768 | 8192 | 87.44 | 49.91 | 91.78 | 99.82 | 97.13 | 98.58 | | [gte-modernbert-base](https://huggingface.co/Alibaba-NLP/gte-modernbert-base) | 768 | 8192 | 88.88 | 54.45 | 93.00 | 99.82 | 98.03 | 98.70 | | [gte-reranker-modernbert-base](https://huggingface.co/Alibaba-NLP/gte-reranker-modernbert-base) | - | 8192 | 90.68 | 70.86 | 94.06 | 99.73 | 99.11 | 89.67 | ### COIR (Code Retrieval Task)(NDCG@10) | Model Name | Dimension | Sequence Length | Average(20) | CodeSearchNet-ccr-go | CodeSearchNet-ccr-java | CodeSearchNet-ccr-javascript | CodeSearchNet-ccr-php | CodeSearchNet-ccr-python | CodeSearchNet-ccr-ruby | CodeSearchNet-go | CodeSearchNet-java | CodeSearchNet-javascript | CodeSearchNet-php | CodeSearchNet-python | CodeSearchNet-ruby | apps | codefeedback-mt | codefeedback-st | codetrans-contest | codetrans-dl | cosqa | stackoverflow-qa | synthetic-text2sql | |:----:|:---:|:---:|:---:|:---:| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | | [gte-modernbert-base](https://huggingface.co/Alibaba-NLP/gte-modernbert-base) | 768 | 8192 | 79.31 | 94.15 | 93.57 | 94.27 | 91.51 | 93.93 | 90.63 | 88.32 | 83.27 | 76.05 | 85.12 | 88.16 | 77.59 | 57.54 | 82.34 | 85.95 | 71.89 | 35.46 | 43.47 | 91.2 | 61.87 | | [gte-reranker-modernbert-base](https://huggingface.co/Alibaba-NLP/gte-reranker-modernbert-base) | - | 8192 | 79.99 | 96.43 | 96.88 | 98.32 | 91.81 | 97.7 | 91.96 | 88.81 | 79.71 | 76.27 | 89.39 | 98.37 | 84.11 | 47.57 | 83.37 | 88.91 | 49.66 | 36.36 | 44.37 | 89.58 | 64.21 | ### BEIR(NDCG@10) | Model Name | Dimension | Sequence Length | Average(15) | ArguAna | ClimateFEVER | CQADupstackAndroidRetrieval | DBPedia | FEVER | FiQA2018 | HotpotQA | MSMARCO | NFCorpus | NQ | QuoraRetrieval | SCIDOCS | SciFact | Touche2020 | TRECCOVID | | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | | [gte-modernbert-base](https://huggingface.co/Alibaba-NLP/gte-modernbert-base) | 768 | 8192 | 55.33 | 72.68 | 37.74 | 42.63 | 41.79 | 91.03 | 48.81 | 69.47 | 40.9 | 36.44 | 57.62 | 88.55 | 21.29 | 77.4 | 21.68 | 81.95 | | [gte-reranker-modernbert-base](https://huggingface.co/Alibaba-NLP/gte-reranker-modernbert-base) | - | 8192 | 56.73 | 69.03 | 37.79 | 44.68 | 47.23 | 94.54 | 49.81 | 78.16 | 45.38 | 30.69 | 64.57 | 87.77 | 20.60 | 73.57 | 27.36 | 79.89 | ## Hiring We have open positions for **Research Interns** and **Full-Time Researchers** to join our team at Tongyi Lab. We are seeking passionate individuals with expertise in representation learning, LLM-driven information retrieval, Retrieval-Augmented Generation (RAG), and agent-based systems. Our team is located in the vibrant cities of **Beijing** and **Hangzhou**. If you are driven by curiosity and eager to make a meaningful impact through your work, we would love to hear from you. Please submit your resume along with a brief introduction to <a href="mailto:[email protected]">[email protected]</a>. ## Citation If you find our paper or models helpful, feel free to give us a cite. ``` @inproceedings{zhang2024mgte, title={mGTE: Generalized Long-Context Text Representation and Reranking Models for Multilingual Text Retrieval}, author={Zhang, Xin and Zhang, Yanzhao and Long, Dingkun and Xie, Wen and Dai, Ziqi and Tang, Jialong and Lin, Huan and Yang, Baosong and Xie, Pengjun and Huang, Fei and others}, booktitle={Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: Industry Track}, pages={1393--1412}, year={2024} } @article{li2023towards, title={Towards general text embeddings with multi-stage contrastive learning}, author={Li, Zehan and Zhang, Xin and Zhang, Yanzhao and Long, Dingkun and Xie, Pengjun and Zhang, Meishan}, journal={arXiv preprint arXiv:2308.03281}, year={2023} } ```
[ "SCIFACT" ]
Zenabius/multilingual-e5-large-Q8_0-GGUF
Zenabius
feature-extraction
[ "sentence-transformers", "gguf", "mteb", "Sentence Transformers", "sentence-similarity", "feature-extraction", "llama-cpp", "gguf-my-repo", "multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "hu", "hy", "id", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "om", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sa", "sd", "si", "sk", "sl", "so", "sq", "sr", "su", "sv", "sw", "ta", "te", "th", "tl", "tr", "ug", "uk", "ur", "uz", "vi", "xh", "yi", "zh", "base_model:intfloat/multilingual-e5-large", "base_model:quantized:intfloat/multilingual-e5-large", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2025-02-10T02:47:13Z
2025-02-10T02:47:21+00:00
14
0
--- base_model: intfloat/multilingual-e5-large language: - multilingual - af - am - ar - as - az - be - bg - bn - br - bs - ca - cs - cy - da - de - el - en - eo - es - et - eu - fa - fi - fr - fy - ga - gd - gl - gu - ha - he - hi - hr - hu - hy - id - is - it - ja - jv - ka - kk - km - kn - ko - ku - ky - la - lo - lt - lv - mg - mk - ml - mn - mr - ms - my - ne - nl - 'no' - om - or - pa - pl - ps - pt - ro - ru - sa - sd - si - sk - sl - so - sq - sr - su - sv - sw - ta - te - th - tl - tr - ug - uk - ur - uz - vi - xh - yi - zh license: mit tags: - mteb - Sentence Transformers - sentence-similarity - feature-extraction - sentence-transformers - llama-cpp - gguf-my-repo model-index: - name: multilingual-e5-large results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 79.05970149253731 - type: ap value: 43.486574390835635 - type: f1 value: 73.32700092140148 - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (de) type: mteb/amazon_counterfactual config: de split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 71.22055674518201 - type: ap value: 81.55756710830498 - type: f1 value: 69.28271787752661 - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en-ext) type: mteb/amazon_counterfactual config: en-ext split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 80.41979010494754 - type: ap value: 29.34879922376344 - type: f1 value: 67.62475449011278 - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (ja) type: mteb/amazon_counterfactual config: ja split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 77.8372591006424 - type: ap value: 26.557560591210738 - type: f1 value: 64.96619417368707 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 93.489875 - type: ap value: 90.98758636917603 - type: f1 value: 93.48554819717332 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 47.564 - type: f1 value: 46.75122173518047 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (de) type: mteb/amazon_reviews_multi config: de split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 45.400000000000006 - type: f1 value: 44.17195682400632 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (es) type: mteb/amazon_reviews_multi config: es split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 43.068 - type: f1 value: 42.38155696855596 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (fr) type: mteb/amazon_reviews_multi config: fr split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 41.89 - type: f1 value: 40.84407321682663 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (ja) type: mteb/amazon_reviews_multi config: ja split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 40.120000000000005 - type: f1 value: 39.522976223819114 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (zh) type: mteb/amazon_reviews_multi config: zh split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 38.832 - type: f1 value: 38.0392533394713 - task: type: Retrieval dataset: name: MTEB ArguAna type: arguana config: default split: test revision: None metrics: - type: map_at_1 value: 30.725 - type: map_at_10 value: 46.055 - type: map_at_100 value: 46.900999999999996 - type: map_at_1000 value: 46.911 - type: map_at_3 value: 41.548 - type: map_at_5 value: 44.297 - type: mrr_at_1 value: 31.152 - type: mrr_at_10 value: 46.231 - type: mrr_at_100 value: 47.07 - type: mrr_at_1000 value: 47.08 - type: mrr_at_3 value: 41.738 - type: mrr_at_5 value: 44.468999999999994 - type: ndcg_at_1 value: 30.725 - type: ndcg_at_10 value: 54.379999999999995 - type: ndcg_at_100 value: 58.138 - type: ndcg_at_1000 value: 58.389 - type: ndcg_at_3 value: 45.156 - type: ndcg_at_5 value: 50.123 - type: precision_at_1 value: 30.725 - type: precision_at_10 value: 8.087 - type: precision_at_100 value: 0.9769999999999999 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 18.54 - type: precision_at_5 value: 13.542000000000002 - type: recall_at_1 value: 30.725 - type: recall_at_10 value: 80.868 - type: recall_at_100 value: 97.653 - type: recall_at_1000 value: 99.57300000000001 - type: recall_at_3 value: 55.619 - type: recall_at_5 value: 67.71000000000001 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 44.30960650674069 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 38.427074197498996 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 60.28270056031872 - type: mrr value: 74.38332673789738 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 84.05942144105269 - type: cos_sim_spearman value: 82.51212105850809 - type: euclidean_pearson value: 81.95639829909122 - type: euclidean_spearman value: 82.3717564144213 - type: manhattan_pearson value: 81.79273425468256 - type: manhattan_spearman value: 82.20066817871039 - task: type: BitextMining dataset: name: MTEB BUCC (de-en) type: mteb/bucc-bitext-mining config: de-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 99.46764091858039 - type: f1 value: 99.37717466945023 - type: precision value: 99.33194154488518 - type: recall value: 99.46764091858039 - task: type: BitextMining dataset: name: MTEB BUCC (fr-en) type: mteb/bucc-bitext-mining config: fr-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 98.29407880255337 - type: f1 value: 98.11248073959938 - type: precision value: 98.02443319392472 - type: recall value: 98.29407880255337 - task: type: BitextMining dataset: name: MTEB BUCC (ru-en) type: mteb/bucc-bitext-mining config: ru-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 97.79009352268791 - type: f1 value: 97.5176076665512 - type: precision value: 97.38136473848286 - type: recall value: 97.79009352268791 - task: type: BitextMining dataset: name: MTEB BUCC (zh-en) type: mteb/bucc-bitext-mining config: zh-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 99.26276987888363 - type: f1 value: 99.20133403545726 - type: precision value: 99.17500438827453 - type: recall value: 99.26276987888363 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 84.72727272727273 - type: f1 value: 84.67672206031433 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 35.34220182511161 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 33.4987096128766 - task: type: Retrieval dataset: name: MTEB CQADupstackRetrieval type: BeIR/cqadupstack config: default split: test revision: None metrics: - type: map_at_1 value: 25.558249999999997 - type: map_at_10 value: 34.44425000000001 - type: map_at_100 value: 35.59833333333333 - type: map_at_1000 value: 35.706916666666665 - type: map_at_3 value: 31.691749999999995 - type: map_at_5 value: 33.252916666666664 - type: mrr_at_1 value: 30.252666666666666 - type: mrr_at_10 value: 38.60675 - type: mrr_at_100 value: 39.42666666666666 - type: mrr_at_1000 value: 39.48408333333334 - type: mrr_at_3 value: 36.17441666666665 - type: mrr_at_5 value: 37.56275 - type: ndcg_at_1 value: 30.252666666666666 - type: ndcg_at_10 value: 39.683 - type: ndcg_at_100 value: 44.68541666666667 - type: ndcg_at_1000 value: 46.94316666666668 - type: ndcg_at_3 value: 34.961749999999995 - type: ndcg_at_5 value: 37.215666666666664 - type: precision_at_1 value: 30.252666666666666 - type: precision_at_10 value: 6.904166666666667 - type: precision_at_100 value: 1.0989999999999995 - type: precision_at_1000 value: 0.14733333333333334 - type: precision_at_3 value: 16.037666666666667 - type: precision_at_5 value: 11.413583333333333 - type: recall_at_1 value: 25.558249999999997 - type: recall_at_10 value: 51.13341666666666 - type: recall_at_100 value: 73.08366666666667 - type: recall_at_1000 value: 88.79483333333334 - type: recall_at_3 value: 37.989083333333326 - type: recall_at_5 value: 43.787833333333325 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: climate-fever config: default split: test revision: None metrics: - type: map_at_1 value: 10.338 - type: map_at_10 value: 18.360000000000003 - type: map_at_100 value: 19.942 - type: map_at_1000 value: 20.134 - type: map_at_3 value: 15.174000000000001 - type: map_at_5 value: 16.830000000000002 - type: mrr_at_1 value: 23.257 - type: mrr_at_10 value: 33.768 - type: mrr_at_100 value: 34.707 - type: mrr_at_1000 value: 34.766000000000005 - type: mrr_at_3 value: 30.977 - type: mrr_at_5 value: 32.528 - type: ndcg_at_1 value: 23.257 - type: ndcg_at_10 value: 25.733 - type: ndcg_at_100 value: 32.288 - type: ndcg_at_1000 value: 35.992000000000004 - type: ndcg_at_3 value: 20.866 - type: ndcg_at_5 value: 22.612 - type: precision_at_1 value: 23.257 - type: precision_at_10 value: 8.124 - type: precision_at_100 value: 1.518 - type: precision_at_1000 value: 0.219 - type: precision_at_3 value: 15.679000000000002 - type: precision_at_5 value: 12.117 - type: recall_at_1 value: 10.338 - type: recall_at_10 value: 31.154 - type: recall_at_100 value: 54.161 - type: recall_at_1000 value: 75.21900000000001 - type: recall_at_3 value: 19.427 - type: recall_at_5 value: 24.214 - task: type: Retrieval dataset: name: MTEB DBPedia type: dbpedia-entity config: default split: test revision: None metrics: - type: map_at_1 value: 8.498 - type: map_at_10 value: 19.103 - type: map_at_100 value: 27.375 - type: map_at_1000 value: 28.981 - type: map_at_3 value: 13.764999999999999 - type: map_at_5 value: 15.950000000000001 - type: mrr_at_1 value: 65.5 - type: mrr_at_10 value: 74.53800000000001 - type: mrr_at_100 value: 74.71799999999999 - type: mrr_at_1000 value: 74.725 - type: mrr_at_3 value: 72.792 - type: mrr_at_5 value: 73.554 - type: ndcg_at_1 value: 53.37499999999999 - type: ndcg_at_10 value: 41.286 - type: ndcg_at_100 value: 45.972 - type: ndcg_at_1000 value: 53.123 - type: ndcg_at_3 value: 46.172999999999995 - type: ndcg_at_5 value: 43.033 - type: precision_at_1 value: 65.5 - type: precision_at_10 value: 32.725 - type: precision_at_100 value: 10.683 - type: precision_at_1000 value: 1.978 - type: precision_at_3 value: 50 - type: precision_at_5 value: 41.349999999999994 - type: recall_at_1 value: 8.498 - type: recall_at_10 value: 25.070999999999998 - type: recall_at_100 value: 52.383 - type: recall_at_1000 value: 74.91499999999999 - type: recall_at_3 value: 15.207999999999998 - type: recall_at_5 value: 18.563 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 46.5 - type: f1 value: 41.93833713984145 - task: type: Retrieval dataset: name: MTEB FEVER type: fever config: default split: test revision: None metrics: - type: map_at_1 value: 67.914 - type: map_at_10 value: 78.10000000000001 - type: map_at_100 value: 78.333 - type: map_at_1000 value: 78.346 - type: map_at_3 value: 76.626 - type: map_at_5 value: 77.627 - type: mrr_at_1 value: 72.74199999999999 - type: mrr_at_10 value: 82.414 - type: mrr_at_100 value: 82.511 - type: mrr_at_1000 value: 82.513 - type: mrr_at_3 value: 81.231 - type: mrr_at_5 value: 82.065 - type: ndcg_at_1 value: 72.74199999999999 - type: ndcg_at_10 value: 82.806 - type: ndcg_at_100 value: 83.677 - type: ndcg_at_1000 value: 83.917 - type: ndcg_at_3 value: 80.305 - type: ndcg_at_5 value: 81.843 - type: precision_at_1 value: 72.74199999999999 - type: precision_at_10 value: 10.24 - type: precision_at_100 value: 1.089 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 31.268 - type: precision_at_5 value: 19.706000000000003 - type: recall_at_1 value: 67.914 - type: recall_at_10 value: 92.889 - type: recall_at_100 value: 96.42699999999999 - type: recall_at_1000 value: 97.92 - type: recall_at_3 value: 86.21 - type: recall_at_5 value: 90.036 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: fiqa config: default split: test revision: None metrics: - type: map_at_1 value: 22.166 - type: map_at_10 value: 35.57 - type: map_at_100 value: 37.405 - type: map_at_1000 value: 37.564 - type: map_at_3 value: 30.379 - type: map_at_5 value: 33.324 - type: mrr_at_1 value: 43.519000000000005 - type: mrr_at_10 value: 51.556000000000004 - type: mrr_at_100 value: 52.344 - type: mrr_at_1000 value: 52.373999999999995 - type: mrr_at_3 value: 48.868 - type: mrr_at_5 value: 50.319 - type: ndcg_at_1 value: 43.519000000000005 - type: ndcg_at_10 value: 43.803 - type: ndcg_at_100 value: 50.468999999999994 - type: ndcg_at_1000 value: 53.111 - type: ndcg_at_3 value: 38.893 - type: ndcg_at_5 value: 40.653 - type: precision_at_1 value: 43.519000000000005 - type: precision_at_10 value: 12.253 - type: precision_at_100 value: 1.931 - type: precision_at_1000 value: 0.242 - type: precision_at_3 value: 25.617 - type: precision_at_5 value: 19.383 - type: recall_at_1 value: 22.166 - type: recall_at_10 value: 51.6 - type: recall_at_100 value: 76.574 - type: recall_at_1000 value: 92.192 - type: recall_at_3 value: 34.477999999999994 - type: recall_at_5 value: 41.835 - task: type: Retrieval dataset: name: MTEB HotpotQA type: hotpotqa config: default split: test revision: None metrics: - type: map_at_1 value: 39.041 - type: map_at_10 value: 62.961999999999996 - type: map_at_100 value: 63.79899999999999 - type: map_at_1000 value: 63.854 - type: map_at_3 value: 59.399 - type: map_at_5 value: 61.669 - type: mrr_at_1 value: 78.082 - type: mrr_at_10 value: 84.321 - type: mrr_at_100 value: 84.49600000000001 - type: mrr_at_1000 value: 84.502 - type: mrr_at_3 value: 83.421 - type: mrr_at_5 value: 83.977 - type: ndcg_at_1 value: 78.082 - type: ndcg_at_10 value: 71.229 - type: ndcg_at_100 value: 74.10900000000001 - type: ndcg_at_1000 value: 75.169 - type: ndcg_at_3 value: 66.28699999999999 - type: ndcg_at_5 value: 69.084 - type: precision_at_1 value: 78.082 - type: precision_at_10 value: 14.993 - type: precision_at_100 value: 1.7239999999999998 - type: precision_at_1000 value: 0.186 - type: precision_at_3 value: 42.737 - type: precision_at_5 value: 27.843 - type: recall_at_1 value: 39.041 - type: recall_at_10 value: 74.96300000000001 - type: recall_at_100 value: 86.199 - type: recall_at_1000 value: 93.228 - type: recall_at_3 value: 64.105 - type: recall_at_5 value: 69.608 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 90.23160000000001 - type: ap value: 85.5674856808308 - type: f1 value: 90.18033354786317 - task: type: Retrieval dataset: name: MTEB MSMARCO type: msmarco config: default split: dev revision: None metrics: - type: map_at_1 value: 24.091 - type: map_at_10 value: 36.753 - type: map_at_100 value: 37.913000000000004 - type: map_at_1000 value: 37.958999999999996 - type: map_at_3 value: 32.818999999999996 - type: map_at_5 value: 35.171 - type: mrr_at_1 value: 24.742 - type: mrr_at_10 value: 37.285000000000004 - type: mrr_at_100 value: 38.391999999999996 - type: mrr_at_1000 value: 38.431 - type: mrr_at_3 value: 33.440999999999995 - type: mrr_at_5 value: 35.75 - type: ndcg_at_1 value: 24.742 - type: ndcg_at_10 value: 43.698 - type: ndcg_at_100 value: 49.145 - type: ndcg_at_1000 value: 50.23800000000001 - type: ndcg_at_3 value: 35.769 - type: ndcg_at_5 value: 39.961999999999996 - type: precision_at_1 value: 24.742 - type: precision_at_10 value: 6.7989999999999995 - type: precision_at_100 value: 0.95 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 15.096000000000002 - type: precision_at_5 value: 11.183 - type: recall_at_1 value: 24.091 - type: recall_at_10 value: 65.068 - type: recall_at_100 value: 89.899 - type: recall_at_1000 value: 98.16 - type: recall_at_3 value: 43.68 - type: recall_at_5 value: 53.754999999999995 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 93.66621067031465 - type: f1 value: 93.49622853272142 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (de) type: mteb/mtop_domain config: de split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 91.94702733164272 - type: f1 value: 91.17043441745282 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (es) type: mteb/mtop_domain config: es split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 92.20146764509674 - type: f1 value: 91.98359080555608 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (fr) type: mteb/mtop_domain config: fr split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 88.99780770435328 - type: f1 value: 89.19746342724068 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (hi) type: mteb/mtop_domain config: hi split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 89.78486912871998 - type: f1 value: 89.24578823628642 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (th) type: mteb/mtop_domain config: th split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 88.74502712477394 - type: f1 value: 89.00297573881542 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 77.9046967624259 - type: f1 value: 59.36787125785957 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (de) type: mteb/mtop_intent config: de split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 74.5280360664976 - type: f1 value: 57.17723440888718 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (es) type: mteb/mtop_intent config: es split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 75.44029352901934 - type: f1 value: 54.052855531072964 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (fr) type: mteb/mtop_intent config: fr split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 70.5606013153774 - type: f1 value: 52.62215934386531 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (hi) type: mteb/mtop_intent config: hi split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 73.11581211903908 - type: f1 value: 52.341291845645465 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (th) type: mteb/mtop_intent config: th split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 74.28933092224233 - type: f1 value: 57.07918745504911 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (af) type: mteb/amazon_massive_intent config: af split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 62.38063214525892 - type: f1 value: 59.46463723443009 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (am) type: mteb/amazon_massive_intent config: am split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 56.06926698049766 - type: f1 value: 52.49084283283562 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ar) type: mteb/amazon_massive_intent config: ar split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 60.74983187626093 - type: f1 value: 56.960640620165904 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (az) type: mteb/amazon_massive_intent config: az split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 64.86550100874243 - type: f1 value: 62.47370548140688 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (bn) type: mteb/amazon_massive_intent config: bn split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 63.971082716879636 - type: f1 value: 61.03812421957381 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (cy) type: mteb/amazon_massive_intent config: cy split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 54.98318762609282 - type: f1 value: 51.51207916008392 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (da) type: mteb/amazon_massive_intent config: da split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.45527908540686 - type: f1 value: 66.16631905400318 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (de) type: mteb/amazon_massive_intent config: de split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.32750504371216 - type: f1 value: 66.16755288646591 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (el) type: mteb/amazon_massive_intent config: el split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.09213180901143 - type: f1 value: 66.95654394661507 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 73.75588433086752 - type: f1 value: 71.79973779656923 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (es) type: mteb/amazon_massive_intent config: es split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.49428379287154 - type: f1 value: 68.37494379215734 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (fa) type: mteb/amazon_massive_intent config: fa split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.90921318090115 - type: f1 value: 66.79517376481645 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (fi) type: mteb/amazon_massive_intent config: fi split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.12104909213181 - type: f1 value: 67.29448842879584 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (fr) type: mteb/amazon_massive_intent config: fr split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.34095494283793 - type: f1 value: 67.01134288992947 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (he) type: mteb/amazon_massive_intent config: he split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 67.61264290517822 - type: f1 value: 64.68730512660757 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (hi) type: mteb/amazon_massive_intent config: hi split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 67.79757901815738 - type: f1 value: 65.24938539425598 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (hu) type: mteb/amazon_massive_intent config: hu split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.68728984532616 - type: f1 value: 67.0487169762553 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (hy) type: mteb/amazon_massive_intent config: hy split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 62.07464694014795 - type: f1 value: 59.183532276789286 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (id) type: mteb/amazon_massive_intent config: id split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.04707464694015 - type: f1 value: 67.66829629003848 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (is) type: mteb/amazon_massive_intent config: is split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 62.42434431741762 - type: f1 value: 59.01617226544757 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (it) type: mteb/amazon_massive_intent config: it split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.53127101546738 - type: f1 value: 68.10033760906255 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ja) type: mteb/amazon_massive_intent config: ja split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 72.50504371217215 - type: f1 value: 69.74931103158923 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (jv) type: mteb/amazon_massive_intent config: jv split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 57.91190316072628 - type: f1 value: 54.05551136648796 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ka) type: mteb/amazon_massive_intent config: ka split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 51.78211163416275 - type: f1 value: 49.874888544058535 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (km) type: mteb/amazon_massive_intent config: km split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 47.017484868863484 - type: f1 value: 44.53364263352014 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (kn) type: mteb/amazon_massive_intent config: kn split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 62.16207128446537 - type: f1 value: 59.01185692320829 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ko) type: mteb/amazon_massive_intent config: ko split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.42501681237391 - type: f1 value: 67.13169450166086 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (lv) type: mteb/amazon_massive_intent config: lv split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 67.0780094149294 - type: f1 value: 64.41720167850707 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ml) type: mteb/amazon_massive_intent config: ml split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 65.57162071284466 - type: f1 value: 62.414138683804424 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (mn) type: mteb/amazon_massive_intent config: mn split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 61.71149966375252 - type: f1 value: 58.594805125087234 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ms) type: mteb/amazon_massive_intent config: ms split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 66.03900470746471 - type: f1 value: 63.87937257883887 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (my) type: mteb/amazon_massive_intent config: my split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 60.8776059179556 - type: f1 value: 57.48587618059131 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (nb) type: mteb/amazon_massive_intent config: nb split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.87895090786819 - type: f1 value: 66.8141299430347 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (nl) type: mteb/amazon_massive_intent config: nl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.45057162071285 - type: f1 value: 67.46444039673516 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (pl) type: mteb/amazon_massive_intent config: pl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.546738399462 - type: f1 value: 68.63640876702655 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (pt) type: mteb/amazon_massive_intent config: pt split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.72965702757229 - type: f1 value: 68.54119560379115 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ro) type: mteb/amazon_massive_intent config: ro split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 68.35574983187625 - type: f1 value: 65.88844917691927 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ru) type: mteb/amazon_massive_intent config: ru split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.70477471418964 - type: f1 value: 69.19665697061978 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (sl) type: mteb/amazon_massive_intent config: sl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 67.0880968392737 - type: f1 value: 64.76962317666086 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (sq) type: mteb/amazon_massive_intent config: sq split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 65.18493611297916 - type: f1 value: 62.49984559035371 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (sv) type: mteb/amazon_massive_intent config: sv split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.75857431069265 - type: f1 value: 69.20053687623418 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (sw) type: mteb/amazon_massive_intent config: sw split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 58.500336247478145 - type: f1 value: 55.2972398687929 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ta) type: mteb/amazon_massive_intent config: ta split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 62.68997982515132 - type: f1 value: 59.36848202755348 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (te) type: mteb/amazon_massive_intent config: te split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 63.01950235373235 - type: f1 value: 60.09351954625423 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (th) type: mteb/amazon_massive_intent config: th split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 68.29186281102892 - type: f1 value: 67.57860496703447 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (tl) type: mteb/amazon_massive_intent config: tl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 64.77471418964357 - type: f1 value: 61.913983147713836 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (tr) type: mteb/amazon_massive_intent config: tr split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.87222595830532 - type: f1 value: 66.03679033708141 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ur) type: mteb/amazon_massive_intent config: ur split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 64.04505716207127 - type: f1 value: 61.28569169817908 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (vi) type: mteb/amazon_massive_intent config: vi split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.38466711499663 - type: f1 value: 67.20532357036844 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (zh-CN) type: mteb/amazon_massive_intent config: zh-CN split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.12306657700067 - type: f1 value: 68.91251226588182 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (zh-TW) type: mteb/amazon_massive_intent config: zh-TW split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 66.20040349697378 - type: f1 value: 66.02657347714175 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (af) type: mteb/amazon_massive_scenario config: af split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 68.73907195696032 - type: f1 value: 66.98484521791418 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (am) type: mteb/amazon_massive_scenario config: am split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 60.58843308675185 - type: f1 value: 58.95591723092005 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ar) type: mteb/amazon_massive_scenario config: ar split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 66.22730329522528 - type: f1 value: 66.0894499712115 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (az) type: mteb/amazon_massive_scenario config: az split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 66.48285137861465 - type: f1 value: 65.21963176785157 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (bn) type: mteb/amazon_massive_scenario config: bn split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 67.74714189643578 - type: f1 value: 66.8212192745412 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (cy) type: mteb/amazon_massive_scenario config: cy split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 59.09213180901143 - type: f1 value: 56.70735546356339 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (da) type: mteb/amazon_massive_scenario config: da split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 75.05716207128448 - type: f1 value: 74.8413712365364 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (de) type: mteb/amazon_massive_scenario config: de split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.69737726967047 - type: f1 value: 74.7664341963 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (el) type: mteb/amazon_massive_scenario config: el split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.90383322125084 - type: f1 value: 73.59201554448323 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 77.51176866173503 - type: f1 value: 77.46104434577758 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (es) type: mteb/amazon_massive_scenario config: es split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.31069266980496 - type: f1 value: 74.61048660675635 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (fa) type: mteb/amazon_massive_scenario config: fa split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 72.95225285810356 - type: f1 value: 72.33160006574627 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (fi) type: mteb/amazon_massive_scenario config: fi split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.12373907195696 - type: f1 value: 73.20921012557481 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (fr) type: mteb/amazon_massive_scenario config: fr split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.86684599865501 - type: f1 value: 73.82348774610831 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (he) type: mteb/amazon_massive_scenario config: he split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.40215198386012 - type: f1 value: 71.11945183971858 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (hi) type: mteb/amazon_massive_scenario config: hi split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 72.12844653665098 - type: f1 value: 71.34450495911766 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (hu) type: mteb/amazon_massive_scenario config: hu split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.52252858103566 - type: f1 value: 73.98878711342999 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (hy) type: mteb/amazon_massive_scenario config: hy split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 64.93611297915265 - type: f1 value: 63.723200467653385 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (id) type: mteb/amazon_massive_scenario config: id split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.11903160726295 - type: f1 value: 73.82138439467096 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (is) type: mteb/amazon_massive_scenario config: is split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 67.15198386012105 - type: f1 value: 66.02172193802167 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (it) type: mteb/amazon_massive_scenario config: it split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.32414256893072 - type: f1 value: 74.30943421170574 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ja) type: mteb/amazon_massive_scenario config: ja split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 77.46805648957633 - type: f1 value: 77.62808409298209 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (jv) type: mteb/amazon_massive_scenario config: jv split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 63.318762609280434 - type: f1 value: 62.094284066075076 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ka) type: mteb/amazon_massive_scenario config: ka split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 58.34902488231338 - type: f1 value: 57.12893860987984 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (km) type: mteb/amazon_massive_scenario config: km split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 50.88433086751849 - type: f1 value: 48.2272350802058 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (kn) type: mteb/amazon_massive_scenario config: kn split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 66.4425016812374 - type: f1 value: 64.61463095996173 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ko) type: mteb/amazon_massive_scenario config: ko split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 75.04707464694015 - type: f1 value: 75.05099199098998 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (lv) type: mteb/amazon_massive_scenario config: lv split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 70.50437121721586 - type: f1 value: 69.83397721096314 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ml) type: mteb/amazon_massive_scenario config: ml split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 69.94283792871553 - type: f1 value: 68.8704663703913 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (mn) type: mteb/amazon_massive_scenario config: mn split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 64.79488903833222 - type: f1 value: 63.615424063345436 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ms) type: mteb/amazon_massive_scenario config: ms split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 69.88231338264963 - type: f1 value: 68.57892302593237 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (my) type: mteb/amazon_massive_scenario config: my split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 63.248150638870214 - type: f1 value: 61.06680605338809 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (nb) type: mteb/amazon_massive_scenario config: nb split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.84196368527236 - type: f1 value: 74.52566464968763 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (nl) type: mteb/amazon_massive_scenario config: nl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.8285137861466 - type: f1 value: 74.8853197608802 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (pl) type: mteb/amazon_massive_scenario config: pl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.13248150638869 - type: f1 value: 74.3982040999179 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (pt) type: mteb/amazon_massive_scenario config: pt split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.49024882313383 - type: f1 value: 73.82153848368573 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ro) type: mteb/amazon_massive_scenario config: ro split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.72158708809684 - type: f1 value: 71.85049433180541 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ru) type: mteb/amazon_massive_scenario config: ru split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 75.137861466039 - type: f1 value: 75.37628348188467 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sl) type: mteb/amazon_massive_scenario config: sl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.86953597848016 - type: f1 value: 71.87537624521661 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sq) type: mteb/amazon_massive_scenario config: sq split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 70.27572293207801 - type: f1 value: 68.80017302344231 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sv) type: mteb/amazon_massive_scenario config: sv split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 76.09952925353059 - type: f1 value: 76.07992707688408 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sw) type: mteb/amazon_massive_scenario config: sw split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 63.140551445864155 - type: f1 value: 61.73855010331415 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ta) type: mteb/amazon_massive_scenario config: ta split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 66.27774041694687 - type: f1 value: 64.83664868894539 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (te) type: mteb/amazon_massive_scenario config: te split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 66.69468728984533 - type: f1 value: 64.76239666920868 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (th) type: mteb/amazon_massive_scenario config: th split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.44653665097512 - type: f1 value: 73.14646052013873 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (tl) type: mteb/amazon_massive_scenario config: tl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 67.71351714862139 - type: f1 value: 66.67212180163382 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (tr) type: mteb/amazon_massive_scenario config: tr split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.9946200403497 - type: f1 value: 73.87348793725525 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ur) type: mteb/amazon_massive_scenario config: ur split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 68.15400134498992 - type: f1 value: 67.09433241421094 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (vi) type: mteb/amazon_massive_scenario config: vi split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.11365164761264 - type: f1 value: 73.59502539433753 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (zh-CN) type: mteb/amazon_massive_scenario config: zh-CN split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 76.82582380632145 - type: f1 value: 76.89992945316313 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (zh-TW) type: mteb/amazon_massive_scenario config: zh-TW split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.81237390719569 - type: f1 value: 72.36499770986265 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 31.480506569594695 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 29.71252128004552 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 31.421396787056548 - type: mrr value: 32.48155274872267 - task: type: Retrieval dataset: name: MTEB NFCorpus type: nfcorpus config: default split: test revision: None metrics: - type: map_at_1 value: 5.595 - type: map_at_10 value: 12.642000000000001 - type: map_at_100 value: 15.726 - type: map_at_1000 value: 17.061999999999998 - type: map_at_3 value: 9.125 - type: map_at_5 value: 10.866000000000001 - type: mrr_at_1 value: 43.344 - type: mrr_at_10 value: 52.227999999999994 - type: mrr_at_100 value: 52.898999999999994 - type: mrr_at_1000 value: 52.944 - type: mrr_at_3 value: 49.845 - type: mrr_at_5 value: 51.115 - type: ndcg_at_1 value: 41.949999999999996 - type: ndcg_at_10 value: 33.995 - type: ndcg_at_100 value: 30.869999999999997 - type: ndcg_at_1000 value: 39.487 - type: ndcg_at_3 value: 38.903999999999996 - type: ndcg_at_5 value: 37.236999999999995 - type: precision_at_1 value: 43.344 - type: precision_at_10 value: 25.480000000000004 - type: precision_at_100 value: 7.672 - type: precision_at_1000 value: 2.028 - type: precision_at_3 value: 36.636 - type: precision_at_5 value: 32.632 - type: recall_at_1 value: 5.595 - type: recall_at_10 value: 16.466 - type: recall_at_100 value: 31.226 - type: recall_at_1000 value: 62.778999999999996 - type: recall_at_3 value: 9.931 - type: recall_at_5 value: 12.884 - task: type: Retrieval dataset: name: MTEB NQ type: nq config: default split: test revision: None metrics: - type: map_at_1 value: 40.414 - type: map_at_10 value: 56.754000000000005 - type: map_at_100 value: 57.457 - type: map_at_1000 value: 57.477999999999994 - type: map_at_3 value: 52.873999999999995 - type: map_at_5 value: 55.175 - type: mrr_at_1 value: 45.278 - type: mrr_at_10 value: 59.192 - type: mrr_at_100 value: 59.650000000000006 - type: mrr_at_1000 value: 59.665 - type: mrr_at_3 value: 56.141 - type: mrr_at_5 value: 57.998000000000005 - type: ndcg_at_1 value: 45.278 - type: ndcg_at_10 value: 64.056 - type: ndcg_at_100 value: 66.89 - type: ndcg_at_1000 value: 67.364 - type: ndcg_at_3 value: 56.97 - type: ndcg_at_5 value: 60.719 - type: precision_at_1 value: 45.278 - type: precision_at_10 value: 9.994 - type: precision_at_100 value: 1.165 - type: precision_at_1000 value: 0.121 - type: precision_at_3 value: 25.512 - type: precision_at_5 value: 17.509 - type: recall_at_1 value: 40.414 - type: recall_at_10 value: 83.596 - type: recall_at_100 value: 95.72 - type: recall_at_1000 value: 99.24 - type: recall_at_3 value: 65.472 - type: recall_at_5 value: 74.039 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: quora config: default split: test revision: None metrics: - type: map_at_1 value: 70.352 - type: map_at_10 value: 84.369 - type: map_at_100 value: 85.02499999999999 - type: map_at_1000 value: 85.04 - type: map_at_3 value: 81.42399999999999 - type: map_at_5 value: 83.279 - type: mrr_at_1 value: 81.05 - type: mrr_at_10 value: 87.401 - type: mrr_at_100 value: 87.504 - type: mrr_at_1000 value: 87.505 - type: mrr_at_3 value: 86.443 - type: mrr_at_5 value: 87.10799999999999 - type: ndcg_at_1 value: 81.04 - type: ndcg_at_10 value: 88.181 - type: ndcg_at_100 value: 89.411 - type: ndcg_at_1000 value: 89.507 - type: ndcg_at_3 value: 85.28099999999999 - type: ndcg_at_5 value: 86.888 - type: precision_at_1 value: 81.04 - type: precision_at_10 value: 13.406 - type: precision_at_100 value: 1.5350000000000001 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 37.31 - type: precision_at_5 value: 24.54 - type: recall_at_1 value: 70.352 - type: recall_at_10 value: 95.358 - type: recall_at_100 value: 99.541 - type: recall_at_1000 value: 99.984 - type: recall_at_3 value: 87.111 - type: recall_at_5 value: 91.643 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 46.54068723291946 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 63.216287629895994 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: scidocs config: default split: test revision: None metrics: - type: map_at_1 value: 4.023000000000001 - type: map_at_10 value: 10.071 - type: map_at_100 value: 11.892 - type: map_at_1000 value: 12.196 - type: map_at_3 value: 7.234 - type: map_at_5 value: 8.613999999999999 - type: mrr_at_1 value: 19.900000000000002 - type: mrr_at_10 value: 30.516 - type: mrr_at_100 value: 31.656000000000002 - type: mrr_at_1000 value: 31.723000000000003 - type: mrr_at_3 value: 27.400000000000002 - type: mrr_at_5 value: 29.270000000000003 - type: ndcg_at_1 value: 19.900000000000002 - type: ndcg_at_10 value: 17.474 - type: ndcg_at_100 value: 25.020999999999997 - type: ndcg_at_1000 value: 30.728 - type: ndcg_at_3 value: 16.588 - type: ndcg_at_5 value: 14.498 - type: precision_at_1 value: 19.900000000000002 - type: precision_at_10 value: 9.139999999999999 - type: precision_at_100 value: 2.011 - type: precision_at_1000 value: 0.33899999999999997 - type: precision_at_3 value: 15.667 - type: precision_at_5 value: 12.839999999999998 - type: recall_at_1 value: 4.023000000000001 - type: recall_at_10 value: 18.497 - type: recall_at_100 value: 40.8 - type: recall_at_1000 value: 68.812 - type: recall_at_3 value: 9.508 - type: recall_at_5 value: 12.983 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 83.967008785134 - type: cos_sim_spearman value: 80.23142141101837 - type: euclidean_pearson value: 81.20166064704539 - type: euclidean_spearman value: 80.18961335654585 - type: manhattan_pearson value: 81.13925443187625 - type: manhattan_spearman value: 80.07948723044424 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 86.94262461316023 - type: cos_sim_spearman value: 80.01596278563865 - type: euclidean_pearson value: 83.80799622922581 - type: euclidean_spearman value: 79.94984954947103 - type: manhattan_pearson value: 83.68473841756281 - type: manhattan_spearman value: 79.84990707951822 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 80.57346443146068 - type: cos_sim_spearman value: 81.54689837570866 - type: euclidean_pearson value: 81.10909881516007 - type: euclidean_spearman value: 81.56746243261762 - type: manhattan_pearson value: 80.87076036186582 - type: manhattan_spearman value: 81.33074987964402 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 79.54733787179849 - type: cos_sim_spearman value: 77.72202105610411 - type: euclidean_pearson value: 78.9043595478849 - type: euclidean_spearman value: 77.93422804309435 - type: manhattan_pearson value: 78.58115121621368 - type: manhattan_spearman value: 77.62508135122033 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 88.59880017237558 - type: cos_sim_spearman value: 89.31088630824758 - type: euclidean_pearson value: 88.47069261564656 - type: euclidean_spearman value: 89.33581971465233 - type: manhattan_pearson value: 88.40774264100956 - type: manhattan_spearman value: 89.28657485627835 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 84.08055117917084 - type: cos_sim_spearman value: 85.78491813080304 - type: euclidean_pearson value: 84.99329155500392 - type: euclidean_spearman value: 85.76728064677287 - type: manhattan_pearson value: 84.87947428989587 - type: manhattan_spearman value: 85.62429454917464 - task: type: STS dataset: name: MTEB STS17 (ko-ko) type: mteb/sts17-crosslingual-sts config: ko-ko split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 82.14190939287384 - type: cos_sim_spearman value: 82.27331573306041 - type: euclidean_pearson value: 81.891896953716 - type: euclidean_spearman value: 82.37695542955998 - type: manhattan_pearson value: 81.73123869460504 - type: manhattan_spearman value: 82.19989168441421 - task: type: STS dataset: name: MTEB STS17 (ar-ar) type: mteb/sts17-crosslingual-sts config: ar-ar split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 76.84695301843362 - type: cos_sim_spearman value: 77.87790986014461 - type: euclidean_pearson value: 76.91981583106315 - type: euclidean_spearman value: 77.88154772749589 - type: manhattan_pearson value: 76.94953277451093 - type: manhattan_spearman value: 77.80499230728604 - task: type: STS dataset: name: MTEB STS17 (en-ar) type: mteb/sts17-crosslingual-sts config: en-ar split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 75.44657840482016 - type: cos_sim_spearman value: 75.05531095119674 - type: euclidean_pearson value: 75.88161755829299 - type: euclidean_spearman value: 74.73176238219332 - type: manhattan_pearson value: 75.63984765635362 - type: manhattan_spearman value: 74.86476440770737 - task: type: STS dataset: name: MTEB STS17 (en-de) type: mteb/sts17-crosslingual-sts config: en-de split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 85.64700140524133 - type: cos_sim_spearman value: 86.16014210425672 - type: euclidean_pearson value: 86.49086860843221 - type: euclidean_spearman value: 86.09729326815614 - type: manhattan_pearson value: 86.43406265125513 - type: manhattan_spearman value: 86.17740150939994 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 87.91170098764921 - type: cos_sim_spearman value: 88.12437004058931 - type: euclidean_pearson value: 88.81828254494437 - type: euclidean_spearman value: 88.14831794572122 - type: manhattan_pearson value: 88.93442183448961 - type: manhattan_spearman value: 88.15254630778304 - task: type: STS dataset: name: MTEB STS17 (en-tr) type: mteb/sts17-crosslingual-sts config: en-tr split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 72.91390577997292 - type: cos_sim_spearman value: 71.22979457536074 - type: euclidean_pearson value: 74.40314008106749 - type: euclidean_spearman value: 72.54972136083246 - type: manhattan_pearson value: 73.85687539530218 - type: manhattan_spearman value: 72.09500771742637 - task: type: STS dataset: name: MTEB STS17 (es-en) type: mteb/sts17-crosslingual-sts config: es-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 80.9301067983089 - type: cos_sim_spearman value: 80.74989828346473 - type: euclidean_pearson value: 81.36781301814257 - type: euclidean_spearman value: 80.9448819964426 - type: manhattan_pearson value: 81.0351322685609 - type: manhattan_spearman value: 80.70192121844177 - task: type: STS dataset: name: MTEB STS17 (es-es) type: mteb/sts17-crosslingual-sts config: es-es split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 87.13820465980005 - type: cos_sim_spearman value: 86.73532498758757 - type: euclidean_pearson value: 87.21329451846637 - type: euclidean_spearman value: 86.57863198601002 - type: manhattan_pearson value: 87.06973713818554 - type: manhattan_spearman value: 86.47534918791499 - task: type: STS dataset: name: MTEB STS17 (fr-en) type: mteb/sts17-crosslingual-sts config: fr-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 85.48720108904415 - type: cos_sim_spearman value: 85.62221757068387 - type: euclidean_pearson value: 86.1010129512749 - type: euclidean_spearman value: 85.86580966509942 - type: manhattan_pearson value: 86.26800938808971 - type: manhattan_spearman value: 85.88902721678429 - task: type: STS dataset: name: MTEB STS17 (it-en) type: mteb/sts17-crosslingual-sts config: it-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 83.98021347333516 - type: cos_sim_spearman value: 84.53806553803501 - type: euclidean_pearson value: 84.61483347248364 - type: euclidean_spearman value: 85.14191408011702 - type: manhattan_pearson value: 84.75297588825967 - type: manhattan_spearman value: 85.33176753669242 - task: type: STS dataset: name: MTEB STS17 (nl-en) type: mteb/sts17-crosslingual-sts config: nl-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 84.51856644893233 - type: cos_sim_spearman value: 85.27510748506413 - type: euclidean_pearson value: 85.09886861540977 - type: euclidean_spearman value: 85.62579245860887 - type: manhattan_pearson value: 84.93017860464607 - type: manhattan_spearman value: 85.5063988898453 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 62.581573200584195 - type: cos_sim_spearman value: 63.05503590247928 - type: euclidean_pearson value: 63.652564812602094 - type: euclidean_spearman value: 62.64811520876156 - type: manhattan_pearson value: 63.506842893061076 - type: manhattan_spearman value: 62.51289573046917 - task: type: STS dataset: name: MTEB STS22 (de) type: mteb/sts22-crosslingual-sts config: de split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 48.2248801729127 - type: cos_sim_spearman value: 56.5936604678561 - type: euclidean_pearson value: 43.98149464089 - type: euclidean_spearman value: 56.108561882423615 - type: manhattan_pearson value: 43.86880305903564 - type: manhattan_spearman value: 56.04671150510166 - task: type: STS dataset: name: MTEB STS22 (es) type: mteb/sts22-crosslingual-sts config: es split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 55.17564527009831 - type: cos_sim_spearman value: 64.57978560979488 - type: euclidean_pearson value: 58.8818330154583 - type: euclidean_spearman value: 64.99214839071281 - type: manhattan_pearson value: 58.72671436121381 - type: manhattan_spearman value: 65.10713416616109 - task: type: STS dataset: name: MTEB STS22 (pl) type: mteb/sts22-crosslingual-sts config: pl split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 26.772131864023297 - type: cos_sim_spearman value: 34.68200792408681 - type: euclidean_pearson value: 16.68082419005441 - type: euclidean_spearman value: 34.83099932652166 - type: manhattan_pearson value: 16.52605949659529 - type: manhattan_spearman value: 34.82075801399475 - task: type: STS dataset: name: MTEB STS22 (tr) type: mteb/sts22-crosslingual-sts config: tr split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 54.42415189043831 - type: cos_sim_spearman value: 63.54594264576758 - type: euclidean_pearson value: 57.36577498297745 - type: euclidean_spearman value: 63.111466379158074 - type: manhattan_pearson value: 57.584543715873885 - type: manhattan_spearman value: 63.22361054139183 - task: type: STS dataset: name: MTEB STS22 (ar) type: mteb/sts22-crosslingual-sts config: ar split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 47.55216762405518 - type: cos_sim_spearman value: 56.98670142896412 - type: euclidean_pearson value: 50.15318757562699 - type: euclidean_spearman value: 56.524941926541906 - type: manhattan_pearson value: 49.955618528674904 - type: manhattan_spearman value: 56.37102209240117 - task: type: STS dataset: name: MTEB STS22 (ru) type: mteb/sts22-crosslingual-sts config: ru split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 49.20540980338571 - type: cos_sim_spearman value: 59.9009453504406 - type: euclidean_pearson value: 49.557749853620535 - type: euclidean_spearman value: 59.76631621172456 - type: manhattan_pearson value: 49.62340591181147 - type: manhattan_spearman value: 59.94224880322436 - task: type: STS dataset: name: MTEB STS22 (zh) type: mteb/sts22-crosslingual-sts config: zh split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 51.508169956576985 - type: cos_sim_spearman value: 66.82461565306046 - type: euclidean_pearson value: 56.2274426480083 - type: euclidean_spearman value: 66.6775323848333 - type: manhattan_pearson value: 55.98277796300661 - type: manhattan_spearman value: 66.63669848497175 - task: type: STS dataset: name: MTEB STS22 (fr) type: mteb/sts22-crosslingual-sts config: fr split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 72.86478788045507 - type: cos_sim_spearman value: 76.7946552053193 - type: euclidean_pearson value: 75.01598530490269 - type: euclidean_spearman value: 76.83618917858281 - type: manhattan_pearson value: 74.68337628304332 - type: manhattan_spearman value: 76.57480204017773 - task: type: STS dataset: name: MTEB STS22 (de-en) type: mteb/sts22-crosslingual-sts config: de-en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 55.922619099401984 - type: cos_sim_spearman value: 56.599362477240774 - type: euclidean_pearson value: 56.68307052369783 - type: euclidean_spearman value: 54.28760436777401 - type: manhattan_pearson value: 56.67763566500681 - type: manhattan_spearman value: 53.94619541711359 - task: type: STS dataset: name: MTEB STS22 (es-en) type: mteb/sts22-crosslingual-sts config: es-en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 66.74357206710913 - type: cos_sim_spearman value: 72.5208244925311 - type: euclidean_pearson value: 67.49254562186032 - type: euclidean_spearman value: 72.02469076238683 - type: manhattan_pearson value: 67.45251772238085 - type: manhattan_spearman value: 72.05538819984538 - task: type: STS dataset: name: MTEB STS22 (it) type: mteb/sts22-crosslingual-sts config: it split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 71.25734330033191 - type: cos_sim_spearman value: 76.98349083946823 - type: euclidean_pearson value: 73.71642838667736 - type: euclidean_spearman value: 77.01715504651384 - type: manhattan_pearson value: 73.61712711868105 - type: manhattan_spearman value: 77.01392571153896 - task: type: STS dataset: name: MTEB STS22 (pl-en) type: mteb/sts22-crosslingual-sts config: pl-en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 63.18215462781212 - type: cos_sim_spearman value: 65.54373266117607 - type: euclidean_pearson value: 64.54126095439005 - type: euclidean_spearman value: 65.30410369102711 - type: manhattan_pearson value: 63.50332221148234 - type: manhattan_spearman value: 64.3455878104313 - task: type: STS dataset: name: MTEB STS22 (zh-en) type: mteb/sts22-crosslingual-sts config: zh-en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 62.30509221440029 - type: cos_sim_spearman value: 65.99582704642478 - type: euclidean_pearson value: 63.43818859884195 - type: euclidean_spearman value: 66.83172582815764 - type: manhattan_pearson value: 63.055779168508764 - type: manhattan_spearman value: 65.49585020501449 - task: type: STS dataset: name: MTEB STS22 (es-it) type: mteb/sts22-crosslingual-sts config: es-it split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 59.587830825340404 - type: cos_sim_spearman value: 68.93467614588089 - type: euclidean_pearson value: 62.3073527367404 - type: euclidean_spearman value: 69.69758171553175 - type: manhattan_pearson value: 61.9074580815789 - type: manhattan_spearman value: 69.57696375597865 - task: type: STS dataset: name: MTEB STS22 (de-fr) type: mteb/sts22-crosslingual-sts config: de-fr split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 57.143220125577066 - type: cos_sim_spearman value: 67.78857859159226 - type: euclidean_pearson value: 55.58225107923733 - type: euclidean_spearman value: 67.80662907184563 - type: manhattan_pearson value: 56.24953502726514 - type: manhattan_spearman value: 67.98262125431616 - task: type: STS dataset: name: MTEB STS22 (de-pl) type: mteb/sts22-crosslingual-sts config: de-pl split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 21.826928900322066 - type: cos_sim_spearman value: 49.578506634400405 - type: euclidean_pearson value: 27.939890138843214 - type: euclidean_spearman value: 52.71950519136242 - type: manhattan_pearson value: 26.39878683847546 - type: manhattan_spearman value: 47.54609580342499 - task: type: STS dataset: name: MTEB STS22 (fr-pl) type: mteb/sts22-crosslingual-sts config: fr-pl split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 57.27603854632001 - type: cos_sim_spearman value: 50.709255283710995 - type: euclidean_pearson value: 59.5419024445929 - type: euclidean_spearman value: 50.709255283710995 - type: manhattan_pearson value: 59.03256832438492 - type: manhattan_spearman value: 61.97797868009122 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 85.00757054859712 - type: cos_sim_spearman value: 87.29283629622222 - type: euclidean_pearson value: 86.54824171775536 - type: euclidean_spearman value: 87.24364730491402 - type: manhattan_pearson value: 86.5062156915074 - type: manhattan_spearman value: 87.15052170378574 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 82.03549357197389 - type: mrr value: 95.05437645143527 - task: type: Retrieval dataset: name: MTEB SciFact type: scifact config: default split: test revision: None metrics: - type: map_at_1 value: 57.260999999999996 - type: map_at_10 value: 66.259 - type: map_at_100 value: 66.884 - type: map_at_1000 value: 66.912 - type: map_at_3 value: 63.685 - type: map_at_5 value: 65.35499999999999 - type: mrr_at_1 value: 60.333000000000006 - type: mrr_at_10 value: 67.5 - type: mrr_at_100 value: 68.013 - type: mrr_at_1000 value: 68.038 - type: mrr_at_3 value: 65.61099999999999 - type: mrr_at_5 value: 66.861 - type: ndcg_at_1 value: 60.333000000000006 - type: ndcg_at_10 value: 70.41 - type: ndcg_at_100 value: 73.10600000000001 - type: ndcg_at_1000 value: 73.846 - type: ndcg_at_3 value: 66.133 - type: ndcg_at_5 value: 68.499 - type: precision_at_1 value: 60.333000000000006 - type: precision_at_10 value: 9.232999999999999 - type: precision_at_100 value: 1.0630000000000002 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 25.667 - type: precision_at_5 value: 17.067 - type: recall_at_1 value: 57.260999999999996 - type: recall_at_10 value: 81.94399999999999 - type: recall_at_100 value: 93.867 - type: recall_at_1000 value: 99.667 - type: recall_at_3 value: 70.339 - type: recall_at_5 value: 76.25 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.74356435643564 - type: cos_sim_ap value: 93.13411948212683 - type: cos_sim_f1 value: 86.80521991300147 - type: cos_sim_precision value: 84.00374181478017 - type: cos_sim_recall value: 89.8 - type: dot_accuracy value: 99.67920792079208 - type: dot_ap value: 89.27277565444479 - type: dot_f1 value: 83.9276990718124 - type: dot_precision value: 82.04393505253104 - type: dot_recall value: 85.9 - type: euclidean_accuracy value: 99.74257425742574 - type: euclidean_ap value: 93.17993008259062 - type: euclidean_f1 value: 86.69396110542476 - type: euclidean_precision value: 88.78406708595388 - type: euclidean_recall value: 84.7 - type: manhattan_accuracy value: 99.74257425742574 - type: manhattan_ap value: 93.14413755550099 - type: manhattan_f1 value: 86.82483594144371 - type: manhattan_precision value: 87.66564729867483 - type: manhattan_recall value: 86 - type: max_accuracy value: 99.74356435643564 - type: max_ap value: 93.17993008259062 - type: max_f1 value: 86.82483594144371 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 57.525863806168566 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 32.68850574423839 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 49.71580650644033 - type: mrr value: 50.50971903913081 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 29.152190498799484 - type: cos_sim_spearman value: 29.686180371952727 - type: dot_pearson value: 27.248664793816342 - type: dot_spearman value: 28.37748983721745 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: trec-covid config: default split: test revision: None metrics: - type: map_at_1 value: 0.20400000000000001 - type: map_at_10 value: 1.6209999999999998 - type: map_at_100 value: 9.690999999999999 - type: map_at_1000 value: 23.733 - type: map_at_3 value: 0.575 - type: map_at_5 value: 0.885 - type: mrr_at_1 value: 78 - type: mrr_at_10 value: 86.56700000000001 - type: mrr_at_100 value: 86.56700000000001 - type: mrr_at_1000 value: 86.56700000000001 - type: mrr_at_3 value: 85.667 - type: mrr_at_5 value: 86.56700000000001 - type: ndcg_at_1 value: 76 - type: ndcg_at_10 value: 71.326 - type: ndcg_at_100 value: 54.208999999999996 - type: ndcg_at_1000 value: 49.252 - type: ndcg_at_3 value: 74.235 - type: ndcg_at_5 value: 73.833 - type: precision_at_1 value: 78 - type: precision_at_10 value: 74.8 - type: precision_at_100 value: 55.50000000000001 - type: precision_at_1000 value: 21.836 - type: precision_at_3 value: 78 - type: precision_at_5 value: 78 - type: recall_at_1 value: 0.20400000000000001 - type: recall_at_10 value: 1.894 - type: recall_at_100 value: 13.245999999999999 - type: recall_at_1000 value: 46.373 - type: recall_at_3 value: 0.613 - type: recall_at_5 value: 0.991 - task: type: BitextMining dataset: name: MTEB Tatoeba (sqi-eng) type: mteb/tatoeba-bitext-mining config: sqi-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.89999999999999 - type: f1 value: 94.69999999999999 - type: precision value: 94.11666666666667 - type: recall value: 95.89999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (fry-eng) type: mteb/tatoeba-bitext-mining config: fry-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 68.20809248554913 - type: f1 value: 63.431048720066066 - type: precision value: 61.69143958161298 - type: recall value: 68.20809248554913 - task: type: BitextMining dataset: name: MTEB Tatoeba (kur-eng) type: mteb/tatoeba-bitext-mining config: kur-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 71.21951219512195 - type: f1 value: 66.82926829268293 - type: precision value: 65.1260162601626 - type: recall value: 71.21951219512195 - task: type: BitextMining dataset: name: MTEB Tatoeba (tur-eng) type: mteb/tatoeba-bitext-mining config: tur-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.2 - type: f1 value: 96.26666666666667 - type: precision value: 95.8 - type: recall value: 97.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (deu-eng) type: mteb/tatoeba-bitext-mining config: deu-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 99.3 - type: f1 value: 99.06666666666666 - type: precision value: 98.95 - type: recall value: 99.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (nld-eng) type: mteb/tatoeba-bitext-mining config: nld-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.39999999999999 - type: f1 value: 96.63333333333333 - type: precision value: 96.26666666666668 - type: recall value: 97.39999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (ron-eng) type: mteb/tatoeba-bitext-mining config: ron-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96 - type: f1 value: 94.86666666666666 - type: precision value: 94.31666666666668 - type: recall value: 96 - task: type: BitextMining dataset: name: MTEB Tatoeba (ang-eng) type: mteb/tatoeba-bitext-mining config: ang-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 47.01492537313433 - type: f1 value: 40.178867566927266 - type: precision value: 38.179295828549556 - type: recall value: 47.01492537313433 - task: type: BitextMining dataset: name: MTEB Tatoeba (ido-eng) type: mteb/tatoeba-bitext-mining config: ido-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 86.5 - type: f1 value: 83.62537480063796 - type: precision value: 82.44555555555554 - type: recall value: 86.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (jav-eng) type: mteb/tatoeba-bitext-mining config: jav-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 80.48780487804879 - type: f1 value: 75.45644599303138 - type: precision value: 73.37398373983739 - type: recall value: 80.48780487804879 - task: type: BitextMining dataset: name: MTEB Tatoeba (isl-eng) type: mteb/tatoeba-bitext-mining config: isl-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.7 - type: f1 value: 91.95666666666666 - type: precision value: 91.125 - type: recall value: 93.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (slv-eng) type: mteb/tatoeba-bitext-mining config: slv-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 91.73754556500607 - type: f1 value: 89.65168084244632 - type: precision value: 88.73025516403402 - type: recall value: 91.73754556500607 - task: type: BitextMining dataset: name: MTEB Tatoeba (cym-eng) type: mteb/tatoeba-bitext-mining config: cym-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 81.04347826086956 - type: f1 value: 76.2128364389234 - type: precision value: 74.2 - type: recall value: 81.04347826086956 - task: type: BitextMining dataset: name: MTEB Tatoeba (kaz-eng) type: mteb/tatoeba-bitext-mining config: kaz-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 83.65217391304348 - type: f1 value: 79.4376811594203 - type: precision value: 77.65797101449274 - type: recall value: 83.65217391304348 - task: type: BitextMining dataset: name: MTEB Tatoeba (est-eng) type: mteb/tatoeba-bitext-mining config: est-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 87.5 - type: f1 value: 85.02690476190476 - type: precision value: 83.96261904761904 - type: recall value: 87.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (heb-eng) type: mteb/tatoeba-bitext-mining config: heb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 89.3 - type: f1 value: 86.52333333333333 - type: precision value: 85.22833333333332 - type: recall value: 89.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (gla-eng) type: mteb/tatoeba-bitext-mining config: gla-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 65.01809408926418 - type: f1 value: 59.00594446432805 - type: precision value: 56.827215807915444 - type: recall value: 65.01809408926418 - task: type: BitextMining dataset: name: MTEB Tatoeba (mar-eng) type: mteb/tatoeba-bitext-mining config: mar-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 91.2 - type: f1 value: 88.58 - type: precision value: 87.33333333333334 - type: recall value: 91.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (lat-eng) type: mteb/tatoeba-bitext-mining config: lat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 59.199999999999996 - type: f1 value: 53.299166276284915 - type: precision value: 51.3383908045977 - type: recall value: 59.199999999999996 - task: type: BitextMining dataset: name: MTEB Tatoeba (bel-eng) type: mteb/tatoeba-bitext-mining config: bel-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.2 - type: f1 value: 91.2 - type: precision value: 90.25 - type: recall value: 93.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (pms-eng) type: mteb/tatoeba-bitext-mining config: pms-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 64.76190476190476 - type: f1 value: 59.867110667110666 - type: precision value: 58.07390192653351 - type: recall value: 64.76190476190476 - task: type: BitextMining dataset: name: MTEB Tatoeba (gle-eng) type: mteb/tatoeba-bitext-mining config: gle-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 76.2 - type: f1 value: 71.48147546897547 - type: precision value: 69.65409090909091 - type: recall value: 76.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (pes-eng) type: mteb/tatoeba-bitext-mining config: pes-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.8 - type: f1 value: 92.14 - type: precision value: 91.35833333333333 - type: recall value: 93.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (nob-eng) type: mteb/tatoeba-bitext-mining config: nob-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.89999999999999 - type: f1 value: 97.2 - type: precision value: 96.85000000000001 - type: recall value: 97.89999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (bul-eng) type: mteb/tatoeba-bitext-mining config: bul-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.6 - type: f1 value: 92.93333333333334 - type: precision value: 92.13333333333333 - type: recall value: 94.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (cbk-eng) type: mteb/tatoeba-bitext-mining config: cbk-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 74.1 - type: f1 value: 69.14817460317461 - type: precision value: 67.2515873015873 - type: recall value: 74.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (hun-eng) type: mteb/tatoeba-bitext-mining config: hun-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.19999999999999 - type: f1 value: 94.01333333333335 - type: precision value: 93.46666666666667 - type: recall value: 95.19999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (uig-eng) type: mteb/tatoeba-bitext-mining config: uig-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 76.9 - type: f1 value: 72.07523809523809 - type: precision value: 70.19777777777779 - type: recall value: 76.9 - task: type: BitextMining dataset: name: MTEB Tatoeba (rus-eng) type: mteb/tatoeba-bitext-mining config: rus-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.1 - type: f1 value: 92.31666666666666 - type: precision value: 91.43333333333332 - type: recall value: 94.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (spa-eng) type: mteb/tatoeba-bitext-mining config: spa-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.8 - type: f1 value: 97.1 - type: precision value: 96.76666666666668 - type: recall value: 97.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (hye-eng) type: mteb/tatoeba-bitext-mining config: hye-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.85714285714286 - type: f1 value: 90.92093441150045 - type: precision value: 90.00449236298293 - type: recall value: 92.85714285714286 - task: type: BitextMining dataset: name: MTEB Tatoeba (tel-eng) type: mteb/tatoeba-bitext-mining config: tel-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.16239316239316 - type: f1 value: 91.33903133903132 - type: precision value: 90.56267806267806 - type: recall value: 93.16239316239316 - task: type: BitextMining dataset: name: MTEB Tatoeba (afr-eng) type: mteb/tatoeba-bitext-mining config: afr-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.4 - type: f1 value: 90.25666666666666 - type: precision value: 89.25833333333334 - type: recall value: 92.4 - task: type: BitextMining dataset: name: MTEB Tatoeba (mon-eng) type: mteb/tatoeba-bitext-mining config: mon-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 90.22727272727272 - type: f1 value: 87.53030303030303 - type: precision value: 86.37121212121211 - type: recall value: 90.22727272727272 - task: type: BitextMining dataset: name: MTEB Tatoeba (arz-eng) type: mteb/tatoeba-bitext-mining config: arz-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 79.03563941299791 - type: f1 value: 74.7349505840072 - type: precision value: 72.9035639412998 - type: recall value: 79.03563941299791 - task: type: BitextMining dataset: name: MTEB Tatoeba (hrv-eng) type: mteb/tatoeba-bitext-mining config: hrv-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97 - type: f1 value: 96.15 - type: precision value: 95.76666666666668 - type: recall value: 97 - task: type: BitextMining dataset: name: MTEB Tatoeba (nov-eng) type: mteb/tatoeba-bitext-mining config: nov-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 76.26459143968872 - type: f1 value: 71.55642023346303 - type: precision value: 69.7544932369835 - type: recall value: 76.26459143968872 - task: type: BitextMining dataset: name: MTEB Tatoeba (gsw-eng) type: mteb/tatoeba-bitext-mining config: gsw-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 58.119658119658126 - type: f1 value: 51.65242165242165 - type: precision value: 49.41768108434775 - type: recall value: 58.119658119658126 - task: type: BitextMining dataset: name: MTEB Tatoeba (nds-eng) type: mteb/tatoeba-bitext-mining config: nds-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 74.3 - type: f1 value: 69.52055555555555 - type: precision value: 67.7574938949939 - type: recall value: 74.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (ukr-eng) type: mteb/tatoeba-bitext-mining config: ukr-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.8 - type: f1 value: 93.31666666666666 - type: precision value: 92.60000000000001 - type: recall value: 94.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (uzb-eng) type: mteb/tatoeba-bitext-mining config: uzb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 76.63551401869158 - type: f1 value: 72.35202492211837 - type: precision value: 70.60358255451713 - type: recall value: 76.63551401869158 - task: type: BitextMining dataset: name: MTEB Tatoeba (lit-eng) type: mteb/tatoeba-bitext-mining config: lit-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 90.4 - type: f1 value: 88.4811111111111 - type: precision value: 87.7452380952381 - type: recall value: 90.4 - task: type: BitextMining dataset: name: MTEB Tatoeba (ina-eng) type: mteb/tatoeba-bitext-mining config: ina-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95 - type: f1 value: 93.60666666666667 - type: precision value: 92.975 - type: recall value: 95 - task: type: BitextMining dataset: name: MTEB Tatoeba (lfn-eng) type: mteb/tatoeba-bitext-mining config: lfn-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 67.2 - type: f1 value: 63.01595782872099 - type: precision value: 61.596587301587306 - type: recall value: 67.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (zsm-eng) type: mteb/tatoeba-bitext-mining config: zsm-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.7 - type: f1 value: 94.52999999999999 - type: precision value: 94 - type: recall value: 95.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (ita-eng) type: mteb/tatoeba-bitext-mining config: ita-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.6 - type: f1 value: 93.28999999999999 - type: precision value: 92.675 - type: recall value: 94.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (cmn-eng) type: mteb/tatoeba-bitext-mining config: cmn-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.39999999999999 - type: f1 value: 95.28333333333333 - type: precision value: 94.75 - type: recall value: 96.39999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (lvs-eng) type: mteb/tatoeba-bitext-mining config: lvs-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 91.9 - type: f1 value: 89.83 - type: precision value: 88.92 - type: recall value: 91.9 - task: type: BitextMining dataset: name: MTEB Tatoeba (glg-eng) type: mteb/tatoeba-bitext-mining config: glg-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.69999999999999 - type: f1 value: 93.34222222222223 - type: precision value: 92.75416666666668 - type: recall value: 94.69999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (ceb-eng) type: mteb/tatoeba-bitext-mining config: ceb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 60.333333333333336 - type: f1 value: 55.31203703703703 - type: precision value: 53.39971108326371 - type: recall value: 60.333333333333336 - task: type: BitextMining dataset: name: MTEB Tatoeba (bre-eng) type: mteb/tatoeba-bitext-mining config: bre-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 12.9 - type: f1 value: 11.099861903031458 - type: precision value: 10.589187932631877 - type: recall value: 12.9 - task: type: BitextMining dataset: name: MTEB Tatoeba (ben-eng) type: mteb/tatoeba-bitext-mining config: ben-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 86.7 - type: f1 value: 83.0152380952381 - type: precision value: 81.37833333333333 - type: recall value: 86.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (swg-eng) type: mteb/tatoeba-bitext-mining config: swg-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 63.39285714285714 - type: f1 value: 56.832482993197274 - type: precision value: 54.56845238095237 - type: recall value: 63.39285714285714 - task: type: BitextMining dataset: name: MTEB Tatoeba (arq-eng) type: mteb/tatoeba-bitext-mining config: arq-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 48.73765093304062 - type: f1 value: 41.555736920720456 - type: precision value: 39.06874531737319 - type: recall value: 48.73765093304062 - task: type: BitextMining dataset: name: MTEB Tatoeba (kab-eng) type: mteb/tatoeba-bitext-mining config: kab-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 41.099999999999994 - type: f1 value: 36.540165945165946 - type: precision value: 35.05175685425686 - type: recall value: 41.099999999999994 - task: type: BitextMining dataset: name: MTEB Tatoeba (fra-eng) type: mteb/tatoeba-bitext-mining config: fra-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.89999999999999 - type: f1 value: 93.42333333333333 - type: precision value: 92.75833333333333 - type: recall value: 94.89999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (por-eng) type: mteb/tatoeba-bitext-mining config: por-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.89999999999999 - type: f1 value: 93.63333333333334 - type: precision value: 93.01666666666665 - type: recall value: 94.89999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (tat-eng) type: mteb/tatoeba-bitext-mining config: tat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 77.9 - type: f1 value: 73.64833333333334 - type: precision value: 71.90282106782105 - type: recall value: 77.9 - task: type: BitextMining dataset: name: MTEB Tatoeba (oci-eng) type: mteb/tatoeba-bitext-mining config: oci-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 59.4 - type: f1 value: 54.90521367521367 - type: precision value: 53.432840025471606 - type: recall value: 59.4 - task: type: BitextMining dataset: name: MTEB Tatoeba (pol-eng) type: mteb/tatoeba-bitext-mining config: pol-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.39999999999999 - type: f1 value: 96.6 - type: precision value: 96.2 - type: recall value: 97.39999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (war-eng) type: mteb/tatoeba-bitext-mining config: war-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 67.2 - type: f1 value: 62.25926129426129 - type: precision value: 60.408376623376626 - type: recall value: 67.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (aze-eng) type: mteb/tatoeba-bitext-mining config: aze-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 90.2 - type: f1 value: 87.60666666666667 - type: precision value: 86.45277777777778 - type: recall value: 90.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (vie-eng) type: mteb/tatoeba-bitext-mining config: vie-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.7 - type: f1 value: 97 - type: precision value: 96.65 - type: recall value: 97.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (nno-eng) type: mteb/tatoeba-bitext-mining config: nno-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.2 - type: f1 value: 91.39746031746031 - type: precision value: 90.6125 - type: recall value: 93.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (cha-eng) type: mteb/tatoeba-bitext-mining config: cha-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 32.11678832116788 - type: f1 value: 27.210415386260234 - type: precision value: 26.20408990846947 - type: recall value: 32.11678832116788 - task: type: BitextMining dataset: name: MTEB Tatoeba (mhr-eng) type: mteb/tatoeba-bitext-mining config: mhr-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 8.5 - type: f1 value: 6.787319277832475 - type: precision value: 6.3452094433344435 - type: recall value: 8.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (dan-eng) type: mteb/tatoeba-bitext-mining config: dan-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.1 - type: f1 value: 95.08 - type: precision value: 94.61666666666667 - type: recall value: 96.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (ell-eng) type: mteb/tatoeba-bitext-mining config: ell-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.3 - type: f1 value: 93.88333333333333 - type: precision value: 93.18333333333332 - type: recall value: 95.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (amh-eng) type: mteb/tatoeba-bitext-mining config: amh-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 85.11904761904762 - type: f1 value: 80.69444444444444 - type: precision value: 78.72023809523809 - type: recall value: 85.11904761904762 - task: type: BitextMining dataset: name: MTEB Tatoeba (pam-eng) type: mteb/tatoeba-bitext-mining config: pam-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 11.1 - type: f1 value: 9.276381801735853 - type: precision value: 8.798174603174601 - type: recall value: 11.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (hsb-eng) type: mteb/tatoeba-bitext-mining config: hsb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 63.56107660455487 - type: f1 value: 58.70433569191332 - type: precision value: 56.896926581464015 - type: recall value: 63.56107660455487 - task: type: BitextMining dataset: name: MTEB Tatoeba (srp-eng) type: mteb/tatoeba-bitext-mining config: srp-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.69999999999999 - type: f1 value: 93.10000000000001 - type: precision value: 92.35 - type: recall value: 94.69999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (epo-eng) type: mteb/tatoeba-bitext-mining config: epo-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.8 - type: f1 value: 96.01222222222222 - type: precision value: 95.67083333333332 - type: recall value: 96.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (kzj-eng) type: mteb/tatoeba-bitext-mining config: kzj-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 9.2 - type: f1 value: 7.911555250305249 - type: precision value: 7.631246556216846 - type: recall value: 9.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (awa-eng) type: mteb/tatoeba-bitext-mining config: awa-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 77.48917748917748 - type: f1 value: 72.27375798804371 - type: precision value: 70.14430014430013 - type: recall value: 77.48917748917748 - task: type: BitextMining dataset: name: MTEB Tatoeba (fao-eng) type: mteb/tatoeba-bitext-mining config: fao-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 77.09923664122137 - type: f1 value: 72.61541257724463 - type: precision value: 70.8998380754106 - type: recall value: 77.09923664122137 - task: type: BitextMining dataset: name: MTEB Tatoeba (mal-eng) type: mteb/tatoeba-bitext-mining config: mal-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 98.2532751091703 - type: f1 value: 97.69529354682193 - type: precision value: 97.42843279961184 - type: recall value: 98.2532751091703 - task: type: BitextMining dataset: name: MTEB Tatoeba (ile-eng) type: mteb/tatoeba-bitext-mining config: ile-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 82.8 - type: f1 value: 79.14672619047619 - type: precision value: 77.59489247311828 - type: recall value: 82.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (bos-eng) type: mteb/tatoeba-bitext-mining config: bos-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.35028248587571 - type: f1 value: 92.86252354048965 - type: precision value: 92.2080979284369 - type: recall value: 94.35028248587571 - task: type: BitextMining dataset: name: MTEB Tatoeba (cor-eng) type: mteb/tatoeba-bitext-mining config: cor-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 8.5 - type: f1 value: 6.282429263935621 - type: precision value: 5.783274240739785 - type: recall value: 8.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (cat-eng) type: mteb/tatoeba-bitext-mining config: cat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.7 - type: f1 value: 91.025 - type: precision value: 90.30428571428571 - type: recall value: 92.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (eus-eng) type: mteb/tatoeba-bitext-mining config: eus-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 81 - type: f1 value: 77.8232380952381 - type: precision value: 76.60194444444444 - type: recall value: 81 - task: type: BitextMining dataset: name: MTEB Tatoeba (yue-eng) type: mteb/tatoeba-bitext-mining config: yue-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 91 - type: f1 value: 88.70857142857142 - type: precision value: 87.7 - type: recall value: 91 - task: type: BitextMining dataset: name: MTEB Tatoeba (swe-eng) type: mteb/tatoeba-bitext-mining config: swe-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.39999999999999 - type: f1 value: 95.3 - type: precision value: 94.76666666666667 - type: recall value: 96.39999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (dtp-eng) type: mteb/tatoeba-bitext-mining config: dtp-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 8.1 - type: f1 value: 7.001008218834307 - type: precision value: 6.708329562594269 - type: recall value: 8.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (kat-eng) type: mteb/tatoeba-bitext-mining config: kat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 87.1313672922252 - type: f1 value: 84.09070598748882 - type: precision value: 82.79171454104429 - type: recall value: 87.1313672922252 - task: type: BitextMining dataset: name: MTEB Tatoeba (jpn-eng) type: mteb/tatoeba-bitext-mining config: jpn-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.39999999999999 - type: f1 value: 95.28333333333333 - type: precision value: 94.73333333333332 - type: recall value: 96.39999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (csb-eng) type: mteb/tatoeba-bitext-mining config: csb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 42.29249011857708 - type: f1 value: 36.981018542283365 - type: precision value: 35.415877813576024 - type: recall value: 42.29249011857708 - task: type: BitextMining dataset: name: MTEB Tatoeba (xho-eng) type: mteb/tatoeba-bitext-mining config: xho-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 83.80281690140845 - type: f1 value: 80.86854460093896 - type: precision value: 79.60093896713614 - type: recall value: 83.80281690140845 - task: type: BitextMining dataset: name: MTEB Tatoeba (orv-eng) type: mteb/tatoeba-bitext-mining config: orv-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 45.26946107784431 - type: f1 value: 39.80235464678088 - type: precision value: 38.14342660001342 - type: recall value: 45.26946107784431 - task: type: BitextMining dataset: name: MTEB Tatoeba (ind-eng) type: mteb/tatoeba-bitext-mining config: ind-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.3 - type: f1 value: 92.9 - type: precision value: 92.26666666666668 - type: recall value: 94.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (tuk-eng) type: mteb/tatoeba-bitext-mining config: tuk-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 37.93103448275862 - type: f1 value: 33.15192743764172 - type: precision value: 31.57456528146183 - type: recall value: 37.93103448275862 - task: type: BitextMining dataset: name: MTEB Tatoeba (max-eng) type: mteb/tatoeba-bitext-mining config: max-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 69.01408450704226 - type: f1 value: 63.41549295774648 - type: precision value: 61.342778895595806 - type: recall value: 69.01408450704226 - task: type: BitextMining dataset: name: MTEB Tatoeba (swh-eng) type: mteb/tatoeba-bitext-mining config: swh-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 76.66666666666667 - type: f1 value: 71.60705960705961 - type: precision value: 69.60683760683762 - type: recall value: 76.66666666666667 - task: type: BitextMining dataset: name: MTEB Tatoeba (hin-eng) type: mteb/tatoeba-bitext-mining config: hin-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.8 - type: f1 value: 94.48333333333333 - type: precision value: 93.83333333333333 - type: recall value: 95.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (dsb-eng) type: mteb/tatoeba-bitext-mining config: dsb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 52.81837160751566 - type: f1 value: 48.435977731384824 - type: precision value: 47.11291973845539 - type: recall value: 52.81837160751566 - task: type: BitextMining dataset: name: MTEB Tatoeba (ber-eng) type: mteb/tatoeba-bitext-mining config: ber-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 44.9 - type: f1 value: 38.88962621607783 - type: precision value: 36.95936507936508 - type: recall value: 44.9 - task: type: BitextMining dataset: name: MTEB Tatoeba (tam-eng) type: mteb/tatoeba-bitext-mining config: tam-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 90.55374592833876 - type: f1 value: 88.22553125484721 - type: precision value: 87.26927252985884 - type: recall value: 90.55374592833876 - task: type: BitextMining dataset: name: MTEB Tatoeba (slk-eng) type: mteb/tatoeba-bitext-mining config: slk-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.6 - type: f1 value: 93.13333333333333 - type: precision value: 92.45333333333333 - type: recall value: 94.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (tgl-eng) type: mteb/tatoeba-bitext-mining config: tgl-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.7 - type: f1 value: 91.99666666666667 - type: precision value: 91.26666666666668 - type: recall value: 93.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (ast-eng) type: mteb/tatoeba-bitext-mining config: ast-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 85.03937007874016 - type: f1 value: 81.75853018372703 - type: precision value: 80.34120734908137 - type: recall value: 85.03937007874016 - task: type: BitextMining dataset: name: MTEB Tatoeba (mkd-eng) type: mteb/tatoeba-bitext-mining config: mkd-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 88.3 - type: f1 value: 85.5 - type: precision value: 84.25833333333334 - type: recall value: 88.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (khm-eng) type: mteb/tatoeba-bitext-mining config: khm-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 65.51246537396122 - type: f1 value: 60.02297410192148 - type: precision value: 58.133467727289236 - type: recall value: 65.51246537396122 - task: type: BitextMining dataset: name: MTEB Tatoeba (ces-eng) type: mteb/tatoeba-bitext-mining config: ces-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96 - type: f1 value: 94.89 - type: precision value: 94.39166666666667 - type: recall value: 96 - task: type: BitextMining dataset: name: MTEB Tatoeba (tzl-eng) type: mteb/tatoeba-bitext-mining config: tzl-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 57.692307692307686 - type: f1 value: 53.162393162393165 - type: precision value: 51.70673076923077 - type: recall value: 57.692307692307686 - task: type: BitextMining dataset: name: MTEB Tatoeba (urd-eng) type: mteb/tatoeba-bitext-mining config: urd-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 91.60000000000001 - type: f1 value: 89.21190476190475 - type: precision value: 88.08666666666667 - type: recall value: 91.60000000000001 - task: type: BitextMining dataset: name: MTEB Tatoeba (ara-eng) type: mteb/tatoeba-bitext-mining config: ara-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 88 - type: f1 value: 85.47 - type: precision value: 84.43266233766234 - type: recall value: 88 - task: type: BitextMining dataset: name: MTEB Tatoeba (kor-eng) type: mteb/tatoeba-bitext-mining config: kor-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.7 - type: f1 value: 90.64999999999999 - type: precision value: 89.68333333333332 - type: recall value: 92.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (yid-eng) type: mteb/tatoeba-bitext-mining config: yid-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 80.30660377358491 - type: f1 value: 76.33044137466307 - type: precision value: 74.78970125786164 - type: recall value: 80.30660377358491 - task: type: BitextMining dataset: name: MTEB Tatoeba (fin-eng) type: mteb/tatoeba-bitext-mining config: fin-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.39999999999999 - type: f1 value: 95.44 - type: precision value: 94.99166666666666 - type: recall value: 96.39999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (tha-eng) type: mteb/tatoeba-bitext-mining config: tha-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.53284671532847 - type: f1 value: 95.37712895377129 - type: precision value: 94.7992700729927 - type: recall value: 96.53284671532847 - task: type: BitextMining dataset: name: MTEB Tatoeba (wuu-eng) type: mteb/tatoeba-bitext-mining config: wuu-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 89 - type: f1 value: 86.23190476190476 - type: precision value: 85.035 - type: recall value: 89 - task: type: Retrieval dataset: name: MTEB Touche2020 type: webis-touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 2.585 - type: map_at_10 value: 9.012 - type: map_at_100 value: 14.027000000000001 - type: map_at_1000 value: 15.565000000000001 - type: map_at_3 value: 5.032 - type: map_at_5 value: 6.657 - type: mrr_at_1 value: 28.571 - type: mrr_at_10 value: 45.377 - type: mrr_at_100 value: 46.119 - type: mrr_at_1000 value: 46.127 - type: mrr_at_3 value: 41.156 - type: mrr_at_5 value: 42.585 - type: ndcg_at_1 value: 27.551 - type: ndcg_at_10 value: 23.395 - type: ndcg_at_100 value: 33.342 - type: ndcg_at_1000 value: 45.523 - type: ndcg_at_3 value: 25.158 - type: ndcg_at_5 value: 23.427 - type: precision_at_1 value: 28.571 - type: precision_at_10 value: 21.429000000000002 - type: precision_at_100 value: 6.714 - type: precision_at_1000 value: 1.473 - type: precision_at_3 value: 27.211000000000002 - type: precision_at_5 value: 24.490000000000002 - type: recall_at_1 value: 2.585 - type: recall_at_10 value: 15.418999999999999 - type: recall_at_100 value: 42.485 - type: recall_at_1000 value: 79.536 - type: recall_at_3 value: 6.239999999999999 - type: recall_at_5 value: 8.996 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 71.3234 - type: ap value: 14.361688653847423 - type: f1 value: 54.819068624319044 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 61.97792869269949 - type: f1 value: 62.28965628513728 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 38.90540145385218 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 86.53513739047506 - type: cos_sim_ap value: 75.27741586677557 - type: cos_sim_f1 value: 69.18792902473774 - type: cos_sim_precision value: 67.94708725515136 - type: cos_sim_recall value: 70.47493403693932 - type: dot_accuracy value: 84.7052512368123 - type: dot_ap value: 69.36075482849378 - type: dot_f1 value: 64.44688376631296 - type: dot_precision value: 59.92288500793831 - type: dot_recall value: 69.70976253298153 - type: euclidean_accuracy value: 86.60666388508076 - type: euclidean_ap value: 75.47512772621097 - type: euclidean_f1 value: 69.413872536473 - type: euclidean_precision value: 67.39562624254472 - type: euclidean_recall value: 71.55672823218997 - type: manhattan_accuracy value: 86.52917684925792 - type: manhattan_ap value: 75.34000110496703 - type: manhattan_f1 value: 69.28489190226429 - type: manhattan_precision value: 67.24608889992551 - type: manhattan_recall value: 71.45118733509234 - type: max_accuracy value: 86.60666388508076 - type: max_ap value: 75.47512772621097 - type: max_f1 value: 69.413872536473 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 89.01695967710637 - type: cos_sim_ap value: 85.8298270742901 - type: cos_sim_f1 value: 78.46988128389272 - type: cos_sim_precision value: 74.86017897091722 - type: cos_sim_recall value: 82.44533415460425 - type: dot_accuracy value: 88.19420188613343 - type: dot_ap value: 83.82679165901324 - type: dot_f1 value: 76.55833777304208 - type: dot_precision value: 75.6884875846501 - type: dot_recall value: 77.44841392054204 - type: euclidean_accuracy value: 89.03054294252338 - type: euclidean_ap value: 85.89089555185325 - type: euclidean_f1 value: 78.62997658079624 - type: euclidean_precision value: 74.92329149232914 - type: euclidean_recall value: 82.72251308900523 - type: manhattan_accuracy value: 89.0266620095471 - type: manhattan_ap value: 85.86458997929147 - type: manhattan_f1 value: 78.50685331000291 - type: manhattan_precision value: 74.5499861534201 - type: manhattan_recall value: 82.90729904527257 - type: max_accuracy value: 89.03054294252338 - type: max_ap value: 85.89089555185325 - type: max_f1 value: 78.62997658079624 --- # Zenabius/multilingual-e5-large-Q8_0-GGUF This model was converted to GGUF format from [`intfloat/multilingual-e5-large`](https://huggingface.co/intfloat/multilingual-e5-large) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/intfloat/multilingual-e5-large) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Zenabius/multilingual-e5-large-Q8_0-GGUF --hf-file multilingual-e5-large-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Zenabius/multilingual-e5-large-Q8_0-GGUF --hf-file multilingual-e5-large-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Zenabius/multilingual-e5-large-Q8_0-GGUF --hf-file multilingual-e5-large-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Zenabius/multilingual-e5-large-Q8_0-GGUF --hf-file multilingual-e5-large-q8_0.gguf -c 2048 ```
[ "BIOSSES", "SCIFACT" ]
asjoberg/openELM-270M-instruct-raw
asjoberg
null
[ "safetensors", "openelm", "custom_code", "arxiv:2404.14619", "license:other", "region:us" ]
2025-02-10T22:20:52Z
2025-02-10T22:24:32+00:00
14
0
--- license: other license_name: apple-sample-code-license license_link: LICENSE --- # OpenELM *Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari* We introduce **OpenELM**, a family of **Open** **E**fficient **L**anguage **M**odels. OpenELM uses a layer-wise scaling strategy to efficiently allocate parameters within each layer of the transformer model, leading to enhanced accuracy. We pretrained OpenELM models using the [CoreNet](https://github.com/apple/corenet) library. We release both pretrained and instruction tuned models with 270M, 450M, 1.1B and 3B parameters. We release the complete framework, encompassing data preparation, training, fine-tuning, and evaluation procedures, alongside multiple pre-trained checkpoints and training logs, to facilitate open research. Our pre-training dataset contains RefinedWeb, deduplicated PILE, a subset of RedPajama, and a subset of Dolma v1.6, totaling approximately 1.8 trillion tokens. Please check license agreements and terms of these datasets before using them. ## Usage We have provided an example function to generate output from OpenELM models loaded via [HuggingFace Hub](https://huggingface.co/docs/hub/) in `generate_openelm.py`. You can try the model by running the following command: ``` python generate_openelm.py --model apple/OpenELM-450M-Instruct --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2 ``` Please refer to [this link](https://huggingface.co/docs/hub/security-tokens) to obtain your hugging face access token. Additional arguments to the hugging face generate function can be passed via `generate_kwargs`. As an example, to speedup the inference, you can try [lookup token speculative generation](https://huggingface.co/docs/transformers/generation_strategies) by passing the `prompt_lookup_num_tokens` argument as follows: ``` python generate_openelm.py --model apple/OpenELM-450M-Instruct --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2 prompt_lookup_num_tokens=10 ``` Alternatively, try model-wise speculative generation with an [assistive model](https://huggingface.co/blog/assisted-generation) by passing a smaller model through the `assistant_model` argument, for example: ``` python generate_openelm.py --model apple/OpenELM-450M-Instruct --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2 --assistant_model [SMALLER_MODEL] ``` ## Main Results ### Zero-Shot | **Model Size** | **ARC-c** | **ARC-e** | **BoolQ** | **HellaSwag** | **PIQA** | **SciQ** | **WinoGrande** | **Average** | |-----------------------------------------------------------------------------|-----------|-----------|-----------|---------------|-----------|-----------|----------------|-------------| | [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 26.45 | 45.08 | **53.98** | 46.71 | 69.75 | **84.70** | **53.91** | 54.37 | | [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **30.55** | **46.68** | 48.56 | **52.07** | **70.78** | 84.40 | 52.72 | **55.11** | | [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 27.56 | 48.06 | 55.78 | 53.97 | 72.31 | 87.20 | 58.01 | 57.56 | | [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **30.38** | **50.00** | **60.37** | **59.34** | **72.63** | **88.00** | **58.96** | **59.95** | | [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 32.34 | **55.43** | 63.58 | 64.81 | **75.57** | **90.60** | 61.72 | 63.44 | | [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **37.97** | 52.23 | **70.00** | **71.20** | 75.03 | 89.30 | **62.75** | **65.50** | | [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 35.58 | 59.89 | 67.40 | 72.44 | 78.24 | **92.70** | 65.51 | 67.39 | | [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **39.42** | **61.74** | **68.17** | **76.36** | **79.00** | 92.50 | **66.85** | **69.15** | ### LLM360 | **Model Size** | **ARC-c** | **HellaSwag** | **MMLU** | **TruthfulQA** | **WinoGrande** | **Average** | |-----------------------------------------------------------------------------|-----------|---------------|-----------|----------------|----------------|-------------| | [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 27.65 | 47.15 | 25.72 | **39.24** | **53.83** | 38.72 | | [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **32.51** | **51.58** | **26.70** | 38.72 | 53.20 | **40.54** | | [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 30.20 | 53.86 | **26.01** | 40.18 | 57.22 | 41.50 | | [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **33.53** | **59.31** | 25.41 | **40.48** | **58.33** | **43.41** | | [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 36.69 | 65.71 | **27.05** | 36.98 | 63.22 | 45.93 | | [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **41.55** | **71.83** | 25.65 | **45.95** | **64.72** | **49.94** | | [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 42.24 | 73.28 | **26.76** | 34.98 | 67.25 | 48.90 | | [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **47.70** | **76.87** | 24.80 | **38.76** | **67.96** | **51.22** | ### OpenLLM Leaderboard | **Model Size** | **ARC-c** | **CrowS-Pairs** | **HellaSwag** | **MMLU** | **PIQA** | **RACE** | **TruthfulQA** | **WinoGrande** | **Average** | |-----------------------------------------------------------------------------|-----------|-----------------|---------------|-----------|-----------|-----------|----------------|----------------|-------------| | [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 27.65 | **66.79** | 47.15 | 25.72 | 69.75 | 30.91 | **39.24** | **53.83** | 45.13 | | [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **32.51** | 66.01 | **51.58** | **26.70** | **70.78** | 33.78 | 38.72 | 53.20 | **46.66** | | [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 30.20 | **68.63** | 53.86 | **26.01** | 72.31 | 33.11 | 40.18 | 57.22 | 47.69 | | [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **33.53** | 67.44 | **59.31** | 25.41 | **72.63** | **36.84** | **40.48** | **58.33** | **49.25** | | [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 36.69 | **71.74** | 65.71 | **27.05** | **75.57** | 36.46 | 36.98 | 63.22 | 51.68 | | [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **41.55** | 71.02 | **71.83** | 25.65 | 75.03 | **39.43** | **45.95** | **64.72** | **54.40** | | [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 42.24 | **73.29** | 73.28 | **26.76** | 78.24 | **38.76** | 34.98 | 67.25 | 54.35 | | [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **47.70** | 72.33 | **76.87** | 24.80 | **79.00** | 38.47 | **38.76** | **67.96** | **55.73** | See the technical report for more results and comparison. ## Evaluation ### Setup Install the following dependencies: ```bash # install public lm-eval-harness harness_repo="public-lm-eval-harness" git clone https://github.com/EleutherAI/lm-evaluation-harness ${harness_repo} cd ${harness_repo} # use main branch on 03-15-2024, SHA is dc90fec git checkout dc90fec pip install -e . cd .. # 66d6242 is the main branch on 2024-04-01 pip install datasets@git+https://github.com/huggingface/datasets.git@66d6242 pip install tokenizers>=0.15.2 transformers>=4.38.2 sentencepiece>=0.2.0 ``` ### Evaluate OpenELM ```bash # OpenELM-450M-Instruct hf_model=apple/OpenELM-450M-Instruct # this flag is needed because lm-eval-harness set add_bos_token to False by default, but OpenELM uses LLaMA tokenizer which requires add_bos_token to be True tokenizer=meta-llama/Llama-2-7b-hf add_bos_token=True batch_size=1 mkdir lm_eval_output shot=0 task=arc_challenge,arc_easy,boolq,hellaswag,piqa,race,winogrande,sciq,truthfulqa_mc2 lm_eval --model hf \ --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \ --tasks ${task} \ --device cuda:0 \ --num_fewshot ${shot} \ --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \ --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log shot=5 task=mmlu,winogrande lm_eval --model hf \ --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \ --tasks ${task} \ --device cuda:0 \ --num_fewshot ${shot} \ --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \ --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log shot=25 task=arc_challenge,crows_pairs_english lm_eval --model hf \ --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \ --tasks ${task} \ --device cuda:0 \ --num_fewshot ${shot} \ --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \ --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log shot=10 task=hellaswag lm_eval --model hf \ --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token},tokenizer=${tokenizer} \ --tasks ${task} \ --device cuda:0 \ --num_fewshot ${shot} \ --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \ --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log ``` ## Bias, Risks, and Limitations The release of OpenELM models aims to empower and enrich the open research community by providing access to state-of-the-art language models. Trained on publicly available datasets, these models are made available without any safety guarantees. Consequently, there exists the possibility of these models producing outputs that are inaccurate, harmful, biased, or objectionable in response to user prompts. Thus, it is imperative for users and developers to undertake thorough safety testing and implement appropriate filtering mechanisms tailored to their specific requirements. ## Citation If you find our work useful, please cite: ```BibTex @article{mehtaOpenELMEfficientLanguage2024, title = {{OpenELM}: {An} {Efficient} {Language} {Model} {Family} with {Open} {Training} and {Inference} {Framework}}, shorttitle = {{OpenELM}}, url = {https://arxiv.org/abs/2404.14619v1}, language = {en}, urldate = {2024-04-24}, journal = {arXiv.org}, author = {Mehta, Sachin and Sekhavat, Mohammad Hossein and Cao, Qingqing and Horton, Maxwell and Jin, Yanzi and Sun, Chenfan and Mirzadeh, Iman and Najibi, Mahyar and Belenko, Dmitry and Zatloukal, Peter and Rastegari, Mohammad}, month = apr, year = {2024}, } @inproceedings{mehta2022cvnets, author = {Mehta, Sachin and Abdolhosseini, Farzad and Rastegari, Mohammad}, title = {CVNets: High Performance Library for Computer Vision}, year = {2022}, booktitle = {Proceedings of the 30th ACM International Conference on Multimedia}, series = {MM '22} } ```
[ "SCIQ" ]
Teradata/multilingual-e5-large
Teradata
feature-extraction
[ "onnx", "mteb", "sentence-similarity", "feature-extraction", "teradata", "multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "hu", "hy", "id", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "om", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sa", "sd", "si", "sk", "sl", "so", "sq", "sr", "su", "sv", "sw", "ta", "te", "th", "tl", "tr", "ug", "uk", "ur", "uz", "vi", "xh", "yi", "zh", "license:mit", "model-index", "region:us" ]
2025-02-12T17:37:05Z
2025-03-04T09:43:17+00:00
14
0
--- language: - multilingual - af - am - ar - as - az - be - bg - bn - br - bs - ca - cs - cy - da - de - el - en - eo - es - et - eu - fa - fi - fr - fy - ga - gd - gl - gu - ha - he - hi - hr - hu - hy - id - is - it - ja - jv - ka - kk - km - kn - ko - ku - ky - la - lo - lt - lv - mg - mk - ml - mn - mr - ms - my - ne - nl - 'no' - om - or - pa - pl - ps - pt - ro - ru - sa - sd - si - sk - sl - so - sq - sr - su - sv - sw - ta - te - th - tl - tr - ug - uk - ur - uz - vi - xh - yi - zh license: mit tags: - mteb - sentence-similarity - feature-extraction - onnx - teradata model-index: - name: multilingual-e5-large results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 79.05970149253731 - type: ap value: 43.486574390835635 - type: f1 value: 73.32700092140148 - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (de) type: mteb/amazon_counterfactual config: de split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 71.22055674518201 - type: ap value: 81.55756710830498 - type: f1 value: 69.28271787752661 - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en-ext) type: mteb/amazon_counterfactual config: en-ext split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 80.41979010494754 - type: ap value: 29.34879922376344 - type: f1 value: 67.62475449011278 - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (ja) type: mteb/amazon_counterfactual config: ja split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 77.8372591006424 - type: ap value: 26.557560591210738 - type: f1 value: 64.96619417368707 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 93.489875 - type: ap value: 90.98758636917603 - type: f1 value: 93.48554819717332 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 47.564 - type: f1 value: 46.75122173518047 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (de) type: mteb/amazon_reviews_multi config: de split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 45.400000000000006 - type: f1 value: 44.17195682400632 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (es) type: mteb/amazon_reviews_multi config: es split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 43.068 - type: f1 value: 42.38155696855596 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (fr) type: mteb/amazon_reviews_multi config: fr split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 41.89 - type: f1 value: 40.84407321682663 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (ja) type: mteb/amazon_reviews_multi config: ja split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 40.120000000000005 - type: f1 value: 39.522976223819114 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (zh) type: mteb/amazon_reviews_multi config: zh split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 38.832 - type: f1 value: 38.0392533394713 - task: type: Retrieval dataset: name: MTEB ArguAna type: arguana config: default split: test revision: None metrics: - type: map_at_1 value: 30.725 - type: map_at_10 value: 46.055 - type: map_at_100 value: 46.900999999999996 - type: map_at_1000 value: 46.911 - type: map_at_3 value: 41.548 - type: map_at_5 value: 44.297 - type: mrr_at_1 value: 31.152 - type: mrr_at_10 value: 46.231 - type: mrr_at_100 value: 47.07 - type: mrr_at_1000 value: 47.08 - type: mrr_at_3 value: 41.738 - type: mrr_at_5 value: 44.468999999999994 - type: ndcg_at_1 value: 30.725 - type: ndcg_at_10 value: 54.379999999999995 - type: ndcg_at_100 value: 58.138 - type: ndcg_at_1000 value: 58.389 - type: ndcg_at_3 value: 45.156 - type: ndcg_at_5 value: 50.123 - type: precision_at_1 value: 30.725 - type: precision_at_10 value: 8.087 - type: precision_at_100 value: 0.9769999999999999 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 18.54 - type: precision_at_5 value: 13.542000000000002 - type: recall_at_1 value: 30.725 - type: recall_at_10 value: 80.868 - type: recall_at_100 value: 97.653 - type: recall_at_1000 value: 99.57300000000001 - type: recall_at_3 value: 55.619 - type: recall_at_5 value: 67.71000000000001 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 44.30960650674069 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 38.427074197498996 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 60.28270056031872 - type: mrr value: 74.38332673789738 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 84.05942144105269 - type: cos_sim_spearman value: 82.51212105850809 - type: euclidean_pearson value: 81.95639829909122 - type: euclidean_spearman value: 82.3717564144213 - type: manhattan_pearson value: 81.79273425468256 - type: manhattan_spearman value: 82.20066817871039 - task: type: BitextMining dataset: name: MTEB BUCC (de-en) type: mteb/bucc-bitext-mining config: de-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 99.46764091858039 - type: f1 value: 99.37717466945023 - type: precision value: 99.33194154488518 - type: recall value: 99.46764091858039 - task: type: BitextMining dataset: name: MTEB BUCC (fr-en) type: mteb/bucc-bitext-mining config: fr-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 98.29407880255337 - type: f1 value: 98.11248073959938 - type: precision value: 98.02443319392472 - type: recall value: 98.29407880255337 - task: type: BitextMining dataset: name: MTEB BUCC (ru-en) type: mteb/bucc-bitext-mining config: ru-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 97.79009352268791 - type: f1 value: 97.5176076665512 - type: precision value: 97.38136473848286 - type: recall value: 97.79009352268791 - task: type: BitextMining dataset: name: MTEB BUCC (zh-en) type: mteb/bucc-bitext-mining config: zh-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 99.26276987888363 - type: f1 value: 99.20133403545726 - type: precision value: 99.17500438827453 - type: recall value: 99.26276987888363 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 84.72727272727273 - type: f1 value: 84.67672206031433 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 35.34220182511161 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 33.4987096128766 - task: type: Retrieval dataset: name: MTEB CQADupstackRetrieval type: BeIR/cqadupstack config: default split: test revision: None metrics: - type: map_at_1 value: 25.558249999999997 - type: map_at_10 value: 34.44425000000001 - type: map_at_100 value: 35.59833333333333 - type: map_at_1000 value: 35.706916666666665 - type: map_at_3 value: 31.691749999999995 - type: map_at_5 value: 33.252916666666664 - type: mrr_at_1 value: 30.252666666666666 - type: mrr_at_10 value: 38.60675 - type: mrr_at_100 value: 39.42666666666666 - type: mrr_at_1000 value: 39.48408333333334 - type: mrr_at_3 value: 36.17441666666665 - type: mrr_at_5 value: 37.56275 - type: ndcg_at_1 value: 30.252666666666666 - type: ndcg_at_10 value: 39.683 - type: ndcg_at_100 value: 44.68541666666667 - type: ndcg_at_1000 value: 46.94316666666668 - type: ndcg_at_3 value: 34.961749999999995 - type: ndcg_at_5 value: 37.215666666666664 - type: precision_at_1 value: 30.252666666666666 - type: precision_at_10 value: 6.904166666666667 - type: precision_at_100 value: 1.0989999999999995 - type: precision_at_1000 value: 0.14733333333333334 - type: precision_at_3 value: 16.037666666666667 - type: precision_at_5 value: 11.413583333333333 - type: recall_at_1 value: 25.558249999999997 - type: recall_at_10 value: 51.13341666666666 - type: recall_at_100 value: 73.08366666666667 - type: recall_at_1000 value: 88.79483333333334 - type: recall_at_3 value: 37.989083333333326 - type: recall_at_5 value: 43.787833333333325 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: climate-fever config: default split: test revision: None metrics: - type: map_at_1 value: 10.338 - type: map_at_10 value: 18.360000000000003 - type: map_at_100 value: 19.942 - type: map_at_1000 value: 20.134 - type: map_at_3 value: 15.174000000000001 - type: map_at_5 value: 16.830000000000002 - type: mrr_at_1 value: 23.257 - type: mrr_at_10 value: 33.768 - type: mrr_at_100 value: 34.707 - type: mrr_at_1000 value: 34.766000000000005 - type: mrr_at_3 value: 30.977 - type: mrr_at_5 value: 32.528 - type: ndcg_at_1 value: 23.257 - type: ndcg_at_10 value: 25.733 - type: ndcg_at_100 value: 32.288 - type: ndcg_at_1000 value: 35.992000000000004 - type: ndcg_at_3 value: 20.866 - type: ndcg_at_5 value: 22.612 - type: precision_at_1 value: 23.257 - type: precision_at_10 value: 8.124 - type: precision_at_100 value: 1.518 - type: precision_at_1000 value: 0.219 - type: precision_at_3 value: 15.679000000000002 - type: precision_at_5 value: 12.117 - type: recall_at_1 value: 10.338 - type: recall_at_10 value: 31.154 - type: recall_at_100 value: 54.161 - type: recall_at_1000 value: 75.21900000000001 - type: recall_at_3 value: 19.427 - type: recall_at_5 value: 24.214 - task: type: Retrieval dataset: name: MTEB DBPedia type: dbpedia-entity config: default split: test revision: None metrics: - type: map_at_1 value: 8.498 - type: map_at_10 value: 19.103 - type: map_at_100 value: 27.375 - type: map_at_1000 value: 28.981 - type: map_at_3 value: 13.764999999999999 - type: map_at_5 value: 15.950000000000001 - type: mrr_at_1 value: 65.5 - type: mrr_at_10 value: 74.53800000000001 - type: mrr_at_100 value: 74.71799999999999 - type: mrr_at_1000 value: 74.725 - type: mrr_at_3 value: 72.792 - type: mrr_at_5 value: 73.554 - type: ndcg_at_1 value: 53.37499999999999 - type: ndcg_at_10 value: 41.286 - type: ndcg_at_100 value: 45.972 - type: ndcg_at_1000 value: 53.123 - type: ndcg_at_3 value: 46.172999999999995 - type: ndcg_at_5 value: 43.033 - type: precision_at_1 value: 65.5 - type: precision_at_10 value: 32.725 - type: precision_at_100 value: 10.683 - type: precision_at_1000 value: 1.978 - type: precision_at_3 value: 50 - type: precision_at_5 value: 41.349999999999994 - type: recall_at_1 value: 8.498 - type: recall_at_10 value: 25.070999999999998 - type: recall_at_100 value: 52.383 - type: recall_at_1000 value: 74.91499999999999 - type: recall_at_3 value: 15.207999999999998 - type: recall_at_5 value: 18.563 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 46.5 - type: f1 value: 41.93833713984145 - task: type: Retrieval dataset: name: MTEB FEVER type: fever config: default split: test revision: None metrics: - type: map_at_1 value: 67.914 - type: map_at_10 value: 78.10000000000001 - type: map_at_100 value: 78.333 - type: map_at_1000 value: 78.346 - type: map_at_3 value: 76.626 - type: map_at_5 value: 77.627 - type: mrr_at_1 value: 72.74199999999999 - type: mrr_at_10 value: 82.414 - type: mrr_at_100 value: 82.511 - type: mrr_at_1000 value: 82.513 - type: mrr_at_3 value: 81.231 - type: mrr_at_5 value: 82.065 - type: ndcg_at_1 value: 72.74199999999999 - type: ndcg_at_10 value: 82.806 - type: ndcg_at_100 value: 83.677 - type: ndcg_at_1000 value: 83.917 - type: ndcg_at_3 value: 80.305 - type: ndcg_at_5 value: 81.843 - type: precision_at_1 value: 72.74199999999999 - type: precision_at_10 value: 10.24 - type: precision_at_100 value: 1.089 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 31.268 - type: precision_at_5 value: 19.706000000000003 - type: recall_at_1 value: 67.914 - type: recall_at_10 value: 92.889 - type: recall_at_100 value: 96.42699999999999 - type: recall_at_1000 value: 97.92 - type: recall_at_3 value: 86.21 - type: recall_at_5 value: 90.036 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: fiqa config: default split: test revision: None metrics: - type: map_at_1 value: 22.166 - type: map_at_10 value: 35.57 - type: map_at_100 value: 37.405 - type: map_at_1000 value: 37.564 - type: map_at_3 value: 30.379 - type: map_at_5 value: 33.324 - type: mrr_at_1 value: 43.519000000000005 - type: mrr_at_10 value: 51.556000000000004 - type: mrr_at_100 value: 52.344 - type: mrr_at_1000 value: 52.373999999999995 - type: mrr_at_3 value: 48.868 - type: mrr_at_5 value: 50.319 - type: ndcg_at_1 value: 43.519000000000005 - type: ndcg_at_10 value: 43.803 - type: ndcg_at_100 value: 50.468999999999994 - type: ndcg_at_1000 value: 53.111 - type: ndcg_at_3 value: 38.893 - type: ndcg_at_5 value: 40.653 - type: precision_at_1 value: 43.519000000000005 - type: precision_at_10 value: 12.253 - type: precision_at_100 value: 1.931 - type: precision_at_1000 value: 0.242 - type: precision_at_3 value: 25.617 - type: precision_at_5 value: 19.383 - type: recall_at_1 value: 22.166 - type: recall_at_10 value: 51.6 - type: recall_at_100 value: 76.574 - type: recall_at_1000 value: 92.192 - type: recall_at_3 value: 34.477999999999994 - type: recall_at_5 value: 41.835 - task: type: Retrieval dataset: name: MTEB HotpotQA type: hotpotqa config: default split: test revision: None metrics: - type: map_at_1 value: 39.041 - type: map_at_10 value: 62.961999999999996 - type: map_at_100 value: 63.79899999999999 - type: map_at_1000 value: 63.854 - type: map_at_3 value: 59.399 - type: map_at_5 value: 61.669 - type: mrr_at_1 value: 78.082 - type: mrr_at_10 value: 84.321 - type: mrr_at_100 value: 84.49600000000001 - type: mrr_at_1000 value: 84.502 - type: mrr_at_3 value: 83.421 - type: mrr_at_5 value: 83.977 - type: ndcg_at_1 value: 78.082 - type: ndcg_at_10 value: 71.229 - type: ndcg_at_100 value: 74.10900000000001 - type: ndcg_at_1000 value: 75.169 - type: ndcg_at_3 value: 66.28699999999999 - type: ndcg_at_5 value: 69.084 - type: precision_at_1 value: 78.082 - type: precision_at_10 value: 14.993 - type: precision_at_100 value: 1.7239999999999998 - type: precision_at_1000 value: 0.186 - type: precision_at_3 value: 42.737 - type: precision_at_5 value: 27.843 - type: recall_at_1 value: 39.041 - type: recall_at_10 value: 74.96300000000001 - type: recall_at_100 value: 86.199 - type: recall_at_1000 value: 93.228 - type: recall_at_3 value: 64.105 - type: recall_at_5 value: 69.608 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 90.23160000000001 - type: ap value: 85.5674856808308 - type: f1 value: 90.18033354786317 - task: type: Retrieval dataset: name: MTEB MSMARCO type: msmarco config: default split: dev revision: None metrics: - type: map_at_1 value: 24.091 - type: map_at_10 value: 36.753 - type: map_at_100 value: 37.913000000000004 - type: map_at_1000 value: 37.958999999999996 - type: map_at_3 value: 32.818999999999996 - type: map_at_5 value: 35.171 - type: mrr_at_1 value: 24.742 - type: mrr_at_10 value: 37.285000000000004 - type: mrr_at_100 value: 38.391999999999996 - type: mrr_at_1000 value: 38.431 - type: mrr_at_3 value: 33.440999999999995 - type: mrr_at_5 value: 35.75 - type: ndcg_at_1 value: 24.742 - type: ndcg_at_10 value: 43.698 - type: ndcg_at_100 value: 49.145 - type: ndcg_at_1000 value: 50.23800000000001 - type: ndcg_at_3 value: 35.769 - type: ndcg_at_5 value: 39.961999999999996 - type: precision_at_1 value: 24.742 - type: precision_at_10 value: 6.7989999999999995 - type: precision_at_100 value: 0.95 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 15.096000000000002 - type: precision_at_5 value: 11.183 - type: recall_at_1 value: 24.091 - type: recall_at_10 value: 65.068 - type: recall_at_100 value: 89.899 - type: recall_at_1000 value: 98.16 - type: recall_at_3 value: 43.68 - type: recall_at_5 value: 53.754999999999995 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 93.66621067031465 - type: f1 value: 93.49622853272142 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (de) type: mteb/mtop_domain config: de split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 91.94702733164272 - type: f1 value: 91.17043441745282 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (es) type: mteb/mtop_domain config: es split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 92.20146764509674 - type: f1 value: 91.98359080555608 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (fr) type: mteb/mtop_domain config: fr split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 88.99780770435328 - type: f1 value: 89.19746342724068 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (hi) type: mteb/mtop_domain config: hi split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 89.78486912871998 - type: f1 value: 89.24578823628642 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (th) type: mteb/mtop_domain config: th split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 88.74502712477394 - type: f1 value: 89.00297573881542 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 77.9046967624259 - type: f1 value: 59.36787125785957 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (de) type: mteb/mtop_intent config: de split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 74.5280360664976 - type: f1 value: 57.17723440888718 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (es) type: mteb/mtop_intent config: es split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 75.44029352901934 - type: f1 value: 54.052855531072964 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (fr) type: mteb/mtop_intent config: fr split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 70.5606013153774 - type: f1 value: 52.62215934386531 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (hi) type: mteb/mtop_intent config: hi split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 73.11581211903908 - type: f1 value: 52.341291845645465 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (th) type: mteb/mtop_intent config: th split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 74.28933092224233 - type: f1 value: 57.07918745504911 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (af) type: mteb/amazon_massive_intent config: af split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 62.38063214525892 - type: f1 value: 59.46463723443009 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (am) type: mteb/amazon_massive_intent config: am split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 56.06926698049766 - type: f1 value: 52.49084283283562 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ar) type: mteb/amazon_massive_intent config: ar split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 60.74983187626093 - type: f1 value: 56.960640620165904 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (az) type: mteb/amazon_massive_intent config: az split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 64.86550100874243 - type: f1 value: 62.47370548140688 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (bn) type: mteb/amazon_massive_intent config: bn split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 63.971082716879636 - type: f1 value: 61.03812421957381 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (cy) type: mteb/amazon_massive_intent config: cy split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 54.98318762609282 - type: f1 value: 51.51207916008392 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (da) type: mteb/amazon_massive_intent config: da split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.45527908540686 - type: f1 value: 66.16631905400318 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (de) type: mteb/amazon_massive_intent config: de split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.32750504371216 - type: f1 value: 66.16755288646591 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (el) type: mteb/amazon_massive_intent config: el split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.09213180901143 - type: f1 value: 66.95654394661507 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 73.75588433086752 - type: f1 value: 71.79973779656923 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (es) type: mteb/amazon_massive_intent config: es split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.49428379287154 - type: f1 value: 68.37494379215734 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (fa) type: mteb/amazon_massive_intent config: fa split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.90921318090115 - type: f1 value: 66.79517376481645 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (fi) type: mteb/amazon_massive_intent config: fi split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.12104909213181 - type: f1 value: 67.29448842879584 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (fr) type: mteb/amazon_massive_intent config: fr split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.34095494283793 - type: f1 value: 67.01134288992947 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (he) type: mteb/amazon_massive_intent config: he split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 67.61264290517822 - type: f1 value: 64.68730512660757 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (hi) type: mteb/amazon_massive_intent config: hi split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 67.79757901815738 - type: f1 value: 65.24938539425598 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (hu) type: mteb/amazon_massive_intent config: hu split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.68728984532616 - type: f1 value: 67.0487169762553 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (hy) type: mteb/amazon_massive_intent config: hy split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 62.07464694014795 - type: f1 value: 59.183532276789286 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (id) type: mteb/amazon_massive_intent config: id split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.04707464694015 - type: f1 value: 67.66829629003848 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (is) type: mteb/amazon_massive_intent config: is split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 62.42434431741762 - type: f1 value: 59.01617226544757 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (it) type: mteb/amazon_massive_intent config: it split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.53127101546738 - type: f1 value: 68.10033760906255 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ja) type: mteb/amazon_massive_intent config: ja split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 72.50504371217215 - type: f1 value: 69.74931103158923 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (jv) type: mteb/amazon_massive_intent config: jv split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 57.91190316072628 - type: f1 value: 54.05551136648796 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ka) type: mteb/amazon_massive_intent config: ka split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 51.78211163416275 - type: f1 value: 49.874888544058535 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (km) type: mteb/amazon_massive_intent config: km split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 47.017484868863484 - type: f1 value: 44.53364263352014 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (kn) type: mteb/amazon_massive_intent config: kn split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 62.16207128446537 - type: f1 value: 59.01185692320829 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ko) type: mteb/amazon_massive_intent config: ko split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.42501681237391 - type: f1 value: 67.13169450166086 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (lv) type: mteb/amazon_massive_intent config: lv split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 67.0780094149294 - type: f1 value: 64.41720167850707 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ml) type: mteb/amazon_massive_intent config: ml split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 65.57162071284466 - type: f1 value: 62.414138683804424 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (mn) type: mteb/amazon_massive_intent config: mn split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 61.71149966375252 - type: f1 value: 58.594805125087234 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ms) type: mteb/amazon_massive_intent config: ms split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 66.03900470746471 - type: f1 value: 63.87937257883887 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (my) type: mteb/amazon_massive_intent config: my split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 60.8776059179556 - type: f1 value: 57.48587618059131 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (nb) type: mteb/amazon_massive_intent config: nb split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.87895090786819 - type: f1 value: 66.8141299430347 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (nl) type: mteb/amazon_massive_intent config: nl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.45057162071285 - type: f1 value: 67.46444039673516 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (pl) type: mteb/amazon_massive_intent config: pl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.546738399462 - type: f1 value: 68.63640876702655 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (pt) type: mteb/amazon_massive_intent config: pt split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.72965702757229 - type: f1 value: 68.54119560379115 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ro) type: mteb/amazon_massive_intent config: ro split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 68.35574983187625 - type: f1 value: 65.88844917691927 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ru) type: mteb/amazon_massive_intent config: ru split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.70477471418964 - type: f1 value: 69.19665697061978 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (sl) type: mteb/amazon_massive_intent config: sl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 67.0880968392737 - type: f1 value: 64.76962317666086 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (sq) type: mteb/amazon_massive_intent config: sq split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 65.18493611297916 - type: f1 value: 62.49984559035371 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (sv) type: mteb/amazon_massive_intent config: sv split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.75857431069265 - type: f1 value: 69.20053687623418 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (sw) type: mteb/amazon_massive_intent config: sw split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 58.500336247478145 - type: f1 value: 55.2972398687929 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ta) type: mteb/amazon_massive_intent config: ta split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 62.68997982515132 - type: f1 value: 59.36848202755348 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (te) type: mteb/amazon_massive_intent config: te split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 63.01950235373235 - type: f1 value: 60.09351954625423 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (th) type: mteb/amazon_massive_intent config: th split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 68.29186281102892 - type: f1 value: 67.57860496703447 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (tl) type: mteb/amazon_massive_intent config: tl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 64.77471418964357 - type: f1 value: 61.913983147713836 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (tr) type: mteb/amazon_massive_intent config: tr split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.87222595830532 - type: f1 value: 66.03679033708141 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (ur) type: mteb/amazon_massive_intent config: ur split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 64.04505716207127 - type: f1 value: 61.28569169817908 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (vi) type: mteb/amazon_massive_intent config: vi split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 69.38466711499663 - type: f1 value: 67.20532357036844 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (zh-CN) type: mteb/amazon_massive_intent config: zh-CN split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.12306657700067 - type: f1 value: 68.91251226588182 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (zh-TW) type: mteb/amazon_massive_intent config: zh-TW split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 66.20040349697378 - type: f1 value: 66.02657347714175 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (af) type: mteb/amazon_massive_scenario config: af split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 68.73907195696032 - type: f1 value: 66.98484521791418 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (am) type: mteb/amazon_massive_scenario config: am split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 60.58843308675185 - type: f1 value: 58.95591723092005 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ar) type: mteb/amazon_massive_scenario config: ar split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 66.22730329522528 - type: f1 value: 66.0894499712115 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (az) type: mteb/amazon_massive_scenario config: az split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 66.48285137861465 - type: f1 value: 65.21963176785157 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (bn) type: mteb/amazon_massive_scenario config: bn split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 67.74714189643578 - type: f1 value: 66.8212192745412 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (cy) type: mteb/amazon_massive_scenario config: cy split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 59.09213180901143 - type: f1 value: 56.70735546356339 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (da) type: mteb/amazon_massive_scenario config: da split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 75.05716207128448 - type: f1 value: 74.8413712365364 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (de) type: mteb/amazon_massive_scenario config: de split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.69737726967047 - type: f1 value: 74.7664341963 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (el) type: mteb/amazon_massive_scenario config: el split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.90383322125084 - type: f1 value: 73.59201554448323 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 77.51176866173503 - type: f1 value: 77.46104434577758 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (es) type: mteb/amazon_massive_scenario config: es split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.31069266980496 - type: f1 value: 74.61048660675635 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (fa) type: mteb/amazon_massive_scenario config: fa split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 72.95225285810356 - type: f1 value: 72.33160006574627 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (fi) type: mteb/amazon_massive_scenario config: fi split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.12373907195696 - type: f1 value: 73.20921012557481 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (fr) type: mteb/amazon_massive_scenario config: fr split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.86684599865501 - type: f1 value: 73.82348774610831 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (he) type: mteb/amazon_massive_scenario config: he split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.40215198386012 - type: f1 value: 71.11945183971858 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (hi) type: mteb/amazon_massive_scenario config: hi split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 72.12844653665098 - type: f1 value: 71.34450495911766 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (hu) type: mteb/amazon_massive_scenario config: hu split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.52252858103566 - type: f1 value: 73.98878711342999 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (hy) type: mteb/amazon_massive_scenario config: hy split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 64.93611297915265 - type: f1 value: 63.723200467653385 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (id) type: mteb/amazon_massive_scenario config: id split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.11903160726295 - type: f1 value: 73.82138439467096 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (is) type: mteb/amazon_massive_scenario config: is split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 67.15198386012105 - type: f1 value: 66.02172193802167 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (it) type: mteb/amazon_massive_scenario config: it split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.32414256893072 - type: f1 value: 74.30943421170574 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ja) type: mteb/amazon_massive_scenario config: ja split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 77.46805648957633 - type: f1 value: 77.62808409298209 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (jv) type: mteb/amazon_massive_scenario config: jv split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 63.318762609280434 - type: f1 value: 62.094284066075076 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ka) type: mteb/amazon_massive_scenario config: ka split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 58.34902488231338 - type: f1 value: 57.12893860987984 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (km) type: mteb/amazon_massive_scenario config: km split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 50.88433086751849 - type: f1 value: 48.2272350802058 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (kn) type: mteb/amazon_massive_scenario config: kn split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 66.4425016812374 - type: f1 value: 64.61463095996173 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ko) type: mteb/amazon_massive_scenario config: ko split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 75.04707464694015 - type: f1 value: 75.05099199098998 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (lv) type: mteb/amazon_massive_scenario config: lv split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 70.50437121721586 - type: f1 value: 69.83397721096314 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ml) type: mteb/amazon_massive_scenario config: ml split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 69.94283792871553 - type: f1 value: 68.8704663703913 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (mn) type: mteb/amazon_massive_scenario config: mn split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 64.79488903833222 - type: f1 value: 63.615424063345436 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ms) type: mteb/amazon_massive_scenario config: ms split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 69.88231338264963 - type: f1 value: 68.57892302593237 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (my) type: mteb/amazon_massive_scenario config: my split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 63.248150638870214 - type: f1 value: 61.06680605338809 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (nb) type: mteb/amazon_massive_scenario config: nb split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.84196368527236 - type: f1 value: 74.52566464968763 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (nl) type: mteb/amazon_massive_scenario config: nl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.8285137861466 - type: f1 value: 74.8853197608802 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (pl) type: mteb/amazon_massive_scenario config: pl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.13248150638869 - type: f1 value: 74.3982040999179 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (pt) type: mteb/amazon_massive_scenario config: pt split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.49024882313383 - type: f1 value: 73.82153848368573 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ro) type: mteb/amazon_massive_scenario config: ro split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.72158708809684 - type: f1 value: 71.85049433180541 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ru) type: mteb/amazon_massive_scenario config: ru split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 75.137861466039 - type: f1 value: 75.37628348188467 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sl) type: mteb/amazon_massive_scenario config: sl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.86953597848016 - type: f1 value: 71.87537624521661 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sq) type: mteb/amazon_massive_scenario config: sq split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 70.27572293207801 - type: f1 value: 68.80017302344231 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sv) type: mteb/amazon_massive_scenario config: sv split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 76.09952925353059 - type: f1 value: 76.07992707688408 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (sw) type: mteb/amazon_massive_scenario config: sw split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 63.140551445864155 - type: f1 value: 61.73855010331415 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ta) type: mteb/amazon_massive_scenario config: ta split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 66.27774041694687 - type: f1 value: 64.83664868894539 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (te) type: mteb/amazon_massive_scenario config: te split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 66.69468728984533 - type: f1 value: 64.76239666920868 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (th) type: mteb/amazon_massive_scenario config: th split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.44653665097512 - type: f1 value: 73.14646052013873 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (tl) type: mteb/amazon_massive_scenario config: tl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 67.71351714862139 - type: f1 value: 66.67212180163382 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (tr) type: mteb/amazon_massive_scenario config: tr split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.9946200403497 - type: f1 value: 73.87348793725525 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (ur) type: mteb/amazon_massive_scenario config: ur split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 68.15400134498992 - type: f1 value: 67.09433241421094 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (vi) type: mteb/amazon_massive_scenario config: vi split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.11365164761264 - type: f1 value: 73.59502539433753 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (zh-CN) type: mteb/amazon_massive_scenario config: zh-CN split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 76.82582380632145 - type: f1 value: 76.89992945316313 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (zh-TW) type: mteb/amazon_massive_scenario config: zh-TW split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.81237390719569 - type: f1 value: 72.36499770986265 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 31.480506569594695 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 29.71252128004552 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 31.421396787056548 - type: mrr value: 32.48155274872267 - task: type: Retrieval dataset: name: MTEB NFCorpus type: nfcorpus config: default split: test revision: None metrics: - type: map_at_1 value: 5.595 - type: map_at_10 value: 12.642000000000001 - type: map_at_100 value: 15.726 - type: map_at_1000 value: 17.061999999999998 - type: map_at_3 value: 9.125 - type: map_at_5 value: 10.866000000000001 - type: mrr_at_1 value: 43.344 - type: mrr_at_10 value: 52.227999999999994 - type: mrr_at_100 value: 52.898999999999994 - type: mrr_at_1000 value: 52.944 - type: mrr_at_3 value: 49.845 - type: mrr_at_5 value: 51.115 - type: ndcg_at_1 value: 41.949999999999996 - type: ndcg_at_10 value: 33.995 - type: ndcg_at_100 value: 30.869999999999997 - type: ndcg_at_1000 value: 39.487 - type: ndcg_at_3 value: 38.903999999999996 - type: ndcg_at_5 value: 37.236999999999995 - type: precision_at_1 value: 43.344 - type: precision_at_10 value: 25.480000000000004 - type: precision_at_100 value: 7.672 - type: precision_at_1000 value: 2.028 - type: precision_at_3 value: 36.636 - type: precision_at_5 value: 32.632 - type: recall_at_1 value: 5.595 - type: recall_at_10 value: 16.466 - type: recall_at_100 value: 31.226 - type: recall_at_1000 value: 62.778999999999996 - type: recall_at_3 value: 9.931 - type: recall_at_5 value: 12.884 - task: type: Retrieval dataset: name: MTEB NQ type: nq config: default split: test revision: None metrics: - type: map_at_1 value: 40.414 - type: map_at_10 value: 56.754000000000005 - type: map_at_100 value: 57.457 - type: map_at_1000 value: 57.477999999999994 - type: map_at_3 value: 52.873999999999995 - type: map_at_5 value: 55.175 - type: mrr_at_1 value: 45.278 - type: mrr_at_10 value: 59.192 - type: mrr_at_100 value: 59.650000000000006 - type: mrr_at_1000 value: 59.665 - type: mrr_at_3 value: 56.141 - type: mrr_at_5 value: 57.998000000000005 - type: ndcg_at_1 value: 45.278 - type: ndcg_at_10 value: 64.056 - type: ndcg_at_100 value: 66.89 - type: ndcg_at_1000 value: 67.364 - type: ndcg_at_3 value: 56.97 - type: ndcg_at_5 value: 60.719 - type: precision_at_1 value: 45.278 - type: precision_at_10 value: 9.994 - type: precision_at_100 value: 1.165 - type: precision_at_1000 value: 0.121 - type: precision_at_3 value: 25.512 - type: precision_at_5 value: 17.509 - type: recall_at_1 value: 40.414 - type: recall_at_10 value: 83.596 - type: recall_at_100 value: 95.72 - type: recall_at_1000 value: 99.24 - type: recall_at_3 value: 65.472 - type: recall_at_5 value: 74.039 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: quora config: default split: test revision: None metrics: - type: map_at_1 value: 70.352 - type: map_at_10 value: 84.369 - type: map_at_100 value: 85.02499999999999 - type: map_at_1000 value: 85.04 - type: map_at_3 value: 81.42399999999999 - type: map_at_5 value: 83.279 - type: mrr_at_1 value: 81.05 - type: mrr_at_10 value: 87.401 - type: mrr_at_100 value: 87.504 - type: mrr_at_1000 value: 87.505 - type: mrr_at_3 value: 86.443 - type: mrr_at_5 value: 87.10799999999999 - type: ndcg_at_1 value: 81.04 - type: ndcg_at_10 value: 88.181 - type: ndcg_at_100 value: 89.411 - type: ndcg_at_1000 value: 89.507 - type: ndcg_at_3 value: 85.28099999999999 - type: ndcg_at_5 value: 86.888 - type: precision_at_1 value: 81.04 - type: precision_at_10 value: 13.406 - type: precision_at_100 value: 1.5350000000000001 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 37.31 - type: precision_at_5 value: 24.54 - type: recall_at_1 value: 70.352 - type: recall_at_10 value: 95.358 - type: recall_at_100 value: 99.541 - type: recall_at_1000 value: 99.984 - type: recall_at_3 value: 87.111 - type: recall_at_5 value: 91.643 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 46.54068723291946 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 63.216287629895994 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: scidocs config: default split: test revision: None metrics: - type: map_at_1 value: 4.023000000000001 - type: map_at_10 value: 10.071 - type: map_at_100 value: 11.892 - type: map_at_1000 value: 12.196 - type: map_at_3 value: 7.234 - type: map_at_5 value: 8.613999999999999 - type: mrr_at_1 value: 19.900000000000002 - type: mrr_at_10 value: 30.516 - type: mrr_at_100 value: 31.656000000000002 - type: mrr_at_1000 value: 31.723000000000003 - type: mrr_at_3 value: 27.400000000000002 - type: mrr_at_5 value: 29.270000000000003 - type: ndcg_at_1 value: 19.900000000000002 - type: ndcg_at_10 value: 17.474 - type: ndcg_at_100 value: 25.020999999999997 - type: ndcg_at_1000 value: 30.728 - type: ndcg_at_3 value: 16.588 - type: ndcg_at_5 value: 14.498 - type: precision_at_1 value: 19.900000000000002 - type: precision_at_10 value: 9.139999999999999 - type: precision_at_100 value: 2.011 - type: precision_at_1000 value: 0.33899999999999997 - type: precision_at_3 value: 15.667 - type: precision_at_5 value: 12.839999999999998 - type: recall_at_1 value: 4.023000000000001 - type: recall_at_10 value: 18.497 - type: recall_at_100 value: 40.8 - type: recall_at_1000 value: 68.812 - type: recall_at_3 value: 9.508 - type: recall_at_5 value: 12.983 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 83.967008785134 - type: cos_sim_spearman value: 80.23142141101837 - type: euclidean_pearson value: 81.20166064704539 - type: euclidean_spearman value: 80.18961335654585 - type: manhattan_pearson value: 81.13925443187625 - type: manhattan_spearman value: 80.07948723044424 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 86.94262461316023 - type: cos_sim_spearman value: 80.01596278563865 - type: euclidean_pearson value: 83.80799622922581 - type: euclidean_spearman value: 79.94984954947103 - type: manhattan_pearson value: 83.68473841756281 - type: manhattan_spearman value: 79.84990707951822 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 80.57346443146068 - type: cos_sim_spearman value: 81.54689837570866 - type: euclidean_pearson value: 81.10909881516007 - type: euclidean_spearman value: 81.56746243261762 - type: manhattan_pearson value: 80.87076036186582 - type: manhattan_spearman value: 81.33074987964402 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 79.54733787179849 - type: cos_sim_spearman value: 77.72202105610411 - type: euclidean_pearson value: 78.9043595478849 - type: euclidean_spearman value: 77.93422804309435 - type: manhattan_pearson value: 78.58115121621368 - type: manhattan_spearman value: 77.62508135122033 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 88.59880017237558 - type: cos_sim_spearman value: 89.31088630824758 - type: euclidean_pearson value: 88.47069261564656 - type: euclidean_spearman value: 89.33581971465233 - type: manhattan_pearson value: 88.40774264100956 - type: manhattan_spearman value: 89.28657485627835 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 84.08055117917084 - type: cos_sim_spearman value: 85.78491813080304 - type: euclidean_pearson value: 84.99329155500392 - type: euclidean_spearman value: 85.76728064677287 - type: manhattan_pearson value: 84.87947428989587 - type: manhattan_spearman value: 85.62429454917464 - task: type: STS dataset: name: MTEB STS17 (ko-ko) type: mteb/sts17-crosslingual-sts config: ko-ko split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 82.14190939287384 - type: cos_sim_spearman value: 82.27331573306041 - type: euclidean_pearson value: 81.891896953716 - type: euclidean_spearman value: 82.37695542955998 - type: manhattan_pearson value: 81.73123869460504 - type: manhattan_spearman value: 82.19989168441421 - task: type: STS dataset: name: MTEB STS17 (ar-ar) type: mteb/sts17-crosslingual-sts config: ar-ar split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 76.84695301843362 - type: cos_sim_spearman value: 77.87790986014461 - type: euclidean_pearson value: 76.91981583106315 - type: euclidean_spearman value: 77.88154772749589 - type: manhattan_pearson value: 76.94953277451093 - type: manhattan_spearman value: 77.80499230728604 - task: type: STS dataset: name: MTEB STS17 (en-ar) type: mteb/sts17-crosslingual-sts config: en-ar split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 75.44657840482016 - type: cos_sim_spearman value: 75.05531095119674 - type: euclidean_pearson value: 75.88161755829299 - type: euclidean_spearman value: 74.73176238219332 - type: manhattan_pearson value: 75.63984765635362 - type: manhattan_spearman value: 74.86476440770737 - task: type: STS dataset: name: MTEB STS17 (en-de) type: mteb/sts17-crosslingual-sts config: en-de split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 85.64700140524133 - type: cos_sim_spearman value: 86.16014210425672 - type: euclidean_pearson value: 86.49086860843221 - type: euclidean_spearman value: 86.09729326815614 - type: manhattan_pearson value: 86.43406265125513 - type: manhattan_spearman value: 86.17740150939994 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 87.91170098764921 - type: cos_sim_spearman value: 88.12437004058931 - type: euclidean_pearson value: 88.81828254494437 - type: euclidean_spearman value: 88.14831794572122 - type: manhattan_pearson value: 88.93442183448961 - type: manhattan_spearman value: 88.15254630778304 - task: type: STS dataset: name: MTEB STS17 (en-tr) type: mteb/sts17-crosslingual-sts config: en-tr split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 72.91390577997292 - type: cos_sim_spearman value: 71.22979457536074 - type: euclidean_pearson value: 74.40314008106749 - type: euclidean_spearman value: 72.54972136083246 - type: manhattan_pearson value: 73.85687539530218 - type: manhattan_spearman value: 72.09500771742637 - task: type: STS dataset: name: MTEB STS17 (es-en) type: mteb/sts17-crosslingual-sts config: es-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 80.9301067983089 - type: cos_sim_spearman value: 80.74989828346473 - type: euclidean_pearson value: 81.36781301814257 - type: euclidean_spearman value: 80.9448819964426 - type: manhattan_pearson value: 81.0351322685609 - type: manhattan_spearman value: 80.70192121844177 - task: type: STS dataset: name: MTEB STS17 (es-es) type: mteb/sts17-crosslingual-sts config: es-es split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 87.13820465980005 - type: cos_sim_spearman value: 86.73532498758757 - type: euclidean_pearson value: 87.21329451846637 - type: euclidean_spearman value: 86.57863198601002 - type: manhattan_pearson value: 87.06973713818554 - type: manhattan_spearman value: 86.47534918791499 - task: type: STS dataset: name: MTEB STS17 (fr-en) type: mteb/sts17-crosslingual-sts config: fr-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 85.48720108904415 - type: cos_sim_spearman value: 85.62221757068387 - type: euclidean_pearson value: 86.1010129512749 - type: euclidean_spearman value: 85.86580966509942 - type: manhattan_pearson value: 86.26800938808971 - type: manhattan_spearman value: 85.88902721678429 - task: type: STS dataset: name: MTEB STS17 (it-en) type: mteb/sts17-crosslingual-sts config: it-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 83.98021347333516 - type: cos_sim_spearman value: 84.53806553803501 - type: euclidean_pearson value: 84.61483347248364 - type: euclidean_spearman value: 85.14191408011702 - type: manhattan_pearson value: 84.75297588825967 - type: manhattan_spearman value: 85.33176753669242 - task: type: STS dataset: name: MTEB STS17 (nl-en) type: mteb/sts17-crosslingual-sts config: nl-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 84.51856644893233 - type: cos_sim_spearman value: 85.27510748506413 - type: euclidean_pearson value: 85.09886861540977 - type: euclidean_spearman value: 85.62579245860887 - type: manhattan_pearson value: 84.93017860464607 - type: manhattan_spearman value: 85.5063988898453 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 62.581573200584195 - type: cos_sim_spearman value: 63.05503590247928 - type: euclidean_pearson value: 63.652564812602094 - type: euclidean_spearman value: 62.64811520876156 - type: manhattan_pearson value: 63.506842893061076 - type: manhattan_spearman value: 62.51289573046917 - task: type: STS dataset: name: MTEB STS22 (de) type: mteb/sts22-crosslingual-sts config: de split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 48.2248801729127 - type: cos_sim_spearman value: 56.5936604678561 - type: euclidean_pearson value: 43.98149464089 - type: euclidean_spearman value: 56.108561882423615 - type: manhattan_pearson value: 43.86880305903564 - type: manhattan_spearman value: 56.04671150510166 - task: type: STS dataset: name: MTEB STS22 (es) type: mteb/sts22-crosslingual-sts config: es split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 55.17564527009831 - type: cos_sim_spearman value: 64.57978560979488 - type: euclidean_pearson value: 58.8818330154583 - type: euclidean_spearman value: 64.99214839071281 - type: manhattan_pearson value: 58.72671436121381 - type: manhattan_spearman value: 65.10713416616109 - task: type: STS dataset: name: MTEB STS22 (pl) type: mteb/sts22-crosslingual-sts config: pl split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 26.772131864023297 - type: cos_sim_spearman value: 34.68200792408681 - type: euclidean_pearson value: 16.68082419005441 - type: euclidean_spearman value: 34.83099932652166 - type: manhattan_pearson value: 16.52605949659529 - type: manhattan_spearman value: 34.82075801399475 - task: type: STS dataset: name: MTEB STS22 (tr) type: mteb/sts22-crosslingual-sts config: tr split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 54.42415189043831 - type: cos_sim_spearman value: 63.54594264576758 - type: euclidean_pearson value: 57.36577498297745 - type: euclidean_spearman value: 63.111466379158074 - type: manhattan_pearson value: 57.584543715873885 - type: manhattan_spearman value: 63.22361054139183 - task: type: STS dataset: name: MTEB STS22 (ar) type: mteb/sts22-crosslingual-sts config: ar split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 47.55216762405518 - type: cos_sim_spearman value: 56.98670142896412 - type: euclidean_pearson value: 50.15318757562699 - type: euclidean_spearman value: 56.524941926541906 - type: manhattan_pearson value: 49.955618528674904 - type: manhattan_spearman value: 56.37102209240117 - task: type: STS dataset: name: MTEB STS22 (ru) type: mteb/sts22-crosslingual-sts config: ru split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 49.20540980338571 - type: cos_sim_spearman value: 59.9009453504406 - type: euclidean_pearson value: 49.557749853620535 - type: euclidean_spearman value: 59.76631621172456 - type: manhattan_pearson value: 49.62340591181147 - type: manhattan_spearman value: 59.94224880322436 - task: type: STS dataset: name: MTEB STS22 (zh) type: mteb/sts22-crosslingual-sts config: zh split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 51.508169956576985 - type: cos_sim_spearman value: 66.82461565306046 - type: euclidean_pearson value: 56.2274426480083 - type: euclidean_spearman value: 66.6775323848333 - type: manhattan_pearson value: 55.98277796300661 - type: manhattan_spearman value: 66.63669848497175 - task: type: STS dataset: name: MTEB STS22 (fr) type: mteb/sts22-crosslingual-sts config: fr split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 72.86478788045507 - type: cos_sim_spearman value: 76.7946552053193 - type: euclidean_pearson value: 75.01598530490269 - type: euclidean_spearman value: 76.83618917858281 - type: manhattan_pearson value: 74.68337628304332 - type: manhattan_spearman value: 76.57480204017773 - task: type: STS dataset: name: MTEB STS22 (de-en) type: mteb/sts22-crosslingual-sts config: de-en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 55.922619099401984 - type: cos_sim_spearman value: 56.599362477240774 - type: euclidean_pearson value: 56.68307052369783 - type: euclidean_spearman value: 54.28760436777401 - type: manhattan_pearson value: 56.67763566500681 - type: manhattan_spearman value: 53.94619541711359 - task: type: STS dataset: name: MTEB STS22 (es-en) type: mteb/sts22-crosslingual-sts config: es-en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 66.74357206710913 - type: cos_sim_spearman value: 72.5208244925311 - type: euclidean_pearson value: 67.49254562186032 - type: euclidean_spearman value: 72.02469076238683 - type: manhattan_pearson value: 67.45251772238085 - type: manhattan_spearman value: 72.05538819984538 - task: type: STS dataset: name: MTEB STS22 (it) type: mteb/sts22-crosslingual-sts config: it split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 71.25734330033191 - type: cos_sim_spearman value: 76.98349083946823 - type: euclidean_pearson value: 73.71642838667736 - type: euclidean_spearman value: 77.01715504651384 - type: manhattan_pearson value: 73.61712711868105 - type: manhattan_spearman value: 77.01392571153896 - task: type: STS dataset: name: MTEB STS22 (pl-en) type: mteb/sts22-crosslingual-sts config: pl-en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 63.18215462781212 - type: cos_sim_spearman value: 65.54373266117607 - type: euclidean_pearson value: 64.54126095439005 - type: euclidean_spearman value: 65.30410369102711 - type: manhattan_pearson value: 63.50332221148234 - type: manhattan_spearman value: 64.3455878104313 - task: type: STS dataset: name: MTEB STS22 (zh-en) type: mteb/sts22-crosslingual-sts config: zh-en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 62.30509221440029 - type: cos_sim_spearman value: 65.99582704642478 - type: euclidean_pearson value: 63.43818859884195 - type: euclidean_spearman value: 66.83172582815764 - type: manhattan_pearson value: 63.055779168508764 - type: manhattan_spearman value: 65.49585020501449 - task: type: STS dataset: name: MTEB STS22 (es-it) type: mteb/sts22-crosslingual-sts config: es-it split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 59.587830825340404 - type: cos_sim_spearman value: 68.93467614588089 - type: euclidean_pearson value: 62.3073527367404 - type: euclidean_spearman value: 69.69758171553175 - type: manhattan_pearson value: 61.9074580815789 - type: manhattan_spearman value: 69.57696375597865 - task: type: STS dataset: name: MTEB STS22 (de-fr) type: mteb/sts22-crosslingual-sts config: de-fr split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 57.143220125577066 - type: cos_sim_spearman value: 67.78857859159226 - type: euclidean_pearson value: 55.58225107923733 - type: euclidean_spearman value: 67.80662907184563 - type: manhattan_pearson value: 56.24953502726514 - type: manhattan_spearman value: 67.98262125431616 - task: type: STS dataset: name: MTEB STS22 (de-pl) type: mteb/sts22-crosslingual-sts config: de-pl split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 21.826928900322066 - type: cos_sim_spearman value: 49.578506634400405 - type: euclidean_pearson value: 27.939890138843214 - type: euclidean_spearman value: 52.71950519136242 - type: manhattan_pearson value: 26.39878683847546 - type: manhattan_spearman value: 47.54609580342499 - task: type: STS dataset: name: MTEB STS22 (fr-pl) type: mteb/sts22-crosslingual-sts config: fr-pl split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 57.27603854632001 - type: cos_sim_spearman value: 50.709255283710995 - type: euclidean_pearson value: 59.5419024445929 - type: euclidean_spearman value: 50.709255283710995 - type: manhattan_pearson value: 59.03256832438492 - type: manhattan_spearman value: 61.97797868009122 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 85.00757054859712 - type: cos_sim_spearman value: 87.29283629622222 - type: euclidean_pearson value: 86.54824171775536 - type: euclidean_spearman value: 87.24364730491402 - type: manhattan_pearson value: 86.5062156915074 - type: manhattan_spearman value: 87.15052170378574 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 82.03549357197389 - type: mrr value: 95.05437645143527 - task: type: Retrieval dataset: name: MTEB SciFact type: scifact config: default split: test revision: None metrics: - type: map_at_1 value: 57.260999999999996 - type: map_at_10 value: 66.259 - type: map_at_100 value: 66.884 - type: map_at_1000 value: 66.912 - type: map_at_3 value: 63.685 - type: map_at_5 value: 65.35499999999999 - type: mrr_at_1 value: 60.333000000000006 - type: mrr_at_10 value: 67.5 - type: mrr_at_100 value: 68.013 - type: mrr_at_1000 value: 68.038 - type: mrr_at_3 value: 65.61099999999999 - type: mrr_at_5 value: 66.861 - type: ndcg_at_1 value: 60.333000000000006 - type: ndcg_at_10 value: 70.41 - type: ndcg_at_100 value: 73.10600000000001 - type: ndcg_at_1000 value: 73.846 - type: ndcg_at_3 value: 66.133 - type: ndcg_at_5 value: 68.499 - type: precision_at_1 value: 60.333000000000006 - type: precision_at_10 value: 9.232999999999999 - type: precision_at_100 value: 1.0630000000000002 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 25.667 - type: precision_at_5 value: 17.067 - type: recall_at_1 value: 57.260999999999996 - type: recall_at_10 value: 81.94399999999999 - type: recall_at_100 value: 93.867 - type: recall_at_1000 value: 99.667 - type: recall_at_3 value: 70.339 - type: recall_at_5 value: 76.25 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.74356435643564 - type: cos_sim_ap value: 93.13411948212683 - type: cos_sim_f1 value: 86.80521991300147 - type: cos_sim_precision value: 84.00374181478017 - type: cos_sim_recall value: 89.8 - type: dot_accuracy value: 99.67920792079208 - type: dot_ap value: 89.27277565444479 - type: dot_f1 value: 83.9276990718124 - type: dot_precision value: 82.04393505253104 - type: dot_recall value: 85.9 - type: euclidean_accuracy value: 99.74257425742574 - type: euclidean_ap value: 93.17993008259062 - type: euclidean_f1 value: 86.69396110542476 - type: euclidean_precision value: 88.78406708595388 - type: euclidean_recall value: 84.7 - type: manhattan_accuracy value: 99.74257425742574 - type: manhattan_ap value: 93.14413755550099 - type: manhattan_f1 value: 86.82483594144371 - type: manhattan_precision value: 87.66564729867483 - type: manhattan_recall value: 86 - type: max_accuracy value: 99.74356435643564 - type: max_ap value: 93.17993008259062 - type: max_f1 value: 86.82483594144371 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 57.525863806168566 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 32.68850574423839 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 49.71580650644033 - type: mrr value: 50.50971903913081 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 29.152190498799484 - type: cos_sim_spearman value: 29.686180371952727 - type: dot_pearson value: 27.248664793816342 - type: dot_spearman value: 28.37748983721745 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: trec-covid config: default split: test revision: None metrics: - type: map_at_1 value: 0.20400000000000001 - type: map_at_10 value: 1.6209999999999998 - type: map_at_100 value: 9.690999999999999 - type: map_at_1000 value: 23.733 - type: map_at_3 value: 0.575 - type: map_at_5 value: 0.885 - type: mrr_at_1 value: 78 - type: mrr_at_10 value: 86.56700000000001 - type: mrr_at_100 value: 86.56700000000001 - type: mrr_at_1000 value: 86.56700000000001 - type: mrr_at_3 value: 85.667 - type: mrr_at_5 value: 86.56700000000001 - type: ndcg_at_1 value: 76 - type: ndcg_at_10 value: 71.326 - type: ndcg_at_100 value: 54.208999999999996 - type: ndcg_at_1000 value: 49.252 - type: ndcg_at_3 value: 74.235 - type: ndcg_at_5 value: 73.833 - type: precision_at_1 value: 78 - type: precision_at_10 value: 74.8 - type: precision_at_100 value: 55.50000000000001 - type: precision_at_1000 value: 21.836 - type: precision_at_3 value: 78 - type: precision_at_5 value: 78 - type: recall_at_1 value: 0.20400000000000001 - type: recall_at_10 value: 1.894 - type: recall_at_100 value: 13.245999999999999 - type: recall_at_1000 value: 46.373 - type: recall_at_3 value: 0.613 - type: recall_at_5 value: 0.991 - task: type: BitextMining dataset: name: MTEB Tatoeba (sqi-eng) type: mteb/tatoeba-bitext-mining config: sqi-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.89999999999999 - type: f1 value: 94.69999999999999 - type: precision value: 94.11666666666667 - type: recall value: 95.89999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (fry-eng) type: mteb/tatoeba-bitext-mining config: fry-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 68.20809248554913 - type: f1 value: 63.431048720066066 - type: precision value: 61.69143958161298 - type: recall value: 68.20809248554913 - task: type: BitextMining dataset: name: MTEB Tatoeba (kur-eng) type: mteb/tatoeba-bitext-mining config: kur-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 71.21951219512195 - type: f1 value: 66.82926829268293 - type: precision value: 65.1260162601626 - type: recall value: 71.21951219512195 - task: type: BitextMining dataset: name: MTEB Tatoeba (tur-eng) type: mteb/tatoeba-bitext-mining config: tur-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.2 - type: f1 value: 96.26666666666667 - type: precision value: 95.8 - type: recall value: 97.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (deu-eng) type: mteb/tatoeba-bitext-mining config: deu-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 99.3 - type: f1 value: 99.06666666666666 - type: precision value: 98.95 - type: recall value: 99.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (nld-eng) type: mteb/tatoeba-bitext-mining config: nld-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.39999999999999 - type: f1 value: 96.63333333333333 - type: precision value: 96.26666666666668 - type: recall value: 97.39999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (ron-eng) type: mteb/tatoeba-bitext-mining config: ron-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96 - type: f1 value: 94.86666666666666 - type: precision value: 94.31666666666668 - type: recall value: 96 - task: type: BitextMining dataset: name: MTEB Tatoeba (ang-eng) type: mteb/tatoeba-bitext-mining config: ang-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 47.01492537313433 - type: f1 value: 40.178867566927266 - type: precision value: 38.179295828549556 - type: recall value: 47.01492537313433 - task: type: BitextMining dataset: name: MTEB Tatoeba (ido-eng) type: mteb/tatoeba-bitext-mining config: ido-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 86.5 - type: f1 value: 83.62537480063796 - type: precision value: 82.44555555555554 - type: recall value: 86.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (jav-eng) type: mteb/tatoeba-bitext-mining config: jav-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 80.48780487804879 - type: f1 value: 75.45644599303138 - type: precision value: 73.37398373983739 - type: recall value: 80.48780487804879 - task: type: BitextMining dataset: name: MTEB Tatoeba (isl-eng) type: mteb/tatoeba-bitext-mining config: isl-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.7 - type: f1 value: 91.95666666666666 - type: precision value: 91.125 - type: recall value: 93.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (slv-eng) type: mteb/tatoeba-bitext-mining config: slv-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 91.73754556500607 - type: f1 value: 89.65168084244632 - type: precision value: 88.73025516403402 - type: recall value: 91.73754556500607 - task: type: BitextMining dataset: name: MTEB Tatoeba (cym-eng) type: mteb/tatoeba-bitext-mining config: cym-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 81.04347826086956 - type: f1 value: 76.2128364389234 - type: precision value: 74.2 - type: recall value: 81.04347826086956 - task: type: BitextMining dataset: name: MTEB Tatoeba (kaz-eng) type: mteb/tatoeba-bitext-mining config: kaz-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 83.65217391304348 - type: f1 value: 79.4376811594203 - type: precision value: 77.65797101449274 - type: recall value: 83.65217391304348 - task: type: BitextMining dataset: name: MTEB Tatoeba (est-eng) type: mteb/tatoeba-bitext-mining config: est-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 87.5 - type: f1 value: 85.02690476190476 - type: precision value: 83.96261904761904 - type: recall value: 87.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (heb-eng) type: mteb/tatoeba-bitext-mining config: heb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 89.3 - type: f1 value: 86.52333333333333 - type: precision value: 85.22833333333332 - type: recall value: 89.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (gla-eng) type: mteb/tatoeba-bitext-mining config: gla-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 65.01809408926418 - type: f1 value: 59.00594446432805 - type: precision value: 56.827215807915444 - type: recall value: 65.01809408926418 - task: type: BitextMining dataset: name: MTEB Tatoeba (mar-eng) type: mteb/tatoeba-bitext-mining config: mar-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 91.2 - type: f1 value: 88.58 - type: precision value: 87.33333333333334 - type: recall value: 91.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (lat-eng) type: mteb/tatoeba-bitext-mining config: lat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 59.199999999999996 - type: f1 value: 53.299166276284915 - type: precision value: 51.3383908045977 - type: recall value: 59.199999999999996 - task: type: BitextMining dataset: name: MTEB Tatoeba (bel-eng) type: mteb/tatoeba-bitext-mining config: bel-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.2 - type: f1 value: 91.2 - type: precision value: 90.25 - type: recall value: 93.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (pms-eng) type: mteb/tatoeba-bitext-mining config: pms-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 64.76190476190476 - type: f1 value: 59.867110667110666 - type: precision value: 58.07390192653351 - type: recall value: 64.76190476190476 - task: type: BitextMining dataset: name: MTEB Tatoeba (gle-eng) type: mteb/tatoeba-bitext-mining config: gle-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 76.2 - type: f1 value: 71.48147546897547 - type: precision value: 69.65409090909091 - type: recall value: 76.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (pes-eng) type: mteb/tatoeba-bitext-mining config: pes-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.8 - type: f1 value: 92.14 - type: precision value: 91.35833333333333 - type: recall value: 93.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (nob-eng) type: mteb/tatoeba-bitext-mining config: nob-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.89999999999999 - type: f1 value: 97.2 - type: precision value: 96.85000000000001 - type: recall value: 97.89999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (bul-eng) type: mteb/tatoeba-bitext-mining config: bul-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.6 - type: f1 value: 92.93333333333334 - type: precision value: 92.13333333333333 - type: recall value: 94.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (cbk-eng) type: mteb/tatoeba-bitext-mining config: cbk-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 74.1 - type: f1 value: 69.14817460317461 - type: precision value: 67.2515873015873 - type: recall value: 74.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (hun-eng) type: mteb/tatoeba-bitext-mining config: hun-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.19999999999999 - type: f1 value: 94.01333333333335 - type: precision value: 93.46666666666667 - type: recall value: 95.19999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (uig-eng) type: mteb/tatoeba-bitext-mining config: uig-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 76.9 - type: f1 value: 72.07523809523809 - type: precision value: 70.19777777777779 - type: recall value: 76.9 - task: type: BitextMining dataset: name: MTEB Tatoeba (rus-eng) type: mteb/tatoeba-bitext-mining config: rus-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.1 - type: f1 value: 92.31666666666666 - type: precision value: 91.43333333333332 - type: recall value: 94.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (spa-eng) type: mteb/tatoeba-bitext-mining config: spa-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.8 - type: f1 value: 97.1 - type: precision value: 96.76666666666668 - type: recall value: 97.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (hye-eng) type: mteb/tatoeba-bitext-mining config: hye-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.85714285714286 - type: f1 value: 90.92093441150045 - type: precision value: 90.00449236298293 - type: recall value: 92.85714285714286 - task: type: BitextMining dataset: name: MTEB Tatoeba (tel-eng) type: mteb/tatoeba-bitext-mining config: tel-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.16239316239316 - type: f1 value: 91.33903133903132 - type: precision value: 90.56267806267806 - type: recall value: 93.16239316239316 - task: type: BitextMining dataset: name: MTEB Tatoeba (afr-eng) type: mteb/tatoeba-bitext-mining config: afr-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.4 - type: f1 value: 90.25666666666666 - type: precision value: 89.25833333333334 - type: recall value: 92.4 - task: type: BitextMining dataset: name: MTEB Tatoeba (mon-eng) type: mteb/tatoeba-bitext-mining config: mon-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 90.22727272727272 - type: f1 value: 87.53030303030303 - type: precision value: 86.37121212121211 - type: recall value: 90.22727272727272 - task: type: BitextMining dataset: name: MTEB Tatoeba (arz-eng) type: mteb/tatoeba-bitext-mining config: arz-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 79.03563941299791 - type: f1 value: 74.7349505840072 - type: precision value: 72.9035639412998 - type: recall value: 79.03563941299791 - task: type: BitextMining dataset: name: MTEB Tatoeba (hrv-eng) type: mteb/tatoeba-bitext-mining config: hrv-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97 - type: f1 value: 96.15 - type: precision value: 95.76666666666668 - type: recall value: 97 - task: type: BitextMining dataset: name: MTEB Tatoeba (nov-eng) type: mteb/tatoeba-bitext-mining config: nov-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 76.26459143968872 - type: f1 value: 71.55642023346303 - type: precision value: 69.7544932369835 - type: recall value: 76.26459143968872 - task: type: BitextMining dataset: name: MTEB Tatoeba (gsw-eng) type: mteb/tatoeba-bitext-mining config: gsw-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 58.119658119658126 - type: f1 value: 51.65242165242165 - type: precision value: 49.41768108434775 - type: recall value: 58.119658119658126 - task: type: BitextMining dataset: name: MTEB Tatoeba (nds-eng) type: mteb/tatoeba-bitext-mining config: nds-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 74.3 - type: f1 value: 69.52055555555555 - type: precision value: 67.7574938949939 - type: recall value: 74.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (ukr-eng) type: mteb/tatoeba-bitext-mining config: ukr-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.8 - type: f1 value: 93.31666666666666 - type: precision value: 92.60000000000001 - type: recall value: 94.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (uzb-eng) type: mteb/tatoeba-bitext-mining config: uzb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 76.63551401869158 - type: f1 value: 72.35202492211837 - type: precision value: 70.60358255451713 - type: recall value: 76.63551401869158 - task: type: BitextMining dataset: name: MTEB Tatoeba (lit-eng) type: mteb/tatoeba-bitext-mining config: lit-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 90.4 - type: f1 value: 88.4811111111111 - type: precision value: 87.7452380952381 - type: recall value: 90.4 - task: type: BitextMining dataset: name: MTEB Tatoeba (ina-eng) type: mteb/tatoeba-bitext-mining config: ina-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95 - type: f1 value: 93.60666666666667 - type: precision value: 92.975 - type: recall value: 95 - task: type: BitextMining dataset: name: MTEB Tatoeba (lfn-eng) type: mteb/tatoeba-bitext-mining config: lfn-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 67.2 - type: f1 value: 63.01595782872099 - type: precision value: 61.596587301587306 - type: recall value: 67.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (zsm-eng) type: mteb/tatoeba-bitext-mining config: zsm-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.7 - type: f1 value: 94.52999999999999 - type: precision value: 94 - type: recall value: 95.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (ita-eng) type: mteb/tatoeba-bitext-mining config: ita-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.6 - type: f1 value: 93.28999999999999 - type: precision value: 92.675 - type: recall value: 94.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (cmn-eng) type: mteb/tatoeba-bitext-mining config: cmn-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.39999999999999 - type: f1 value: 95.28333333333333 - type: precision value: 94.75 - type: recall value: 96.39999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (lvs-eng) type: mteb/tatoeba-bitext-mining config: lvs-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 91.9 - type: f1 value: 89.83 - type: precision value: 88.92 - type: recall value: 91.9 - task: type: BitextMining dataset: name: MTEB Tatoeba (glg-eng) type: mteb/tatoeba-bitext-mining config: glg-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.69999999999999 - type: f1 value: 93.34222222222223 - type: precision value: 92.75416666666668 - type: recall value: 94.69999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (ceb-eng) type: mteb/tatoeba-bitext-mining config: ceb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 60.333333333333336 - type: f1 value: 55.31203703703703 - type: precision value: 53.39971108326371 - type: recall value: 60.333333333333336 - task: type: BitextMining dataset: name: MTEB Tatoeba (bre-eng) type: mteb/tatoeba-bitext-mining config: bre-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 12.9 - type: f1 value: 11.099861903031458 - type: precision value: 10.589187932631877 - type: recall value: 12.9 - task: type: BitextMining dataset: name: MTEB Tatoeba (ben-eng) type: mteb/tatoeba-bitext-mining config: ben-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 86.7 - type: f1 value: 83.0152380952381 - type: precision value: 81.37833333333333 - type: recall value: 86.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (swg-eng) type: mteb/tatoeba-bitext-mining config: swg-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 63.39285714285714 - type: f1 value: 56.832482993197274 - type: precision value: 54.56845238095237 - type: recall value: 63.39285714285714 - task: type: BitextMining dataset: name: MTEB Tatoeba (arq-eng) type: mteb/tatoeba-bitext-mining config: arq-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 48.73765093304062 - type: f1 value: 41.555736920720456 - type: precision value: 39.06874531737319 - type: recall value: 48.73765093304062 - task: type: BitextMining dataset: name: MTEB Tatoeba (kab-eng) type: mteb/tatoeba-bitext-mining config: kab-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 41.099999999999994 - type: f1 value: 36.540165945165946 - type: precision value: 35.05175685425686 - type: recall value: 41.099999999999994 - task: type: BitextMining dataset: name: MTEB Tatoeba (fra-eng) type: mteb/tatoeba-bitext-mining config: fra-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.89999999999999 - type: f1 value: 93.42333333333333 - type: precision value: 92.75833333333333 - type: recall value: 94.89999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (por-eng) type: mteb/tatoeba-bitext-mining config: por-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.89999999999999 - type: f1 value: 93.63333333333334 - type: precision value: 93.01666666666665 - type: recall value: 94.89999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (tat-eng) type: mteb/tatoeba-bitext-mining config: tat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 77.9 - type: f1 value: 73.64833333333334 - type: precision value: 71.90282106782105 - type: recall value: 77.9 - task: type: BitextMining dataset: name: MTEB Tatoeba (oci-eng) type: mteb/tatoeba-bitext-mining config: oci-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 59.4 - type: f1 value: 54.90521367521367 - type: precision value: 53.432840025471606 - type: recall value: 59.4 - task: type: BitextMining dataset: name: MTEB Tatoeba (pol-eng) type: mteb/tatoeba-bitext-mining config: pol-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.39999999999999 - type: f1 value: 96.6 - type: precision value: 96.2 - type: recall value: 97.39999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (war-eng) type: mteb/tatoeba-bitext-mining config: war-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 67.2 - type: f1 value: 62.25926129426129 - type: precision value: 60.408376623376626 - type: recall value: 67.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (aze-eng) type: mteb/tatoeba-bitext-mining config: aze-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 90.2 - type: f1 value: 87.60666666666667 - type: precision value: 86.45277777777778 - type: recall value: 90.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (vie-eng) type: mteb/tatoeba-bitext-mining config: vie-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.7 - type: f1 value: 97 - type: precision value: 96.65 - type: recall value: 97.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (nno-eng) type: mteb/tatoeba-bitext-mining config: nno-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.2 - type: f1 value: 91.39746031746031 - type: precision value: 90.6125 - type: recall value: 93.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (cha-eng) type: mteb/tatoeba-bitext-mining config: cha-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 32.11678832116788 - type: f1 value: 27.210415386260234 - type: precision value: 26.20408990846947 - type: recall value: 32.11678832116788 - task: type: BitextMining dataset: name: MTEB Tatoeba (mhr-eng) type: mteb/tatoeba-bitext-mining config: mhr-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 8.5 - type: f1 value: 6.787319277832475 - type: precision value: 6.3452094433344435 - type: recall value: 8.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (dan-eng) type: mteb/tatoeba-bitext-mining config: dan-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.1 - type: f1 value: 95.08 - type: precision value: 94.61666666666667 - type: recall value: 96.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (ell-eng) type: mteb/tatoeba-bitext-mining config: ell-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.3 - type: f1 value: 93.88333333333333 - type: precision value: 93.18333333333332 - type: recall value: 95.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (amh-eng) type: mteb/tatoeba-bitext-mining config: amh-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 85.11904761904762 - type: f1 value: 80.69444444444444 - type: precision value: 78.72023809523809 - type: recall value: 85.11904761904762 - task: type: BitextMining dataset: name: MTEB Tatoeba (pam-eng) type: mteb/tatoeba-bitext-mining config: pam-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 11.1 - type: f1 value: 9.276381801735853 - type: precision value: 8.798174603174601 - type: recall value: 11.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (hsb-eng) type: mteb/tatoeba-bitext-mining config: hsb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 63.56107660455487 - type: f1 value: 58.70433569191332 - type: precision value: 56.896926581464015 - type: recall value: 63.56107660455487 - task: type: BitextMining dataset: name: MTEB Tatoeba (srp-eng) type: mteb/tatoeba-bitext-mining config: srp-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.69999999999999 - type: f1 value: 93.10000000000001 - type: precision value: 92.35 - type: recall value: 94.69999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (epo-eng) type: mteb/tatoeba-bitext-mining config: epo-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.8 - type: f1 value: 96.01222222222222 - type: precision value: 95.67083333333332 - type: recall value: 96.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (kzj-eng) type: mteb/tatoeba-bitext-mining config: kzj-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 9.2 - type: f1 value: 7.911555250305249 - type: precision value: 7.631246556216846 - type: recall value: 9.2 - task: type: BitextMining dataset: name: MTEB Tatoeba (awa-eng) type: mteb/tatoeba-bitext-mining config: awa-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 77.48917748917748 - type: f1 value: 72.27375798804371 - type: precision value: 70.14430014430013 - type: recall value: 77.48917748917748 - task: type: BitextMining dataset: name: MTEB Tatoeba (fao-eng) type: mteb/tatoeba-bitext-mining config: fao-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 77.09923664122137 - type: f1 value: 72.61541257724463 - type: precision value: 70.8998380754106 - type: recall value: 77.09923664122137 - task: type: BitextMining dataset: name: MTEB Tatoeba (mal-eng) type: mteb/tatoeba-bitext-mining config: mal-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 98.2532751091703 - type: f1 value: 97.69529354682193 - type: precision value: 97.42843279961184 - type: recall value: 98.2532751091703 - task: type: BitextMining dataset: name: MTEB Tatoeba (ile-eng) type: mteb/tatoeba-bitext-mining config: ile-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 82.8 - type: f1 value: 79.14672619047619 - type: precision value: 77.59489247311828 - type: recall value: 82.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (bos-eng) type: mteb/tatoeba-bitext-mining config: bos-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.35028248587571 - type: f1 value: 92.86252354048965 - type: precision value: 92.2080979284369 - type: recall value: 94.35028248587571 - task: type: BitextMining dataset: name: MTEB Tatoeba (cor-eng) type: mteb/tatoeba-bitext-mining config: cor-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 8.5 - type: f1 value: 6.282429263935621 - type: precision value: 5.783274240739785 - type: recall value: 8.5 - task: type: BitextMining dataset: name: MTEB Tatoeba (cat-eng) type: mteb/tatoeba-bitext-mining config: cat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.7 - type: f1 value: 91.025 - type: precision value: 90.30428571428571 - type: recall value: 92.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (eus-eng) type: mteb/tatoeba-bitext-mining config: eus-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 81 - type: f1 value: 77.8232380952381 - type: precision value: 76.60194444444444 - type: recall value: 81 - task: type: BitextMining dataset: name: MTEB Tatoeba (yue-eng) type: mteb/tatoeba-bitext-mining config: yue-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 91 - type: f1 value: 88.70857142857142 - type: precision value: 87.7 - type: recall value: 91 - task: type: BitextMining dataset: name: MTEB Tatoeba (swe-eng) type: mteb/tatoeba-bitext-mining config: swe-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.39999999999999 - type: f1 value: 95.3 - type: precision value: 94.76666666666667 - type: recall value: 96.39999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (dtp-eng) type: mteb/tatoeba-bitext-mining config: dtp-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 8.1 - type: f1 value: 7.001008218834307 - type: precision value: 6.708329562594269 - type: recall value: 8.1 - task: type: BitextMining dataset: name: MTEB Tatoeba (kat-eng) type: mteb/tatoeba-bitext-mining config: kat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 87.1313672922252 - type: f1 value: 84.09070598748882 - type: precision value: 82.79171454104429 - type: recall value: 87.1313672922252 - task: type: BitextMining dataset: name: MTEB Tatoeba (jpn-eng) type: mteb/tatoeba-bitext-mining config: jpn-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.39999999999999 - type: f1 value: 95.28333333333333 - type: precision value: 94.73333333333332 - type: recall value: 96.39999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (csb-eng) type: mteb/tatoeba-bitext-mining config: csb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 42.29249011857708 - type: f1 value: 36.981018542283365 - type: precision value: 35.415877813576024 - type: recall value: 42.29249011857708 - task: type: BitextMining dataset: name: MTEB Tatoeba (xho-eng) type: mteb/tatoeba-bitext-mining config: xho-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 83.80281690140845 - type: f1 value: 80.86854460093896 - type: precision value: 79.60093896713614 - type: recall value: 83.80281690140845 - task: type: BitextMining dataset: name: MTEB Tatoeba (orv-eng) type: mteb/tatoeba-bitext-mining config: orv-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 45.26946107784431 - type: f1 value: 39.80235464678088 - type: precision value: 38.14342660001342 - type: recall value: 45.26946107784431 - task: type: BitextMining dataset: name: MTEB Tatoeba (ind-eng) type: mteb/tatoeba-bitext-mining config: ind-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.3 - type: f1 value: 92.9 - type: precision value: 92.26666666666668 - type: recall value: 94.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (tuk-eng) type: mteb/tatoeba-bitext-mining config: tuk-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 37.93103448275862 - type: f1 value: 33.15192743764172 - type: precision value: 31.57456528146183 - type: recall value: 37.93103448275862 - task: type: BitextMining dataset: name: MTEB Tatoeba (max-eng) type: mteb/tatoeba-bitext-mining config: max-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 69.01408450704226 - type: f1 value: 63.41549295774648 - type: precision value: 61.342778895595806 - type: recall value: 69.01408450704226 - task: type: BitextMining dataset: name: MTEB Tatoeba (swh-eng) type: mteb/tatoeba-bitext-mining config: swh-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 76.66666666666667 - type: f1 value: 71.60705960705961 - type: precision value: 69.60683760683762 - type: recall value: 76.66666666666667 - task: type: BitextMining dataset: name: MTEB Tatoeba (hin-eng) type: mteb/tatoeba-bitext-mining config: hin-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.8 - type: f1 value: 94.48333333333333 - type: precision value: 93.83333333333333 - type: recall value: 95.8 - task: type: BitextMining dataset: name: MTEB Tatoeba (dsb-eng) type: mteb/tatoeba-bitext-mining config: dsb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 52.81837160751566 - type: f1 value: 48.435977731384824 - type: precision value: 47.11291973845539 - type: recall value: 52.81837160751566 - task: type: BitextMining dataset: name: MTEB Tatoeba (ber-eng) type: mteb/tatoeba-bitext-mining config: ber-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 44.9 - type: f1 value: 38.88962621607783 - type: precision value: 36.95936507936508 - type: recall value: 44.9 - task: type: BitextMining dataset: name: MTEB Tatoeba (tam-eng) type: mteb/tatoeba-bitext-mining config: tam-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 90.55374592833876 - type: f1 value: 88.22553125484721 - type: precision value: 87.26927252985884 - type: recall value: 90.55374592833876 - task: type: BitextMining dataset: name: MTEB Tatoeba (slk-eng) type: mteb/tatoeba-bitext-mining config: slk-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.6 - type: f1 value: 93.13333333333333 - type: precision value: 92.45333333333333 - type: recall value: 94.6 - task: type: BitextMining dataset: name: MTEB Tatoeba (tgl-eng) type: mteb/tatoeba-bitext-mining config: tgl-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.7 - type: f1 value: 91.99666666666667 - type: precision value: 91.26666666666668 - type: recall value: 93.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (ast-eng) type: mteb/tatoeba-bitext-mining config: ast-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 85.03937007874016 - type: f1 value: 81.75853018372703 - type: precision value: 80.34120734908137 - type: recall value: 85.03937007874016 - task: type: BitextMining dataset: name: MTEB Tatoeba (mkd-eng) type: mteb/tatoeba-bitext-mining config: mkd-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 88.3 - type: f1 value: 85.5 - type: precision value: 84.25833333333334 - type: recall value: 88.3 - task: type: BitextMining dataset: name: MTEB Tatoeba (khm-eng) type: mteb/tatoeba-bitext-mining config: khm-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 65.51246537396122 - type: f1 value: 60.02297410192148 - type: precision value: 58.133467727289236 - type: recall value: 65.51246537396122 - task: type: BitextMining dataset: name: MTEB Tatoeba (ces-eng) type: mteb/tatoeba-bitext-mining config: ces-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96 - type: f1 value: 94.89 - type: precision value: 94.39166666666667 - type: recall value: 96 - task: type: BitextMining dataset: name: MTEB Tatoeba (tzl-eng) type: mteb/tatoeba-bitext-mining config: tzl-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 57.692307692307686 - type: f1 value: 53.162393162393165 - type: precision value: 51.70673076923077 - type: recall value: 57.692307692307686 - task: type: BitextMining dataset: name: MTEB Tatoeba (urd-eng) type: mteb/tatoeba-bitext-mining config: urd-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 91.60000000000001 - type: f1 value: 89.21190476190475 - type: precision value: 88.08666666666667 - type: recall value: 91.60000000000001 - task: type: BitextMining dataset: name: MTEB Tatoeba (ara-eng) type: mteb/tatoeba-bitext-mining config: ara-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 88 - type: f1 value: 85.47 - type: precision value: 84.43266233766234 - type: recall value: 88 - task: type: BitextMining dataset: name: MTEB Tatoeba (kor-eng) type: mteb/tatoeba-bitext-mining config: kor-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.7 - type: f1 value: 90.64999999999999 - type: precision value: 89.68333333333332 - type: recall value: 92.7 - task: type: BitextMining dataset: name: MTEB Tatoeba (yid-eng) type: mteb/tatoeba-bitext-mining config: yid-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 80.30660377358491 - type: f1 value: 76.33044137466307 - type: precision value: 74.78970125786164 - type: recall value: 80.30660377358491 - task: type: BitextMining dataset: name: MTEB Tatoeba (fin-eng) type: mteb/tatoeba-bitext-mining config: fin-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.39999999999999 - type: f1 value: 95.44 - type: precision value: 94.99166666666666 - type: recall value: 96.39999999999999 - task: type: BitextMining dataset: name: MTEB Tatoeba (tha-eng) type: mteb/tatoeba-bitext-mining config: tha-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.53284671532847 - type: f1 value: 95.37712895377129 - type: precision value: 94.7992700729927 - type: recall value: 96.53284671532847 - task: type: BitextMining dataset: name: MTEB Tatoeba (wuu-eng) type: mteb/tatoeba-bitext-mining config: wuu-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 89 - type: f1 value: 86.23190476190476 - type: precision value: 85.035 - type: recall value: 89 - task: type: Retrieval dataset: name: MTEB Touche2020 type: webis-touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 2.585 - type: map_at_10 value: 9.012 - type: map_at_100 value: 14.027000000000001 - type: map_at_1000 value: 15.565000000000001 - type: map_at_3 value: 5.032 - type: map_at_5 value: 6.657 - type: mrr_at_1 value: 28.571 - type: mrr_at_10 value: 45.377 - type: mrr_at_100 value: 46.119 - type: mrr_at_1000 value: 46.127 - type: mrr_at_3 value: 41.156 - type: mrr_at_5 value: 42.585 - type: ndcg_at_1 value: 27.551 - type: ndcg_at_10 value: 23.395 - type: ndcg_at_100 value: 33.342 - type: ndcg_at_1000 value: 45.523 - type: ndcg_at_3 value: 25.158 - type: ndcg_at_5 value: 23.427 - type: precision_at_1 value: 28.571 - type: precision_at_10 value: 21.429000000000002 - type: precision_at_100 value: 6.714 - type: precision_at_1000 value: 1.473 - type: precision_at_3 value: 27.211000000000002 - type: precision_at_5 value: 24.490000000000002 - type: recall_at_1 value: 2.585 - type: recall_at_10 value: 15.418999999999999 - type: recall_at_100 value: 42.485 - type: recall_at_1000 value: 79.536 - type: recall_at_3 value: 6.239999999999999 - type: recall_at_5 value: 8.996 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 71.3234 - type: ap value: 14.361688653847423 - type: f1 value: 54.819068624319044 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 61.97792869269949 - type: f1 value: 62.28965628513728 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 38.90540145385218 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 86.53513739047506 - type: cos_sim_ap value: 75.27741586677557 - type: cos_sim_f1 value: 69.18792902473774 - type: cos_sim_precision value: 67.94708725515136 - type: cos_sim_recall value: 70.47493403693932 - type: dot_accuracy value: 84.7052512368123 - type: dot_ap value: 69.36075482849378 - type: dot_f1 value: 64.44688376631296 - type: dot_precision value: 59.92288500793831 - type: dot_recall value: 69.70976253298153 - type: euclidean_accuracy value: 86.60666388508076 - type: euclidean_ap value: 75.47512772621097 - type: euclidean_f1 value: 69.413872536473 - type: euclidean_precision value: 67.39562624254472 - type: euclidean_recall value: 71.55672823218997 - type: manhattan_accuracy value: 86.52917684925792 - type: manhattan_ap value: 75.34000110496703 - type: manhattan_f1 value: 69.28489190226429 - type: manhattan_precision value: 67.24608889992551 - type: manhattan_recall value: 71.45118733509234 - type: max_accuracy value: 86.60666388508076 - type: max_ap value: 75.47512772621097 - type: max_f1 value: 69.413872536473 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 89.01695967710637 - type: cos_sim_ap value: 85.8298270742901 - type: cos_sim_f1 value: 78.46988128389272 - type: cos_sim_precision value: 74.86017897091722 - type: cos_sim_recall value: 82.44533415460425 - type: dot_accuracy value: 88.19420188613343 - type: dot_ap value: 83.82679165901324 - type: dot_f1 value: 76.55833777304208 - type: dot_precision value: 75.6884875846501 - type: dot_recall value: 77.44841392054204 - type: euclidean_accuracy value: 89.03054294252338 - type: euclidean_ap value: 85.89089555185325 - type: euclidean_f1 value: 78.62997658079624 - type: euclidean_precision value: 74.92329149232914 - type: euclidean_recall value: 82.72251308900523 - type: manhattan_accuracy value: 89.0266620095471 - type: manhattan_ap value: 85.86458997929147 - type: manhattan_f1 value: 78.50685331000291 - type: manhattan_precision value: 74.5499861534201 - type: manhattan_recall value: 82.90729904527257 - type: max_accuracy value: 89.03054294252338 - type: max_ap value: 85.89089555185325 - type: max_f1 value: 78.62997658079624 --- ***See Disclaimer below*** ---- # A Teradata Vantage compatible Embeddings Model # intfloat/multilingual-e5-large ## Overview of this Model An Embedding Model which maps text (sentence/ paragraphs) into a vector. The [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) model well known for its effectiveness in capturing semantic meanings in text data. It's a state-of-the-art model trained on a large corpus, capable of generating high-quality text embeddings. - 559.89M params (Sizes in ONNX format - "int8": 535.01MB, "uint8": 535.01MB) - 514 maximum input tokens - 1024 dimensions of output vector - Licence: mit. The released models can be used for commercial purposes free of charge. - Reference to Original Model: https://huggingface.co/intfloat/multilingual-e5-large ## Quickstart: Deploying this Model in Teradata Vantage We have pre-converted the model into the ONNX format compatible with BYOM 6.0, eliminating the need for manual conversion. **Note:** Ensure you have access to a Teradata Database with BYOM 6.0 installed. To get started, clone the pre-converted model directly from the Teradata HuggingFace repository. ```python import teradataml as tdml import getpass from huggingface_hub import hf_hub_download model_name = "multilingual-e5-large" number_dimensions_output = 1024 model_file_name = "model_int8.onnx" # Step 1: Download Model from Teradata HuggingFace Page hf_hub_download(repo_id=f"Teradata/{model_name}", filename=f"onnx/{model_file_name}", local_dir="./") hf_hub_download(repo_id=f"Teradata/{model_name}", filename=f"tokenizer.json", local_dir="./") # Step 2: Create Connection to Vantage tdml.create_context(host = input('enter your hostname'), username=input('enter your username'), password = getpass.getpass("enter your password")) # Step 3: Load Models into Vantage # a) Embedding model tdml.save_byom(model_id = model_name, # must be unique in the models table model_file = f"onnx/{model_file_name}", table_name = 'embeddings_models' ) # b) Tokenizer tdml.save_byom(model_id = model_name, # must be unique in the models table model_file = 'tokenizer.json', table_name = 'embeddings_tokenizers') # Step 4: Test ONNXEmbeddings Function # Note that ONNXEmbeddings expects the 'payload' column to be 'txt'. # If it has got a different name, just rename it in a subquery/CTE. input_table = "emails.emails" embeddings_query = f""" SELECT * from mldb.ONNXEmbeddings( on {input_table} as InputTable on (select * from embeddings_models where model_id = '{model_name}') as ModelTable DIMENSION on (select model as tokenizer from embeddings_tokenizers where model_id = '{model_name}') as TokenizerTable DIMENSION using Accumulate('id', 'txt') ModelOutputTensor('sentence_embedding') EnableMemoryCheck('false') OutputFormat('FLOAT32({number_dimensions_output})') OverwriteCachedModel('true') ) a """ DF_embeddings = tdml.DataFrame.from_query(embeddings_query) DF_embeddings ``` ## What Can I Do with the Embeddings? Teradata Vantage includes pre-built in-database functions to process embeddings further. Explore the following examples: - **Semantic Clustering with TD_KMeans:** [Semantic Clustering Python Notebook](https://github.com/Teradata/jupyter-demos/blob/main/UseCases/Language_Models_InVantage/Semantic_Clustering_Python.ipynb) - **Semantic Distance with TD_VectorDistance:** [Semantic Similarity Python Notebook](https://github.com/Teradata/jupyter-demos/blob/main/UseCases/Language_Models_InVantage/Semantic_Similarity_Python.ipynb) - **RAG-Based Application with TD_VectorDistance:** [RAG and Bedrock Query PDF Notebook](https://github.com/Teradata/jupyter-demos/blob/main/UseCases/Language_Models_InVantage/RAG_and_Bedrock_QueryPDF.ipynb) ## Deep Dive into Model Conversion to ONNX **The steps below outline how we converted the open-source Hugging Face model into an ONNX file compatible with the in-database ONNXEmbeddings function.** You do not need to perform these steps—they are provided solely for documentation and transparency. However, they may be helpful if you wish to convert another model to the required format. ### Part 1. Importing and Converting Model using optimum We start by importing the pre-trained [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) model from Hugging Face. To enhance performance and ensure compatibility with various execution environments, we'll use the [Optimum](https://github.com/huggingface/optimum) utility to convert the model into the ONNX (Open Neural Network Exchange) format. After conversion to ONNX, we are fixing the opset in the ONNX file for compatibility with ONNX runtime used in Teradata Vantage We are generating ONNX files for multiple different precisions: int8, uint8 You can find the detailed conversion steps in the file [convert.py](./convert.py) ### Part 2. Running the model in Python with onnxruntime & compare results Once the fixes are applied, we proceed to test the correctness of the ONNX model by calculating cosine similarity between two texts using native SentenceTransformers and ONNX runtime, comparing the results. If the results are identical, it confirms that the ONNX model gives the same result as the native models, validating its correctness and suitability for further use in the database. ```python import onnxruntime as rt from sentence_transformers.util import cos_sim from sentence_transformers import SentenceTransformer import transformers sentences_1 = 'How is the weather today?' sentences_2 = 'What is the current weather like today?' # Calculate ONNX result tokenizer = transformers.AutoTokenizer.from_pretrained("intfloat/multilingual-e5-large") predef_sess = rt.InferenceSession("onnx/model_int8.onnx") enc1 = tokenizer(sentences_1) embeddings_1_onnx = predef_sess.run(None, {"input_ids": [enc1.input_ids], "attention_mask": [enc1.attention_mask]}) enc2 = tokenizer(sentences_2) embeddings_2_onnx = predef_sess.run(None, {"input_ids": [enc2.input_ids], "attention_mask": [enc2.attention_mask]}) # Calculate embeddings with SentenceTransformer model = SentenceTransformer(model_id, trust_remote_code=True) embeddings_1_sentence_transformer = model.encode(sentences_1, normalize_embeddings=True, trust_remote_code=True) embeddings_2_sentence_transformer = model.encode(sentences_2, normalize_embeddings=True, trust_remote_code=True) # Compare results print("Cosine similiarity for embeddings calculated with ONNX:" + str(cos_sim(embeddings_1_onnx[1][0], embeddings_2_onnx[1][0]))) print("Cosine similiarity for embeddings calculated with SentenceTransformer:" + str(cos_sim(embeddings_1_sentence_transformer, embeddings_2_sentence_transformer))) ``` You can find the detailed ONNX vs. SentenceTransformer result comparison steps in the file [test_local.py](./test_local.py) ----- DISCLAIMER: The content herein (“Content”) is provided “AS IS” and is not covered by any Teradata Operations, Inc. and its affiliates (“Teradata”) agreements. Its listing here does not constitute certification or endorsement by Teradata. To the extent any of the Content contains or is related to any artificial intelligence (“AI”) or other language learning models (“Models”) that interoperate with the products and services of Teradata, by accessing, bringing, deploying or using such Models, you acknowledge and agree that you are solely responsible for ensuring compliance with all applicable laws, regulations, and restrictions governing the use, deployment, and distribution of AI technologies. This includes, but is not limited to, AI Diffusion Rules, European Union AI Act, AI-related laws and regulations, privacy laws, export controls, and financial or sector-specific regulations. While Teradata may provide support, guidance, or assistance in the deployment or implementation of Models to interoperate with Teradata’s products and/or services, you remain fully responsible for ensuring that your Models, data, and applications comply with all relevant legal and regulatory obligations. Our assistance does not constitute legal or regulatory approval, and Teradata disclaims any liability arising from non-compliance with applicable laws. You must determine the suitability of the Models for any purpose. Given the probabilistic nature of machine learning and modeling, the use of the Models may in some situations result in incorrect output that does not accurately reflect the action generated. You should evaluate the accuracy of any output as appropriate for your use case, including by using human review of the output.
[ "BIOSSES", "SCIFACT" ]
thinhrick/distilbert-finetuned-sciq
thinhrick
question-answering
[ "transformers", "tensorboard", "safetensors", "distilbert", "question-answering", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2025-02-20T21:05:48Z
2025-02-20T21:50:51+00:00
14
0
--- base_model: distilbert-base-uncased library_name: transformers license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilbert-finetuned-sciq results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-finetuned-sciq This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.49.0 - Pytorch 2.5.1+cu124 - Datasets 3.3.2 - Tokenizers 0.21.0
[ "SCIQ" ]
apriadiazriel/bert-cased-jnlpba
apriadiazriel
token-classification
[ "transformers", "tf", "bert", "token-classification", "generated_from_keras_callback", "dataset:jnlpba/jnlpba", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2025-02-27T07:18:45Z
2025-02-27T10:35:33+00:00
14
0
--- base_model: bert-base-cased datasets: - jnlpba/jnlpba library_name: transformers license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: apriadiazriel/bert-cased-jnlpba results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # apriadiazriel/bert-cased-jnlpba This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the [JNLPBA](https://huggingface.co/datasets/jnlpba/jnlpba) dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0851 - Validation Loss: 0.2221 - Precision: 0.6744 - Recall: 0.7808 - F1: 0.7237 - Accuracy: 0.9371 - Epoch: 5 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 5795, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Precision | Recall | F1 | Accuracy | Epoch | |:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:| | 0.2424 | 0.1998 | 0.6507 | 0.7606 | 0.7014 | 0.9322 | 0 | | 0.1426 | 0.1975 | 0.6613 | 0.7832 | 0.7171 | 0.9364 | 1 | | 0.1166 | 0.2051 | 0.6527 | 0.7847 | 0.7127 | 0.9353 | 2 | | 0.0984 | 0.2108 | 0.6750 | 0.7811 | 0.7242 | 0.9378 | 3 | | 0.0851 | 0.2221 | 0.6744 | 0.7808 | 0.7237 | 0.9371 | 4 | ### Framework versions - Transformers 4.48.3 - TensorFlow 2.18.0 - Datasets 3.3.2 - Tokenizers 0.21.0
[ "JNLPBA" ]
Bharatdeep-H/stella_finetuned_en_dataset_stella_400_20_translated_query_v3_w_v_MAX_50
Bharatdeep-H
sentence-similarity
[ "sentence-transformers", "safetensors", "new", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:400345", "loss:TripletLoss", "custom_code", "arxiv:1908.10084", "arxiv:1703.07737", "base_model:NovaSearch/stella_en_400M_v5", "base_model:finetune:NovaSearch/stella_en_400M_v5", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2025-03-01T01:25:25Z
2025-03-01T01:28:19+00:00
14
0
--- base_model: NovaSearch/stella_en_400M_v5 library_name: sentence-transformers metrics: - cosine_accuracy pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:400345 - loss:TripletLoss widget: - source_sentence: "KAZAKHSTAN IN THE HANDS OF THE PEOPLE\n\nThe president and prime\ \ minister have fled the country. The government has collapsed. Soldiers and military\ \ personnel are also trying to escape the country or risk arrest by protesters.\ \ Demonstrators have seized control of airport roads and are also detaining police\ \ officers who do not join them. Politicians are being arrested. Healthcare workers\ \ at vaccination centers (nurses and doctors) are also being detained. Protesters\ \ are armed and have seized arsenals. Politicians' homes are on fire.\n\nThe unrest\ \ began when people were no longer able to withdraw money from banks without a\ \ vaccination passport and QR code.\n\n**INTERNATIONAL MEDIA IS LYING, CLAIMING\ \ PEOPLE WERE OUTRAGED BY FUEL PRICE HIKE**\t verdict: ['False information.']" sentences: - The requirement of the vaccination certificate originated the protests of Kazakhstan The protests in Kazakhstan were due to the increase in the price of gas, not because of the health pass - Environmentalists changed the term "global warming" to "climate change" to justify intense cold in Antarctica and Brazil Cold waves in Antarctica and Brazil do not prove that global warming does not exist - Chaos and demonstrations hit Turkey, and Erdogan arrests his interior minister This video depicts clashes between police and counterfeit sellers in South Africa - source_sentence: "Corona Foundation. Committee \"1 Test Reports Lying instead of\ \ transparent File Edit Image Options View Help EXX G Morality or its absence\ \ Vw p me.jpg=frainview 100.0 ,,A caregiver from a nursing home in Überlingen\ \ relates: \"There are 20 seniors in the home living in quarantine, of whom 18\ \ are infected. All 20 are tested daily. Each time, there are 18 positive cases\ \ recorded and reported... This amounts to 140 tests in a week with 126 positive\ \ reports. In reality, there are only 18.... When asked why this is done, the\ \ response is simply: 'On orders.'\"\n\n**Without Words**\t verdict: ['False information']" sentences: - Corona infected people are counted several times to increase the number of cases Corona infected people are not counted more than once in the statistics - 600,000 Canadians illegally live in the United States Posts criticizing US immigration policy exaggerate number of undocumented Canadians - 'The Covid-19 epidemic is over in Belgium, the virus has lost its virulence, PCR tests are too sensitive... "Fear is over": this visual spreads several false claims around the Covid-19 epidemic' - source_sentence: "And when you think things can't get more absurd, we get this:\ \ A feminist group practicing for the \"Vagina Diversity Day\" celebration. If\ \ you want to join, don't say I didn't warn you in advance. 20.\n\n\U0001F926\ ‍♀️\U0001F926‍♀️\U0001F926‍♀️ Follow us on our second account [USER].\n\n#Spain\ \ #SanchezResign #SanchezGoNow #SpainUnited #OutWithCommunism #Hypocrisy #Madrid\ \ #Barcelona #Sevilla #Today #InstaSpain #Like #ETA #Partners #Censorship #Dictatorship\ \ #Chavismo #Ruination #Poverty #SpainDoesntWantYou #SanchezToPrison #IglesiasToPrison\ \ #SocialismIsRuination #CommunismIsRuination #LetTheMajoritySpeak #DemocraticSyrup\ \ #LongLiveSpain\U0001F1EA\U0001F1F8\t verdict: []" sentences: - Government-approved immunity booster for COVID-19 sold online Philippine regulators deny the supplement is approved and warn against ‘deceptive marketing’ related to COVID-19 - Polish public television says that Spain lives under a dictatorship The news of the public television of Poland spoke of “LGBT dictatorship” in Spain - The 'day of the diversity of the vaginas', the new occurrence of Equality to reinforce feminist empowerment The Spanish Ministry of Equality has not promoted in 2022 "the day of the diversity of vaginas" - source_sentence: "7:53 AM - 66% of the text: Elementary school teacher Bernie Gores\ \ passed away while shielding his students from gunfire at Robb Elementary School\ \ in Uvalde, Texas. *Alysa Graves, 21 minutes ago*\n\n7:53 AM - 6:49 AM: Search\ \ results from Bing, Yandex, and Google lead to Encyclopedia Dramatica's \"Picture\ \ of the Now\" from November 11, 2021, featuring a CNN report on Afghanistan.\n\ \n**CNN Breaking News:** Bernie Gores, a CNN journalist, has been executed in\ \ Kabul by Taliban soldiers. We are collaborating with US officials to evacuate\ \ several more journalists from Afghanistan. Our deepest condolences to Bernie's\ \ family. *12:40 AM - August 14, 201[unknown year]* *Alysa Graves, 19 minutes\ \ ago*\n\n40010101 - TinEye OL: 66%\n\n*I'll leave this information here.*\t verdict:\ \ []" sentences: - This video shows Chinese nationals holidaying on the Indian side of a lake that straddles the countries’ border Chinese tourists holiday in Indian hotspot? No, they’re actually in Tibet - CNN made up a dead man in Ukraine. This CNN tweet about a dead man in Ukraine is fake - 'CNN reported same man''s death in Afghanistan and Uvalde school shooting Fact check: False claim that CNN reported man''s death in Uvalde and Afghanistan' - source_sentence: "<OCR_START> BRASILIA time [URL] CDC Director says 75% of COVID\ \ deaths had at least 4 comorbidities [USER] SUPPORT INDEPENDENT JOURNALISM VIA\ \ PIX: [EMAIL] <OCR_END> <TEXT_START> What's your take on this? Follow [USER]\ \ for the full article link in stories. #cdc #covid_19 <TEXT_END>\t verdict: []" sentences: - The US CDC has reduced its COVID-19 death toll US health authorities have not cut reported COVID-19 death toll - The government of Mexico requests to register by WhatsApp to be a beneficiary of its Credits to the Word The Mexican government does not request to register by WhatsApp to obtain your Credits to the Word - 'Rochelle Walensky CDC Director: "Of the overwhelming number of deaths, more than 75% occurred in people who had at least four comorbidities" CDC director spoke about study that found comorbidities in vaccinees killed by covid-19' model-index: - name: SentenceTransformer based on NovaSearch/stella_en_400M_v5 results: - task: type: triplet name: Triplet dataset: name: Unknown type: unknown metrics: - type: cosine_accuracy value: 0.9789782888886669 name: Cosine Accuracy --- # SentenceTransformer based on NovaSearch/stella_en_400M_v5 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [NovaSearch/stella_en_400M_v5](https://huggingface.co/NovaSearch/stella_en_400M_v5) on the csv dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [NovaSearch/stella_en_400M_v5](https://huggingface.co/NovaSearch/stella_en_400M_v5) <!-- at revision 32b4baf84d02a1b1beb2df8952e875232e8ebe1d --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 1024 dimensions - **Similarity Function:** Cosine Similarity - **Training Dataset:** - csv <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: NewModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Dense({'in_features': 1024, 'out_features': 1024, 'bias': True, 'activation_function': 'torch.nn.modules.linear.Identity'}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("Bharatdeep-H/stella_finetuned_en_dataset_stella_400_20_translated_query_v3_w_v_MAX_50") # Run inference sentences = [ "<OCR_START> BRASILIA time [URL] CDC Director says 75% of COVID deaths had at least 4 comorbidities [USER] SUPPORT INDEPENDENT JOURNALISM VIA PIX: [EMAIL] <OCR_END> <TEXT_START> What's your take on this? Follow [USER] for the full article link in stories. #cdc #covid_19 <TEXT_END>\t verdict: []", 'Rochelle Walensky CDC Director: "Of the overwhelming number of deaths, more than 75% occurred in people who had at least four comorbidities" CDC director spoke about study that found comorbidities in vaccinees killed by covid-19', 'The US CDC has reduced its COVID-19 death toll US health authorities have not cut reported COVID-19 death toll', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 1024] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Triplet * Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator) | Metric | Value | |:--------------------|:----------| | **cosine_accuracy** | **0.979** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### csv * Dataset: csv * Size: 400,345 training samples * Columns: <code>query</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | query | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 12 tokens</li><li>mean: 126.05 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 35.3 tokens</li><li>max: 116 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 35.44 tokens</li><li>max: 191 tokens</li></ul> | * Samples: | query | positive | negative | |:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>The rupee's depreciation, along with the exit of multi-national corporations like Pepsi, KFC, Pizza Hut, and Coca-Cola, signifies challenging times for the country. verdict: ['Partly false information']</code> | <code>Four multinational firms are winding up operations in Sri Lanka Four multinational firms reject claims they are 'winding up operations' in Sri Lanka</code> | <code>Duque's economic balance is disastrous compared to Santos' A tweet erroneously compares economic data from the governments of Santos and Duque in Colombia</code> | | <code>KF94 masks and various particle sizes enlarged 50,000 times: Virus <0.125um, 2.5km ultrafine dust 2.5m.<br><br>With the mandatory mask-wearing policy, everyone is diligently wearing masks... Yet, there are more confirmed cases now compared to when mask-wearing wasn't mandatory... This is evidence that masks cannot block the Wuhan pneumonia virus... Still, everywhere you go... masks, masks everywhere... A time when you're considered crazy if you think rationally... verdict: ['Partly false information']</code> | <code>Virus that causes Covid-19 cannot be blocked by face masks Misleading face mask graphic shared in incorrect virus posts</code> | <code>“Stanford study results” show face masks are ineffective and dangerous. Paper about mask wearing was not from Stanford and makes false claims</code> | | <code>RAF 18 Olaf Scholz Source: "Aktuelle Kamera" 1984 Source Actual...<br><br>No, ...right?! Please tell me that's not true! verdict: ['Altered photo']</code> | <code>In 1984, Olaf Scholz appeared under an RAF logo in the GDR news program “Aktuelle Kamera”. This picture of Olaf Scholz in front of an RAF logo is a fake</code> | <code>The photo showing a dilapidated airport is from Kabul in August 2021 The image of an airport in ruins is not of Kabul in 2021, but of Karachi in 2014</code> | * Loss: [<code>TripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#tripletloss) with these parameters: ```json { "distance_metric": "TripletDistanceMetric.EUCLIDEAN", "triplet_margin": 5 } ``` ### Evaluation Dataset #### csv * Dataset: csv * Size: 400,345 evaluation samples * Columns: <code>query</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | query | positive | negative | |:--------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 12 tokens</li><li>mean: 117.77 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 35.32 tokens</li><li>max: 141 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 35.57 tokens</li><li>max: 150 tokens</li></ul> | * Samples: | query | positive | negative | |:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>#Ecuador An impressive explosion was recorded this afternoon at the Sangay Volcano. A dense ash cloud keeps the population on alert. At least 22 cantons in five provinces of Ecuador have reported ash fall. verdict: ['False']</code> | <code>Video of the eruption of the Sangay volcano in Ecuador in September 2020 The video of an erupting volcano is not recent nor of the Sangay in Ecuador</code> | <code>After Taal, another volcano eruption in the Philippines This video is from a 2015 report about the Kanlaon volcano eruption in the Philippines</code> | | <code>Did you notice, friends, that the number of COVID infections and deaths in India has dramatically decreased over the past two weeks? Previously, they were recording 3-4 million new infections and 3-4 thousand deaths per day, with bodies piling up and being disposed of in rivers. I noticed this and was shocked. I searched for news in Thailand but couldn't find anything. So, I called friends in Singapore and Malaysia, and they told me that India has discovered an extraordinary drug called Ivermectin, which can easily cure COVID. Originally discovered in 1970, it was used to treat eye infections in underdeveloped countries and later for animal diseases. It's easily accessible and very affordable. India started using Ivermectin to treat COVID in various states, including the hard-hit city of Mumbai. The local government even distributed Ivermectin to all residents for self-medication. The results were astonishing, with new infections and deaths plummeting.<br><br>When news spread that Ivermectin...</code> | <code>Ivermectin can be used to treat COVID-19. Ivermectin anthelmintic drug It has not been approved for use in the treatment of COVID-19.</code> | <code>Molnupiravir cures Covid-19 within 24 hours Posts misleadingly claim anti-viral drug could replace Covid-19 vaccines</code> | | <code>Why is the stench from the Trudeaus always buried? Alexandre Trudeau, brother of Canadian Prime Minister Justin Trudeau, was arrested on charges including sexual misconduct with a minor and possession of child pornography. He was booked into jail on Tuesday, as confirmed by the Montreal police (SPMV) in a statement to The Beaver. Additional charges include possession of a small quantity of crystal meth. Why was his bail set at only 50,000 CAD? Was his passport confiscated? verdict: ['False information']</code> | <code>Alexandre Trudeau arrested by the Montreal police Justin Trudeau's brother was not arrested for sexual misconduct</code> | <code>A new method of kidnapping in the Montreal area A new kidnapping technique in Montreal? Police in Quebec deny and photos have been circulating since at least 2019</code> | * Loss: [<code>TripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#tripletloss) with these parameters: ```json { "distance_metric": "TripletDistanceMetric.EUCLIDEAN", "triplet_margin": 5 } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 1 - `learning_rate`: 3e-05 - `max_steps`: 4000 - `lr_scheduler_type`: cosine - `warmup_ratio`: 0.2 - `bf16`: True - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 1 - `per_device_eval_batch_size`: 8 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 3e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 3.0 - `max_steps`: 4000 - `lr_scheduler_type`: cosine - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.2 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | Validation Loss | cosine_accuracy | |:------:|:----:|:-------------:|:---------------:|:---------------:| | 0.0003 | 100 | 1.5655 | - | - | | 0.0007 | 200 | 0.7517 | - | - | | 0.0010 | 300 | 0.4132 | - | - | | 0.0013 | 400 | 0.5225 | - | - | | 0.0017 | 500 | 0.5147 | - | - | | 0.0020 | 600 | 0.2795 | - | - | | 0.0023 | 700 | 0.5319 | - | - | | 0.0027 | 800 | 0.4263 | - | - | | 0.0030 | 900 | 0.766 | - | - | | 0.0033 | 1000 | 0.6864 | 0.6357 | 0.9585 | | 0.0037 | 1100 | 0.7172 | - | - | | 0.0040 | 1200 | 0.5172 | - | - | | 0.0043 | 1300 | 1.0088 | - | - | | 0.0047 | 1400 | 0.9853 | - | - | | 0.0050 | 1500 | 0.5999 | - | - | | 0.0053 | 1600 | 0.8267 | - | - | | 0.0057 | 1700 | 0.6835 | - | - | | 0.0060 | 1800 | 0.6136 | - | - | | 0.0063 | 1900 | 1.0621 | - | - | | 0.0067 | 2000 | 0.6602 | 0.6893 | 0.9597 | | 0.0070 | 2100 | 0.6659 | - | - | | 0.0073 | 2200 | 0.4748 | - | - | | 0.0077 | 2300 | 0.717 | - | - | | 0.0080 | 2400 | 0.6511 | - | - | | 0.0083 | 2500 | 0.3336 | - | - | | 0.0087 | 2600 | 0.45 | - | - | | 0.0090 | 2700 | 0.4516 | - | - | | 0.0093 | 2800 | 0.6044 | - | - | | 0.0097 | 2900 | 0.3644 | - | - | | 0.0100 | 3000 | 0.5219 | 0.4026 | 0.9752 | | 0.0103 | 3100 | 0.4302 | - | - | | 0.0107 | 3200 | 0.4322 | - | - | | 0.0110 | 3300 | 0.46 | - | - | | 0.0113 | 3400 | 0.5696 | - | - | | 0.0117 | 3500 | 0.383 | - | - | | 0.0120 | 3600 | 0.3649 | - | - | | 0.0123 | 3700 | 0.4096 | - | - | | 0.0127 | 3800 | 0.3919 | - | - | | 0.0130 | 3900 | 0.3003 | - | - | | 0.0133 | 4000 | 0.2425 | 0.3615 | 0.9790 | ### Framework Versions - Python: 3.10.16 - Sentence Transformers: 3.3.1 - Transformers: 4.49.0 - PyTorch: 2.5.1+cu121 - Accelerate: 1.4.0 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### TripletLoss ```bibtex @misc{hermans2017defense, title={In Defense of the Triplet Loss for Person Re-Identification}, author={Alexander Hermans and Lucas Beyer and Bastian Leibe}, year={2017}, eprint={1703.07737}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
[ "PCR" ]
DrGwin/setfit-paraphrase-mpnet-base-v2-sst2A
DrGwin
text-classification
[ "setfit", "safetensors", "mpnet", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:sentence-transformers/paraphrase-mpnet-base-v2", "base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2", "model-index", "region:us" ]
2025-03-02T23:44:41Z
2025-03-02T23:44:58+00:00
14
0
--- base_model: sentence-transformers/paraphrase-mpnet-base-v2 library_name: setfit metrics: - accuracy pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer widget: - text: 'for this reason and this reason only -- the power of its own steadfast , hoity-toity convictions -- chelsea walls deserves a medal . ' - text: 'aside from minor tinkering , this is the same movie you probably loved in 1994 , except that it looks even better . ' - text: 'cq ''s reflection of artists and the love of cinema-and-self suggests nothing less than a new voice that deserves to be considered as a possible successor to the best european directors . ' - text: 'i had to look away - this was god awful . ' - text: 'i ''ll bet the video game is a lot more fun than the film . ' inference: true model-index: - name: SetFit with sentence-transformers/paraphrase-mpnet-base-v2 results: - task: type: text-classification name: Text Classification dataset: name: Unknown type: unknown split: test metrics: - type: accuracy value: 0.89 name: Accuracy --- # SetFit with sentence-transformers/paraphrase-mpnet-base-v2 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 512 tokens - **Number of Classes:** 2 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:---------|:--------------------------------------------------------------------------------------------------------------------------------------------| | positive | <ul><li>'klein , charming in comedies like american pie and dead-on in election , '</li><li>'be fruitful '</li><li>'soulful and '</li></ul> | | negative | <ul><li>'covered earlier and much better '</li><li>'it too is a bomb . '</li><li>'guilty about it '</li></ul> | ## Evaluation ### Metrics | Label | Accuracy | |:--------|:---------| | **all** | 0.89 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("DrGwin/setfit-paraphrase-mpnet-base-v2-sst2A") # Run inference preds = model("i had to look away - this was god awful . ") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:-------|:----| | Word count | 2 | 9.55 | 46 | | Label | Training Sample Count | |:---------|:----------------------| | negative | 40 | | positive | 60 | ### Training Hyperparameters - batch_size: (16, 16) - num_epochs: (4, 4) - max_steps: -1 - sampling_strategy: oversampling - body_learning_rate: (2e-05, 1e-05) - head_learning_rate: 0.01 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - l2_weight: 0.01 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: True ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:------:|:----:|:-------------:|:---------------:| | 0.0030 | 1 | 0.4181 | - | | 0.1506 | 50 | 0.2514 | - | | 0.3012 | 100 | 0.0932 | - | | 0.4518 | 150 | 0.0029 | - | | 0.6024 | 200 | 0.001 | - | | 0.7530 | 250 | 0.0006 | - | | 0.9036 | 300 | 0.0006 | - | | 1.0 | 332 | - | 0.1722 | | 1.0542 | 350 | 0.0014 | - | | 1.2048 | 400 | 0.0004 | - | | 1.3554 | 450 | 0.0004 | - | | 1.5060 | 500 | 0.0095 | - | | 1.6566 | 550 | 0.0003 | - | | 1.8072 | 600 | 0.0003 | - | | 1.9578 | 650 | 0.0003 | - | | 2.0 | 664 | - | 0.1820 | | 2.1084 | 700 | 0.0003 | - | | 2.2590 | 750 | 0.0023 | - | | 2.4096 | 800 | 0.0003 | - | | 2.5602 | 850 | 0.0002 | - | | 2.7108 | 900 | 0.0002 | - | | 2.8614 | 950 | 0.0002 | - | | 3.0 | 996 | - | 0.1970 | | 3.0120 | 1000 | 0.0002 | - | | 3.1627 | 1050 | 0.0003 | - | | 3.3133 | 1100 | 0.0012 | - | | 3.4639 | 1150 | 0.0002 | - | | 3.6145 | 1200 | 0.0002 | - | | 3.7651 | 1250 | 0.0003 | - | | 3.9157 | 1300 | 0.001 | - | | 4.0 | 1328 | - | 0.1810 | ### Framework Versions - Python: 3.11.11 - SetFit: 1.1.1 - Sentence Transformers: 3.4.1 - Transformers: 4.48.3 - PyTorch: 2.5.1+cu124 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
[ "MEDAL" ]
RomainDarous/large_directFourEpoch_meanPooling_mistranslationModel
RomainDarous
sentence-similarity
[ "sentence-transformers", "safetensors", "xlm-roberta", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:4460010", "loss:CoSENTLoss", "dataset:RomainDarous/corrupted_os_by_language", "arxiv:1908.10084", "base_model:RomainDarous/large_directThreeEpoch_meanPooling_mistranslationModel", "base_model:finetune:RomainDarous/large_directThreeEpoch_meanPooling_mistranslationModel", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2025-03-05T00:02:29Z
2025-03-05T00:03:13+00:00
14
0
--- base_model: RomainDarous/large_directThreeEpoch_meanPooling_mistranslationModel datasets: - RomainDarous/corrupted_os_by_language library_name: sentence-transformers metrics: - pearson_cosine - spearman_cosine pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:4460010 - loss:CoSENTLoss widget: - source_sentence: Malformed target specific variable definition sentences: - Hedefe özgü değişken tanımı bozuk - Kan alle data in die gids lees - "слава Украине! героям слава!\uFEFF" - source_sentence: Can't write an inode bitmap sentences: - Skontrolujte stav aktualizácií alebo to skúste znova neskôr. - Malsukcesis skribi i nodan bitmapon - Zastępuje wersję GL obsługiwaną przez sterownik - source_sentence: Optimize soft proofing color transformations sentences: - 'arkadaslar biz artik her an kirmizi kart yiyecek,bencil,pas yapamayan,isabetsiz orta yapani istemiyoruz. sozde efsaneniz bu sezon Besiktasa en cok zarar verenlerden biriydi. kendini dusunmeden once Besiktasi dusunecek adam lazim bize. o yuzden #GoHomeQuaresma' - Yav bizim dedikodusunu yaptığımız insanın bile bi vizyonu var. Senin hakkında neden oturup konuşalım? - Ik ben een transgender. - source_sentence: 'Pass 1: Checking @is, @bs, and sizes' sentences: - Bu adam cidden kurabiye gibi ben bunu çayın yanında yerim - sagnat. errada. invisible. justificació. idioma - Wilt u echt de primaire sleutel verplaatsen? (j N) - source_sentence: Search for matching log entries sentences: - quem te lembra? caralho tô assustada aqui kkkkk - sendotasunik gabeko\ egoera bistaratuko den ala ez adierazten du - En aquest cas, hem d'incloure les imatges del contenidor )sr iov per a càrregues de treball de telco (per exemple, com a referència, es podrien obtenir des de valors de helm chart) model-index: - name: SentenceTransformer based on RomainDarous/large_directThreeEpoch_meanPooling_mistranslationModel results: - task: type: semantic-similarity name: Semantic Similarity dataset: name: sts eval type: sts-eval metrics: - type: pearson_cosine value: 0.980320627958563 name: Pearson Cosine - type: spearman_cosine value: 0.8655830126826171 name: Spearman Cosine - task: type: semantic-similarity name: Semantic Similarity dataset: name: sts test type: sts-test metrics: - type: pearson_cosine value: 0.9804333155239368 name: Pearson Cosine - type: spearman_cosine value: 0.865640780478526 name: Spearman Cosine --- # SentenceTransformer based on RomainDarous/large_directThreeEpoch_meanPooling_mistranslationModel This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [RomainDarous/large_directThreeEpoch_meanPooling_mistranslationModel](https://huggingface.co/RomainDarous/large_directThreeEpoch_meanPooling_mistranslationModel) on the [corrupted_open_os_by_language](https://huggingface.co/datasets/RomainDarous/corrupted_os_by_language) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [RomainDarous/large_directThreeEpoch_meanPooling_mistranslationModel](https://huggingface.co/RomainDarous/large_directThreeEpoch_meanPooling_mistranslationModel) <!-- at revision bc422140f1c78b1065a14873f780d44f9d659b55 --> - **Maximum Sequence Length:** 128 tokens - **Output Dimensionality:** 768 dimensions - **Similarity Function:** Cosine Similarity - **Training Dataset:** - [corrupted_open_os_by_language](https://huggingface.co/datasets/RomainDarous/corrupted_os_by_language) <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("RomainDarous/large_directFourEpoch_meanPooling_mistranslationModel") # Run inference sentences = [ 'Search for matching log entries', 'quem te lembra? caralho tô assustada aqui kkkkk', 'sendotasunik gabeko\\ egoera bistaratuko den ala ez adierazten du', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Semantic Similarity * Datasets: `sts-eval` and `sts-test` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | sts-eval | sts-test | |:--------------------|:-----------|:-----------| | pearson_cosine | 0.9803 | 0.9804 | | **spearman_cosine** | **0.8656** | **0.8656** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### corrupted_open_os_by_language * Dataset: [corrupted_open_os_by_language](https://huggingface.co/datasets/RomainDarous/corrupted_os_by_language) at [9d25780](https://huggingface.co/datasets/RomainDarous/corrupted_os_by_language/tree/9d25780e2032b1e8f06af6a4ff55124d7a930c3c) * Size: 4,460,010 training samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------| | type | string | string | int | | details | <ul><li>min: 6 tokens</li><li>mean: 18.33 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 26.47 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>0: ~50.60%</li><li>1: ~49.40%</li></ul> | * Samples: | sentence1 | sentence2 | score | |:--------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------|:---------------| | <code>Check spelling. Print the document. Show completion window. General. Show help</code> | <code>Kontrolli õigekirja. присоединяюсь. </code> | <code>0</code> | | <code>EXIF not supported for this file format.</code> | <code>Šiam failo formatui EXIF nepalaikomas.</code> | <code>1</code> | | <code>This package includes the documentation for texlive everyhook</code> | <code>Paket ini menyertakan dokumentasi untuk texlive everyhook</code> | <code>1</code> | * Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" } ``` ### Evaluation Dataset #### corrupted_open_os_by_language * Dataset: [corrupted_open_os_by_language](https://huggingface.co/datasets/RomainDarous/corrupted_os_by_language) at [9d25780](https://huggingface.co/datasets/RomainDarous/corrupted_os_by_language/tree/9d25780e2032b1e8f06af6a4ff55124d7a930c3c) * Size: 4,460,010 evaluation samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------| | type | string | string | int | | details | <ul><li>min: 5 tokens</li><li>mean: 17.71 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 26.95 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>0: ~50.60%</li><li>1: ~49.40%</li></ul> | * Samples: | sentence1 | sentence2 | score | |:----------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------| | <code>Could not identify the current seat.</code> | <code> 天天花着男人的钱还这这创造新词汇男权你可真牛批,你也就这一出了一问男权,就说是我是吧,到现在我也没听到你给我们讲的男权,你也就是在网上喷喷,现实走道都不敢探头自卑,你现实要把你女权的劲拿出来总低啥头,您老应该去国家教育局把男权加上是吧,你们女权天天说自己生活不好没地位,给你们地位了你们能干啥?用你们的女权打到全世界男性是吧,能相出男权这一词您老也是人才呀,是不是庆幸自己是个女的,活在自己想想的世界里不觉得孤单吗,假象有男权是吧,自己假象和男权还说自己不是田园女权,田园女权能连自己都骂说自己妈是驴爸是大鼎的也是奇葩呀,那我们国家大肆宣扬过你们这么田园女权吗,国家要的是女性人群自主自理,你们可好看看你们女权干的啥事,给你们女权地位高了,看看你们女权干的事n绿地集团高管怎么都不说呀,人家可是有钱有地位,也不是我们说三从四德洗衣做饭你们女权会吗?,那我问问你们女权干过啥惊天大事,还甩锅给孔子,还封建社会,那我问问你们女权在福利面前为啥说自己是女性呀不是社会主义社会吗不应该男女平等吗,天天自己也不知道是不是抱个手机天天欧巴欧巴,你家那位要是不陪你看一会就会问你是不是不爱我了是吧大姐,您老也就赚这白菜钱操心国家事,中国五千年的历史被您老一句否决,还嘲讽人家日本女性,好意思说自己不是女权,三从四德流传这么久到您这变成日本文化了,我就想问问男权您老是怎么想的,那你问孔子老人家呗为什么女人要三从四德,我说的是女权你干嘛自己对号入座,连中华人民传承的东西都不认跟我这谈男权,还男权您老给我举个例子呗,让我们男权听听都是h啥,这些不都是你们女权的标准吗?,还男权,您老醒醒吧这里是现实,不是你的公主世界,总觉得自己多么多么重要,地球没你是不能转了还是人类要灭亡呀,我真的想问一句你给我找一条男权的新闻,咋了我们男人不能提女权呗你老授权了呗,那我们谈论田园女权你老对号入座干嘛,天天过节要礼物,还嫌弃自己男朋友没有钱,我寻思你找个有钱人包养你呗,对了有钱人怎么可能看上你这种女权的呢,还要孩子跟女方姓我也没看见你没跟你妈姓呀,年年过节男人给你们送礼物你们女人给男人送过礼物吗?,一问我不是陪着他吗我对他说我爱你了这不是最好的礼物吗?,男人只要不送礼物就是不爱你们了呗,人家国际女权讲的男人能做的我们女人也能做,田园女权男人能做的我们女人为啥要做,还男权我笑了,以前结婚几头牛换个衣服原装的,现在几十万彩...</code> | <code>0</code> | | <code>Undoing Date and Time Adjustment</code> | <code>正在取消日期和时间调整</code> | <code>1</code> | | <code>Dependency package for gsl_2_6 gnu hpc</code> | <code>Pacotes de desenvolvimento do KDE</code> | <code>1</code> | * Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 64 - `per_device_eval_batch_size`: 64 - `num_train_epochs`: 1 - `warmup_ratio`: 0.1 #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 64 - `per_device_eval_batch_size`: 64 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 1 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | corrupted open os by language loss | sts-eval_spearman_cosine | sts-test_spearman_cosine | |:-----:|:-----:|:-------------:|:----------------------------------:|:------------------------:|:------------------------:| | 1.0 | 55751 | 0.0771 | 0.2658 | 0.8656 | - | | -1 | -1 | - | - | - | 0.8656 | ### Framework Versions - Python: 3.10.13 - Sentence Transformers: 3.4.1 - Transformers: 4.48.2 - PyTorch: 2.1.2+cu121 - Accelerate: 1.3.0 - Datasets: 2.16.1 - Tokenizers: 0.21.0 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### CoSENTLoss ```bibtex @online{kexuefm-8847, title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT}, author={Su Jianlin}, year={2022}, month={Jan}, url={https://kexue.fm/archives/8847}, } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
[ "CAS" ]
FriendliAI/MiniCPM-V-2_6
FriendliAI
image-text-to-text
[ "transformers", "safetensors", "minicpmv", "feature-extraction", "minicpm-v", "vision", "ocr", "multi-image", "video", "custom_code", "image-text-to-text", "conversational", "multilingual", "dataset:openbmb/RLAIF-V-Dataset", "arxiv:2408.01800", "region:us" ]
2025-03-05T08:08:03Z
2025-03-05T08:10:42+00:00
14
0
--- datasets: - openbmb/RLAIF-V-Dataset language: - multilingual library_name: transformers pipeline_tag: image-text-to-text tags: - minicpm-v - vision - ocr - multi-image - video - custom_code --- <h1>A GPT-4V Level MLLM for Single Image, Multi Image and Video on Your Phone</h1> [GitHub](https://github.com/OpenBMB/MiniCPM-V) | [Demo](http://120.92.209.146:8887/)</a> ## News <!-- omit in toc --> * [2025.01.14] 🔥🔥 We open source [**MiniCPM-o 2.6**](https://huggingface.co/openbmb/MiniCPM-o-2_6), with significant performance improvement over **MiniCPM-V 2.6**, and support real-time speech-to-speech conversation and multimodal live streaming. Try it now. ## MiniCPM-V 2.6 **MiniCPM-V 2.6** is the latest and most capable model in the MiniCPM-V series. The model is built on SigLip-400M and Qwen2-7B with a total of 8B parameters. It exhibits a significant performance improvement over MiniCPM-Llama3-V 2.5, and introduces new features for multi-image and video understanding. Notable features of MiniCPM-V 2.6 include: - 🔥 **Leading Performance.** MiniCPM-V 2.6 achieves an average score of 65.2 on the latest version of OpenCompass, a comprehensive evaluation over 8 popular benchmarks. **With only 8B parameters, it surpasses widely used proprietary models like GPT-4o mini, GPT-4V, Gemini 1.5 Pro, and Claude 3.5 Sonnet** for single image understanding. - 🖼️ **Multi Image Understanding and In-context Learning.** MiniCPM-V 2.6 can also perform **conversation and reasoning over multiple images**. It achieves **state-of-the-art performance** on popular multi-image benchmarks such as Mantis-Eval, BLINK, Mathverse mv and Sciverse mv, and also shows promising in-context learning capability. - 🎬 **Video Understanding.** MiniCPM-V 2.6 can also **accept video inputs**, performing conversation and providing dense captions for spatial-temporal information. It outperforms **GPT-4V, Claude 3.5 Sonnet and LLaVA-NeXT-Video-34B** on Video-MME with/without subtitles. - 💪 **Strong OCR Capability and Others.** MiniCPM-V 2.6 can process images with any aspect ratio and up to 1.8 million pixels (e.g., 1344x1344). It achieves **state-of-the-art performance on OCRBench, surpassing proprietary models such as GPT-4o, GPT-4V, and Gemini 1.5 Pro**. Based on the the latest [RLAIF-V](https://github.com/RLHF-V/RLAIF-V/) and [VisCPM](https://github.com/OpenBMB/VisCPM) techniques, it features **trustworthy behaviors**, with significantly lower hallucination rates than GPT-4o and GPT-4V on Object HalBench, and supports **multilingual capabilities** on English, Chinese, German, French, Italian, Korean, etc. - 🚀 **Superior Efficiency.** In addition to its friendly size, MiniCPM-V 2.6 also shows **state-of-the-art token density** (i.e., number of pixels encoded into each visual token). **It produces only 640 tokens when processing a 1.8M pixel image, which is 75% fewer than most models**. This directly improves the inference speed, first-token latency, memory usage, and power consumption. As a result, MiniCPM-V 2.6 can efficiently support **real-time video understanding** on end-side devices such as iPad. - 💫 **Easy Usage.** MiniCPM-V 2.6 can be easily used in various ways: (1) [llama.cpp](https://github.com/OpenBMB/llama.cpp/blob/minicpmv-main/examples/llava/README-minicpmv2.6.md) and [ollama](https://github.com/OpenBMB/ollama/tree/minicpm-v2.6) support for efficient CPU inference on local devices, (2) [int4](https://huggingface.co/openbmb/MiniCPM-V-2_6-int4) and [GGUF](https://huggingface.co/openbmb/MiniCPM-V-2_6-gguf) format quantized models in 16 sizes, (3) [vLLM](https://github.com/OpenBMB/MiniCPM-V/tree/main?tab=readme-ov-file#inference-with-vllm) support for high-throughput and memory-efficient inference, (4) fine-tuning on new domains and tasks, (5) quick local WebUI demo setup with [Gradio](https://github.com/OpenBMB/MiniCPM-V/tree/main?tab=readme-ov-file#chat-with-our-demo-on-gradio) and (6) online web [demo](http://120.92.209.146:8887). ### Evaluation <!-- omit in toc --> <div align="center"> <img src="https://github.com/OpenBMB/MiniCPM-V/raw/main/assets/radar_final.png" width=66% /> </div> #### Single image results on OpenCompass, MME, MMVet, OCRBench, MMMU, MathVista, MMB, AI2D, TextVQA, DocVQA, HallusionBench, Object HalBench: <div align="center"> ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64abc4aa6cadc7aca585dddf/QVl0iPtT5aUhlvViyEpgs.png) </div> <sup>*</sup> We evaluate this benchmark using chain-of-thought prompting. <sup>+</sup> Token Density: number of pixels encoded into each visual token at maximum resolution, i.e., # pixels at maximum resolution / # visual tokens. Note: For proprietary models, we calculate token density based on the image encoding charging strategy defined in the official API documentation, which provides an upper-bound estimation. #### Multi-image results on Mantis Eval, BLINK Val, Mathverse mv, Sciverse mv, MIRB: <div align="center"> ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64abc4aa6cadc7aca585dddf/o6FGHytRhzeatmhxq0Dbi.png) </div> <sup>*</sup> We evaluate the officially released checkpoint by ourselves. #### Video results on Video-MME and Video-ChatGPT: <div align="center"> <!-- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64abc4aa6cadc7aca585dddf/_T1mw5yhqNCqVdYRTQOGu.png) --> ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64abc4aa6cadc7aca585dddf/jmrjoRr8SFLkrstjDmpaV.png) </div> <details> <summary>Click to view few-shot results on TextVQA, VizWiz, VQAv2, OK-VQA.</summary> <div align="center"> ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64abc4aa6cadc7aca585dddf/zXIuiCTTe-POqKGHszdn0.png) </div> * denotes zero image shot and two additional text shots following Flamingo. <sup>+</sup> We evaluate the pretraining ckpt without SFT. </details> ### Examples <!-- omit in toc --> <div style="display: flex; flex-direction: column; align-items: center;"> <img src="https://github.com/OpenBMB/MiniCPM-V/raw/main/assets/minicpmv2_6/multi_img-bike.png" alt="Bike" style="margin-bottom: -20px;"> <img src="https://github.com/OpenBMB/MiniCPM-V/raw/main/assets/minicpmv2_6/multi_img-menu.png" alt="Menu" style="margin-bottom: -20px;"> <img src="https://github.com/OpenBMB/MiniCPM-V/raw/main/assets/minicpmv2_6/multi_img-code.png" alt="Code" style="margin-bottom: -20px;"> <img src="https://github.com/OpenBMB/MiniCPM-V/raw/main/assets/minicpmv2_6/ICL-Mem.png" alt="Mem" style="margin-bottom: -20px;"> <img src="https://github.com/OpenBMB/MiniCPM-V/raw/main/assets/minicpmv2_6/multiling-medal.png" alt="medal" style="margin-bottom: 10px;"> </div> <details> <summary>Click to view more cases.</summary> <div style="display: flex; flex-direction: column; align-items: center;"> <img src="https://github.com/OpenBMB/MiniCPM-V/raw/main/assets/minicpmv2_6/ICL-elec.png" alt="elec" style="margin-bottom: -20px;"> <img src="https://github.com/OpenBMB/MiniCPM-V/raw/main/assets/minicpmv2_6/multiling-olympic.png" alt="Menu" style="margin-bottom: 10px;"> </div> </details> We deploy MiniCPM-V 2.6 on end devices. The demo video is the raw screen recording on a iPad Pro without edition. <div style="display: flex; justify-content: center;"> <img src="https://github.com/OpenBMB/MiniCPM-V/raw/main/assets/gif_cases/ai.gif" width="48%" style="margin: 0 10px;"/> <img src="https://github.com/OpenBMB/MiniCPM-V/raw/main/assets/gif_cases/beer.gif" width="48%" style="margin: 0 10px;"/> </div> <div style="display: flex; justify-content: center; margin-top: 20px;"> <img src="https://github.com/OpenBMB/MiniCPM-V/raw/main/assets/gif_cases/ticket.gif" width="48%" style="margin: 0 10px;"/> <img src="https://github.com/OpenBMB/MiniCPM-V/raw/main/assets/gif_cases/wfh.gif" width="48%" style="margin: 0 10px;"/> </div> <div style="text-align: center;"> <video controls autoplay src="https://hf.fast360.xyz/production/uploads/64abc4aa6cadc7aca585dddf/mXAEFQFqNd4nnvPk7r5eX.mp4"></video> <!-- <video controls autoplay src="https://hf.fast360.xyz/production/uploads/64abc4aa6cadc7aca585dddf/fEWzfHUdKnpkM7sdmnBQa.mp4"></video> --> </div> ## Demo Click here to try the Demo of [MiniCPM-V 2.6](http://120.92.209.146:8887/). ## Usage Inference using Huggingface transformers on NVIDIA GPUs. Requirements tested on python 3.10: ``` Pillow==10.1.0 torch==2.1.2 torchvision==0.16.2 transformers==4.40.0 sentencepiece==0.1.99 decord ``` ```python # test.py import torch from PIL import Image from transformers import AutoModel, AutoTokenizer model = AutoModel.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True, attn_implementation='sdpa', torch_dtype=torch.bfloat16) # sdpa or flash_attention_2, no eager model = model.eval().cuda() tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True) image = Image.open('xx.jpg').convert('RGB') question = 'What is in the image?' msgs = [{'role': 'user', 'content': [image, question]}] res = model.chat( image=None, msgs=msgs, tokenizer=tokenizer ) print(res) ## if you want to use streaming, please make sure sampling=True and stream=True ## the model.chat will return a generator res = model.chat( image=None, msgs=msgs, tokenizer=tokenizer, sampling=True, stream=True ) generated_text = "" for new_text in res: generated_text += new_text print(new_text, flush=True, end='') ``` ### Chat with multiple images <details> <summary> Click to show Python code running MiniCPM-V 2.6 with multiple images input. </summary> ```python import torch from PIL import Image from transformers import AutoModel, AutoTokenizer model = AutoModel.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True, attn_implementation='sdpa', torch_dtype=torch.bfloat16) # sdpa or flash_attention_2, no eager model = model.eval().cuda() tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True) image1 = Image.open('image1.jpg').convert('RGB') image2 = Image.open('image2.jpg').convert('RGB') question = 'Compare image 1 and image 2, tell me about the differences between image 1 and image 2.' msgs = [{'role': 'user', 'content': [image1, image2, question]}] answer = model.chat( image=None, msgs=msgs, tokenizer=tokenizer ) print(answer) ``` </details> ### In-context few-shot learning <details> <summary> Click to view Python code running MiniCPM-V 2.6 with few-shot input. </summary> ```python import torch from PIL import Image from transformers import AutoModel, AutoTokenizer model = AutoModel.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True, attn_implementation='sdpa', torch_dtype=torch.bfloat16) # sdpa or flash_attention_2, no eager model = model.eval().cuda() tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True) question = "production date" image1 = Image.open('example1.jpg').convert('RGB') answer1 = "2023.08.04" image2 = Image.open('example2.jpg').convert('RGB') answer2 = "2007.04.24" image_test = Image.open('test.jpg').convert('RGB') msgs = [ {'role': 'user', 'content': [image1, question]}, {'role': 'assistant', 'content': [answer1]}, {'role': 'user', 'content': [image2, question]}, {'role': 'assistant', 'content': [answer2]}, {'role': 'user', 'content': [image_test, question]} ] answer = model.chat( image=None, msgs=msgs, tokenizer=tokenizer ) print(answer) ``` </details> ### Chat with video <details> <summary> Click to view Python code running MiniCPM-V 2.6 with video input. </summary> ```python import torch from PIL import Image from transformers import AutoModel, AutoTokenizer from decord import VideoReader, cpu # pip install decord model = AutoModel.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True, attn_implementation='sdpa', torch_dtype=torch.bfloat16) # sdpa or flash_attention_2, no eager model = model.eval().cuda() tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True) MAX_NUM_FRAMES=64 # if cuda OOM set a smaller number def encode_video(video_path): def uniform_sample(l, n): gap = len(l) / n idxs = [int(i * gap + gap / 2) for i in range(n)] return [l[i] for i in idxs] vr = VideoReader(video_path, ctx=cpu(0)) sample_fps = round(vr.get_avg_fps() / 1) # FPS frame_idx = [i for i in range(0, len(vr), sample_fps)] if len(frame_idx) > MAX_NUM_FRAMES: frame_idx = uniform_sample(frame_idx, MAX_NUM_FRAMES) frames = vr.get_batch(frame_idx).asnumpy() frames = [Image.fromarray(v.astype('uint8')) for v in frames] print('num frames:', len(frames)) return frames video_path ="video_test.mp4" frames = encode_video(video_path) question = "Describe the video" msgs = [ {'role': 'user', 'content': frames + [question]}, ] # Set decode params for video params={} params["use_image_id"] = False params["max_slice_nums"] = 2 # use 1 if cuda OOM and video resolution > 448*448 answer = model.chat( image=None, msgs=msgs, tokenizer=tokenizer, **params ) print(answer) ``` </details> Please look at [GitHub](https://github.com/OpenBMB/MiniCPM-V) for more detail about usage. ## Inference with llama.cpp<a id="llamacpp"></a> MiniCPM-V 2.6 can run with llama.cpp. See our fork of [llama.cpp](https://github.com/OpenBMB/llama.cpp/tree/minicpm-v2.5/examples/minicpmv) for more detail. ## Int4 quantized version Download the int4 quantized version for lower GPU memory (7GB) usage: [MiniCPM-V-2_6-int4](https://huggingface.co/openbmb/MiniCPM-V-2_6-int4). ## License #### Model License * The code in this repo is released under the [Apache-2.0](https://github.com/OpenBMB/MiniCPM/blob/main/LICENSE) License. * The usage of MiniCPM-V series model weights must strictly follow [MiniCPM Model License.md](https://github.com/OpenBMB/MiniCPM/blob/main/MiniCPM%20Model%20License.md). * The models and weights of MiniCPM are completely free for academic research. After filling out a ["questionnaire"](https://modelbest.feishu.cn/share/base/form/shrcnpV5ZT9EJ6xYjh3Kx0J6v8g) for registration, MiniCPM-V 2.6 weights are also available for free commercial use. #### Statement * As an LMM, MiniCPM-V 2.6 generates contents by learning a large mount of multimodal corpora, but it cannot comprehend, express personal opinions or make value judgement. Anything generated by MiniCPM-V 2.6 does not represent the views and positions of the model developers * We will not be liable for any problems arising from the use of the MinCPM-V models, including but not limited to data security issues, risk of public opinion, or any risks and problems arising from the misdirection, misuse, dissemination or misuse of the model. ## Key Techniques and Other Multimodal Projects 👏 Welcome to explore key techniques of MiniCPM-V 2.6 and other multimodal projects of our team: [VisCPM](https://github.com/OpenBMB/VisCPM/tree/main) | [RLHF-V](https://github.com/RLHF-V/RLHF-V) | [LLaVA-UHD](https://github.com/thunlp/LLaVA-UHD) | [RLAIF-V](https://github.com/RLHF-V/RLAIF-V) ## Citation If you find our work helpful, please consider citing our papers 📝 and liking this project ❤️! ```bib @article{yao2024minicpm, title={MiniCPM-V: A GPT-4V Level MLLM on Your Phone}, author={Yao, Yuan and Yu, Tianyu and Zhang, Ao and Wang, Chongyi and Cui, Junbo and Zhu, Hongji and Cai, Tianchi and Li, Haoyu and Zhao, Weilin and He, Zhihui and others}, journal={arXiv preprint arXiv:2408.01800}, year={2024} } ```
[ "MEDAL" ]
Darkrider/covidbert_mednli
Darkrider
null
[ "transformers", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04Z
2021-03-07T15:20:12+00:00
13
0
--- {} --- # CovidBERT-MedNLI This is the model **CovidBERT** trained by DeepSet on AllenAI's [CORD19 Dataset](https://pages.semanticscholar.org/coronavirus-research) of scientific articles about coronaviruses. The model uses the original BERT wordpiece vocabulary and was subsequently fine-tuned on the [SNLI](https://nlp.stanford.edu/projects/snli/) and the [MultiNLI](https://www.nyu.edu/projects/bowman/multinli/) datasets using the [`sentence-transformers` library](https://github.com/UKPLab/sentence-transformers/) to produce universal sentence embeddings [1] using the **average pooling strategy** and a **softmax loss**. It is further fine-tuned on both MedNLI datasets available at Physionet. [ACL-BIONLP 2019](https://physionet.org/content/mednli-bionlp19/1.0.1/) [MedNLI from MIMIC](https://physionet.org/content/mednli/1.0.0/) Parameter details for the original training on CORD-19 are available on [DeepSet's MLFlow](https://public-mlflow.deepset.ai/#/experiments/2/runs/ba27d00c30044ef6a33b1d307b4a6cba) **Base model**: `deepset/covid_bert_base` from HuggingFace's `AutoModel`.
[ "MEDNLI" ]
domenicrosati/deberta-v3-large-finetuned-syndag-multiclass-remove-google-scielo
domenicrosati
text-classification
[ "transformers", "pytorch", "tensorboard", "deberta-v2", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-09-09T10:37:53Z
2022-09-09T19:48:03+00:00
13
0
--- license: mit metrics: - f1 - precision - recall tags: - text-classification - generated_from_trainer model-index: - name: deberta-v3-large-finetuned-syndag-multiclass-remove-google-scielo results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v3-large-finetuned-syndag-multiclass-remove-google-scielo This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0214 - F1: 0.9967 - Precision: 0.9967 - Recall: 0.9967 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Precision | Recall | |:-------------:|:-----:|:-----:|:---------------:|:------:|:---------:|:------:| | 0.0169 | 1.0 | 10771 | 0.0258 | 0.9943 | 0.9943 | 0.9943 | | 0.0122 | 2.0 | 21542 | 0.0235 | 0.9956 | 0.9956 | 0.9956 | | 0.0111 | 3.0 | 32313 | 0.0219 | 0.9964 | 0.9964 | 0.9964 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
[ "SCIELO" ]
AntoineBlanot/roberta-span-detection
AntoineBlanot
token-classification
[ "transformers", "pytorch", "roberta", "feature-extraction", "token-classification", "en", "dataset:tner/bc5cdr", "dataset:tner/bionlp2004", "dataset:tner/btc", "dataset:tner/conll2003", "dataset:tner/fin", "dataset:tner/mit_movie_trivia", "dataset:tner/mit_restaurant", "dataset:tner/multinerd", "dataset:tner/ontonotes5", "dataset:tner/tweebank_ner", "dataset:tner/tweetner7", "dataset:tner/wikineural", "dataset:tner/wnut2017", "endpoints_compatible", "region:us" ]
2023-05-24T07:35:45Z
2023-06-05T08:24:54+00:00
13
0
--- datasets: - tner/bc5cdr - tner/bionlp2004 - tner/btc - tner/conll2003 - tner/fin - tner/mit_movie_trivia - tner/mit_restaurant - tner/multinerd - tner/ontonotes5 - tner/tweebank_ner - tner/tweetner7 - tner/wikineural - tner/wnut2017 language: - en metrics: - accuracy - f1 pipeline_tag: token-classification --- # RoBERTa Span Detection This model is a fine-tuned model of [roberta-large](https://huggingface.co/roberta-large) after being trained on a **mixture of NER datasets**. Basically, this model can detect NER spans (with <u>no differenciation on classes</u>). Labels use the IBO format and are: - 'B-TAG': beginning token of span - 'I-TAG': inside token of span - 'O': token not a span # Usage This model has been trained in an efficient way and thus cannot be load directly from HuggingFace's hub. To use that model, please follow instructions on this [repo](https://github.com/AntoineBlanot/efficient-llm). # Data used for training - tner/bc5cdr - tner/bionlp2004 - tner/btc - tner/conll2003 - tner/fin - tner/mit_movie_trivia - tner/mit_restaurant - tner/multinerd - tner/ontonotes5 - tner/tweebank_ner - tner/tweetner7 - tner/wikineural - tner/wnut2017 # Evaluation results | Data | Accuracy | |:---:|:---------:| | validation | 0.972 |
[ "BC5CDR" ]
michaelfeil/ct2fast-e5-large-v2
michaelfeil
sentence-similarity
[ "sentence-transformers", "bert", "ctranslate2", "int8", "float16", "mteb", "Sentence Transformers", "sentence-similarity", "en", "arxiv:2212.03533", "arxiv:2104.08663", "arxiv:2210.07316", "license:mit", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2023-06-15T20:11:32Z
2023-10-13T13:37:36+00:00
13
3
--- language: - en license: mit tags: - ctranslate2 - int8 - float16 - mteb - Sentence Transformers - sentence-similarity - sentence-transformers model-index: - name: e5-large-v2 results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 79.22388059701493 - type: ap value: 43.20816505595132 - type: f1 value: 73.27811303522058 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 93.748325 - type: ap value: 90.72534979701297 - type: f1 value: 93.73895874282185 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 48.612 - type: f1 value: 47.61157345898393 - task: type: Retrieval dataset: name: MTEB ArguAna type: arguana config: default split: test revision: None metrics: - type: map_at_1 value: 23.541999999999998 - type: map_at_10 value: 38.208 - type: map_at_100 value: 39.417 - type: map_at_1000 value: 39.428999999999995 - type: map_at_3 value: 33.95 - type: map_at_5 value: 36.329 - type: mrr_at_1 value: 23.755000000000003 - type: mrr_at_10 value: 38.288 - type: mrr_at_100 value: 39.511 - type: mrr_at_1000 value: 39.523 - type: mrr_at_3 value: 34.009 - type: mrr_at_5 value: 36.434 - type: ndcg_at_1 value: 23.541999999999998 - type: ndcg_at_10 value: 46.417 - type: ndcg_at_100 value: 51.812000000000005 - type: ndcg_at_1000 value: 52.137 - type: ndcg_at_3 value: 37.528 - type: ndcg_at_5 value: 41.81 - type: precision_at_1 value: 23.541999999999998 - type: precision_at_10 value: 7.269 - type: precision_at_100 value: 0.9690000000000001 - type: precision_at_1000 value: 0.099 - type: precision_at_3 value: 15.979 - type: precision_at_5 value: 11.664 - type: recall_at_1 value: 23.541999999999998 - type: recall_at_10 value: 72.688 - type: recall_at_100 value: 96.871 - type: recall_at_1000 value: 99.431 - type: recall_at_3 value: 47.937000000000005 - type: recall_at_5 value: 58.321 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 45.546499570522094 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 41.01607489943561 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 59.616107510107774 - type: mrr value: 72.75106626214661 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 84.33018094733868 - type: cos_sim_spearman value: 83.60190492611737 - type: euclidean_pearson value: 82.1492450218961 - type: euclidean_spearman value: 82.70308926526991 - type: manhattan_pearson value: 81.93959600076842 - type: manhattan_spearman value: 82.73260801016369 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 84.54545454545455 - type: f1 value: 84.49582530928923 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 37.362725540120096 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 34.849509608178145 - task: type: Retrieval dataset: name: MTEB CQADupstackAndroidRetrieval type: BeIR/cqadupstack config: default split: test revision: None metrics: - type: map_at_1 value: 31.502999999999997 - type: map_at_10 value: 43.323 - type: map_at_100 value: 44.708999999999996 - type: map_at_1000 value: 44.838 - type: map_at_3 value: 38.987 - type: map_at_5 value: 41.516999999999996 - type: mrr_at_1 value: 38.769999999999996 - type: mrr_at_10 value: 49.13 - type: mrr_at_100 value: 49.697 - type: mrr_at_1000 value: 49.741 - type: mrr_at_3 value: 45.804 - type: mrr_at_5 value: 47.842 - type: ndcg_at_1 value: 38.769999999999996 - type: ndcg_at_10 value: 50.266999999999996 - type: ndcg_at_100 value: 54.967 - type: ndcg_at_1000 value: 56.976000000000006 - type: ndcg_at_3 value: 43.823 - type: ndcg_at_5 value: 47.12 - type: precision_at_1 value: 38.769999999999996 - type: precision_at_10 value: 10.057 - type: precision_at_100 value: 1.554 - type: precision_at_1000 value: 0.202 - type: precision_at_3 value: 21.125 - type: precision_at_5 value: 15.851 - type: recall_at_1 value: 31.502999999999997 - type: recall_at_10 value: 63.715999999999994 - type: recall_at_100 value: 83.61800000000001 - type: recall_at_1000 value: 96.63199999999999 - type: recall_at_3 value: 45.403 - type: recall_at_5 value: 54.481 - type: map_at_1 value: 27.833000000000002 - type: map_at_10 value: 37.330999999999996 - type: map_at_100 value: 38.580999999999996 - type: map_at_1000 value: 38.708 - type: map_at_3 value: 34.713 - type: map_at_5 value: 36.104 - type: mrr_at_1 value: 35.223 - type: mrr_at_10 value: 43.419000000000004 - type: mrr_at_100 value: 44.198 - type: mrr_at_1000 value: 44.249 - type: mrr_at_3 value: 41.614000000000004 - type: mrr_at_5 value: 42.553000000000004 - type: ndcg_at_1 value: 35.223 - type: ndcg_at_10 value: 42.687999999999995 - type: ndcg_at_100 value: 47.447 - type: ndcg_at_1000 value: 49.701 - type: ndcg_at_3 value: 39.162 - type: ndcg_at_5 value: 40.557 - type: precision_at_1 value: 35.223 - type: precision_at_10 value: 7.962 - type: precision_at_100 value: 1.304 - type: precision_at_1000 value: 0.18 - type: precision_at_3 value: 19.023 - type: precision_at_5 value: 13.184999999999999 - type: recall_at_1 value: 27.833000000000002 - type: recall_at_10 value: 51.881 - type: recall_at_100 value: 72.04 - type: recall_at_1000 value: 86.644 - type: recall_at_3 value: 40.778 - type: recall_at_5 value: 45.176 - type: map_at_1 value: 38.175 - type: map_at_10 value: 51.174 - type: map_at_100 value: 52.26499999999999 - type: map_at_1000 value: 52.315999999999995 - type: map_at_3 value: 47.897 - type: map_at_5 value: 49.703 - type: mrr_at_1 value: 43.448 - type: mrr_at_10 value: 54.505 - type: mrr_at_100 value: 55.216 - type: mrr_at_1000 value: 55.242000000000004 - type: mrr_at_3 value: 51.98500000000001 - type: mrr_at_5 value: 53.434000000000005 - type: ndcg_at_1 value: 43.448 - type: ndcg_at_10 value: 57.282 - type: ndcg_at_100 value: 61.537 - type: ndcg_at_1000 value: 62.546 - type: ndcg_at_3 value: 51.73799999999999 - type: ndcg_at_5 value: 54.324 - type: precision_at_1 value: 43.448 - type: precision_at_10 value: 9.292 - type: precision_at_100 value: 1.233 - type: precision_at_1000 value: 0.136 - type: precision_at_3 value: 23.218 - type: precision_at_5 value: 15.887 - type: recall_at_1 value: 38.175 - type: recall_at_10 value: 72.00999999999999 - type: recall_at_100 value: 90.155 - type: recall_at_1000 value: 97.257 - type: recall_at_3 value: 57.133 - type: recall_at_5 value: 63.424 - type: map_at_1 value: 22.405 - type: map_at_10 value: 30.043 - type: map_at_100 value: 31.191000000000003 - type: map_at_1000 value: 31.275 - type: map_at_3 value: 27.034000000000002 - type: map_at_5 value: 28.688000000000002 - type: mrr_at_1 value: 24.068 - type: mrr_at_10 value: 31.993 - type: mrr_at_100 value: 32.992 - type: mrr_at_1000 value: 33.050000000000004 - type: mrr_at_3 value: 28.964000000000002 - type: mrr_at_5 value: 30.653000000000002 - type: ndcg_at_1 value: 24.068 - type: ndcg_at_10 value: 35.198 - type: ndcg_at_100 value: 40.709 - type: ndcg_at_1000 value: 42.855 - type: ndcg_at_3 value: 29.139 - type: ndcg_at_5 value: 32.045 - type: precision_at_1 value: 24.068 - type: precision_at_10 value: 5.65 - type: precision_at_100 value: 0.885 - type: precision_at_1000 value: 0.11199999999999999 - type: precision_at_3 value: 12.279 - type: precision_at_5 value: 8.994 - type: recall_at_1 value: 22.405 - type: recall_at_10 value: 49.391 - type: recall_at_100 value: 74.53699999999999 - type: recall_at_1000 value: 90.605 - type: recall_at_3 value: 33.126 - type: recall_at_5 value: 40.073 - type: map_at_1 value: 13.309999999999999 - type: map_at_10 value: 20.688000000000002 - type: map_at_100 value: 22.022 - type: map_at_1000 value: 22.152 - type: map_at_3 value: 17.954 - type: map_at_5 value: 19.439 - type: mrr_at_1 value: 16.294 - type: mrr_at_10 value: 24.479 - type: mrr_at_100 value: 25.515 - type: mrr_at_1000 value: 25.593 - type: mrr_at_3 value: 21.642 - type: mrr_at_5 value: 23.189999999999998 - type: ndcg_at_1 value: 16.294 - type: ndcg_at_10 value: 25.833000000000002 - type: ndcg_at_100 value: 32.074999999999996 - type: ndcg_at_1000 value: 35.083 - type: ndcg_at_3 value: 20.493 - type: ndcg_at_5 value: 22.949 - type: precision_at_1 value: 16.294 - type: precision_at_10 value: 5.112 - type: precision_at_100 value: 0.96 - type: precision_at_1000 value: 0.134 - type: precision_at_3 value: 9.908999999999999 - type: precision_at_5 value: 7.587000000000001 - type: recall_at_1 value: 13.309999999999999 - type: recall_at_10 value: 37.851 - type: recall_at_100 value: 64.835 - type: recall_at_1000 value: 86.334 - type: recall_at_3 value: 23.493 - type: recall_at_5 value: 29.528 - type: map_at_1 value: 25.857999999999997 - type: map_at_10 value: 35.503 - type: map_at_100 value: 36.957 - type: map_at_1000 value: 37.065 - type: map_at_3 value: 32.275999999999996 - type: map_at_5 value: 34.119 - type: mrr_at_1 value: 31.954 - type: mrr_at_10 value: 40.851 - type: mrr_at_100 value: 41.863 - type: mrr_at_1000 value: 41.900999999999996 - type: mrr_at_3 value: 38.129999999999995 - type: mrr_at_5 value: 39.737 - type: ndcg_at_1 value: 31.954 - type: ndcg_at_10 value: 41.343999999999994 - type: ndcg_at_100 value: 47.397 - type: ndcg_at_1000 value: 49.501 - type: ndcg_at_3 value: 36.047000000000004 - type: ndcg_at_5 value: 38.639 - type: precision_at_1 value: 31.954 - type: precision_at_10 value: 7.68 - type: precision_at_100 value: 1.247 - type: precision_at_1000 value: 0.16199999999999998 - type: precision_at_3 value: 17.132 - type: precision_at_5 value: 12.589 - type: recall_at_1 value: 25.857999999999997 - type: recall_at_10 value: 53.43599999999999 - type: recall_at_100 value: 78.82400000000001 - type: recall_at_1000 value: 92.78999999999999 - type: recall_at_3 value: 38.655 - type: recall_at_5 value: 45.216 - type: map_at_1 value: 24.709 - type: map_at_10 value: 34.318 - type: map_at_100 value: 35.657 - type: map_at_1000 value: 35.783 - type: map_at_3 value: 31.326999999999998 - type: map_at_5 value: 33.021 - type: mrr_at_1 value: 30.137000000000004 - type: mrr_at_10 value: 39.093 - type: mrr_at_100 value: 39.992 - type: mrr_at_1000 value: 40.056999999999995 - type: mrr_at_3 value: 36.606 - type: mrr_at_5 value: 37.861 - type: ndcg_at_1 value: 30.137000000000004 - type: ndcg_at_10 value: 39.974 - type: ndcg_at_100 value: 45.647999999999996 - type: ndcg_at_1000 value: 48.259 - type: ndcg_at_3 value: 35.028 - type: ndcg_at_5 value: 37.175999999999995 - type: precision_at_1 value: 30.137000000000004 - type: precision_at_10 value: 7.363 - type: precision_at_100 value: 1.184 - type: precision_at_1000 value: 0.161 - type: precision_at_3 value: 16.857 - type: precision_at_5 value: 11.963 - type: recall_at_1 value: 24.709 - type: recall_at_10 value: 52.087 - type: recall_at_100 value: 76.125 - type: recall_at_1000 value: 93.82300000000001 - type: recall_at_3 value: 38.149 - type: recall_at_5 value: 43.984 - type: map_at_1 value: 23.40791666666667 - type: map_at_10 value: 32.458083333333335 - type: map_at_100 value: 33.691916666666664 - type: map_at_1000 value: 33.81191666666666 - type: map_at_3 value: 29.51625 - type: map_at_5 value: 31.168083333333335 - type: mrr_at_1 value: 27.96591666666666 - type: mrr_at_10 value: 36.528583333333344 - type: mrr_at_100 value: 37.404 - type: mrr_at_1000 value: 37.464333333333336 - type: mrr_at_3 value: 33.92883333333333 - type: mrr_at_5 value: 35.41933333333333 - type: ndcg_at_1 value: 27.96591666666666 - type: ndcg_at_10 value: 37.89141666666666 - type: ndcg_at_100 value: 43.23066666666666 - type: ndcg_at_1000 value: 45.63258333333333 - type: ndcg_at_3 value: 32.811249999999994 - type: ndcg_at_5 value: 35.22566666666667 - type: precision_at_1 value: 27.96591666666666 - type: precision_at_10 value: 6.834083333333332 - type: precision_at_100 value: 1.12225 - type: precision_at_1000 value: 0.15241666666666667 - type: precision_at_3 value: 15.264333333333335 - type: precision_at_5 value: 11.039416666666666 - type: recall_at_1 value: 23.40791666666667 - type: recall_at_10 value: 49.927083333333336 - type: recall_at_100 value: 73.44641666666668 - type: recall_at_1000 value: 90.19950000000001 - type: recall_at_3 value: 35.88341666666667 - type: recall_at_5 value: 42.061249999999994 - type: map_at_1 value: 19.592000000000002 - type: map_at_10 value: 26.895999999999997 - type: map_at_100 value: 27.921000000000003 - type: map_at_1000 value: 28.02 - type: map_at_3 value: 24.883 - type: map_at_5 value: 25.812 - type: mrr_at_1 value: 22.698999999999998 - type: mrr_at_10 value: 29.520999999999997 - type: mrr_at_100 value: 30.458000000000002 - type: mrr_at_1000 value: 30.526999999999997 - type: mrr_at_3 value: 27.633000000000003 - type: mrr_at_5 value: 28.483999999999998 - type: ndcg_at_1 value: 22.698999999999998 - type: ndcg_at_10 value: 31.061 - type: ndcg_at_100 value: 36.398 - type: ndcg_at_1000 value: 38.89 - type: ndcg_at_3 value: 27.149 - type: ndcg_at_5 value: 28.627000000000002 - type: precision_at_1 value: 22.698999999999998 - type: precision_at_10 value: 5.106999999999999 - type: precision_at_100 value: 0.857 - type: precision_at_1000 value: 0.11499999999999999 - type: precision_at_3 value: 11.963 - type: precision_at_5 value: 8.221 - type: recall_at_1 value: 19.592000000000002 - type: recall_at_10 value: 41.329 - type: recall_at_100 value: 66.094 - type: recall_at_1000 value: 84.511 - type: recall_at_3 value: 30.61 - type: recall_at_5 value: 34.213 - type: map_at_1 value: 14.71 - type: map_at_10 value: 20.965 - type: map_at_100 value: 21.994 - type: map_at_1000 value: 22.133 - type: map_at_3 value: 18.741 - type: map_at_5 value: 19.951 - type: mrr_at_1 value: 18.307000000000002 - type: mrr_at_10 value: 24.66 - type: mrr_at_100 value: 25.540000000000003 - type: mrr_at_1000 value: 25.629 - type: mrr_at_3 value: 22.511 - type: mrr_at_5 value: 23.72 - type: ndcg_at_1 value: 18.307000000000002 - type: ndcg_at_10 value: 25.153 - type: ndcg_at_100 value: 30.229 - type: ndcg_at_1000 value: 33.623 - type: ndcg_at_3 value: 21.203 - type: ndcg_at_5 value: 23.006999999999998 - type: precision_at_1 value: 18.307000000000002 - type: precision_at_10 value: 4.725 - type: precision_at_100 value: 0.8659999999999999 - type: precision_at_1000 value: 0.133 - type: precision_at_3 value: 10.14 - type: precision_at_5 value: 7.481 - type: recall_at_1 value: 14.71 - type: recall_at_10 value: 34.087 - type: recall_at_100 value: 57.147999999999996 - type: recall_at_1000 value: 81.777 - type: recall_at_3 value: 22.996 - type: recall_at_5 value: 27.73 - type: map_at_1 value: 23.472 - type: map_at_10 value: 32.699 - type: map_at_100 value: 33.867000000000004 - type: map_at_1000 value: 33.967000000000006 - type: map_at_3 value: 29.718 - type: map_at_5 value: 31.345 - type: mrr_at_1 value: 28.265 - type: mrr_at_10 value: 36.945 - type: mrr_at_100 value: 37.794 - type: mrr_at_1000 value: 37.857 - type: mrr_at_3 value: 34.266000000000005 - type: mrr_at_5 value: 35.768 - type: ndcg_at_1 value: 28.265 - type: ndcg_at_10 value: 38.35 - type: ndcg_at_100 value: 43.739 - type: ndcg_at_1000 value: 46.087 - type: ndcg_at_3 value: 33.004 - type: ndcg_at_5 value: 35.411 - type: precision_at_1 value: 28.265 - type: precision_at_10 value: 6.715999999999999 - type: precision_at_100 value: 1.059 - type: precision_at_1000 value: 0.13799999999999998 - type: precision_at_3 value: 15.299 - type: precision_at_5 value: 10.951 - type: recall_at_1 value: 23.472 - type: recall_at_10 value: 51.413 - type: recall_at_100 value: 75.17 - type: recall_at_1000 value: 91.577 - type: recall_at_3 value: 36.651 - type: recall_at_5 value: 42.814 - type: map_at_1 value: 23.666 - type: map_at_10 value: 32.963 - type: map_at_100 value: 34.544999999999995 - type: map_at_1000 value: 34.792 - type: map_at_3 value: 29.74 - type: map_at_5 value: 31.5 - type: mrr_at_1 value: 29.051 - type: mrr_at_10 value: 38.013000000000005 - type: mrr_at_100 value: 38.997 - type: mrr_at_1000 value: 39.055 - type: mrr_at_3 value: 34.947 - type: mrr_at_5 value: 36.815 - type: ndcg_at_1 value: 29.051 - type: ndcg_at_10 value: 39.361000000000004 - type: ndcg_at_100 value: 45.186 - type: ndcg_at_1000 value: 47.867 - type: ndcg_at_3 value: 33.797 - type: ndcg_at_5 value: 36.456 - type: precision_at_1 value: 29.051 - type: precision_at_10 value: 7.668 - type: precision_at_100 value: 1.532 - type: precision_at_1000 value: 0.247 - type: precision_at_3 value: 15.876000000000001 - type: precision_at_5 value: 11.779 - type: recall_at_1 value: 23.666 - type: recall_at_10 value: 51.858000000000004 - type: recall_at_100 value: 77.805 - type: recall_at_1000 value: 94.504 - type: recall_at_3 value: 36.207 - type: recall_at_5 value: 43.094 - type: map_at_1 value: 15.662 - type: map_at_10 value: 23.594 - type: map_at_100 value: 24.593999999999998 - type: map_at_1000 value: 24.694 - type: map_at_3 value: 20.925 - type: map_at_5 value: 22.817999999999998 - type: mrr_at_1 value: 17.375 - type: mrr_at_10 value: 25.734 - type: mrr_at_100 value: 26.586 - type: mrr_at_1000 value: 26.671 - type: mrr_at_3 value: 23.044 - type: mrr_at_5 value: 24.975 - type: ndcg_at_1 value: 17.375 - type: ndcg_at_10 value: 28.186 - type: ndcg_at_100 value: 33.436 - type: ndcg_at_1000 value: 36.203 - type: ndcg_at_3 value: 23.152 - type: ndcg_at_5 value: 26.397 - type: precision_at_1 value: 17.375 - type: precision_at_10 value: 4.677 - type: precision_at_100 value: 0.786 - type: precision_at_1000 value: 0.109 - type: precision_at_3 value: 10.351 - type: precision_at_5 value: 7.985 - type: recall_at_1 value: 15.662 - type: recall_at_10 value: 40.066 - type: recall_at_100 value: 65.006 - type: recall_at_1000 value: 85.94000000000001 - type: recall_at_3 value: 27.400000000000002 - type: recall_at_5 value: 35.002 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: climate-fever config: default split: test revision: None metrics: - type: map_at_1 value: 8.853 - type: map_at_10 value: 15.568000000000001 - type: map_at_100 value: 17.383000000000003 - type: map_at_1000 value: 17.584 - type: map_at_3 value: 12.561 - type: map_at_5 value: 14.056 - type: mrr_at_1 value: 18.958 - type: mrr_at_10 value: 28.288000000000004 - type: mrr_at_100 value: 29.432000000000002 - type: mrr_at_1000 value: 29.498 - type: mrr_at_3 value: 25.049 - type: mrr_at_5 value: 26.857 - type: ndcg_at_1 value: 18.958 - type: ndcg_at_10 value: 22.21 - type: ndcg_at_100 value: 29.596 - type: ndcg_at_1000 value: 33.583 - type: ndcg_at_3 value: 16.994999999999997 - type: ndcg_at_5 value: 18.95 - type: precision_at_1 value: 18.958 - type: precision_at_10 value: 7.192 - type: precision_at_100 value: 1.5 - type: precision_at_1000 value: 0.22399999999999998 - type: precision_at_3 value: 12.573 - type: precision_at_5 value: 10.202 - type: recall_at_1 value: 8.853 - type: recall_at_10 value: 28.087 - type: recall_at_100 value: 53.701 - type: recall_at_1000 value: 76.29899999999999 - type: recall_at_3 value: 15.913 - type: recall_at_5 value: 20.658 - task: type: Retrieval dataset: name: MTEB DBPedia type: dbpedia-entity config: default split: test revision: None metrics: - type: map_at_1 value: 9.077 - type: map_at_10 value: 20.788999999999998 - type: map_at_100 value: 30.429000000000002 - type: map_at_1000 value: 32.143 - type: map_at_3 value: 14.692 - type: map_at_5 value: 17.139 - type: mrr_at_1 value: 70.75 - type: mrr_at_10 value: 78.036 - type: mrr_at_100 value: 78.401 - type: mrr_at_1000 value: 78.404 - type: mrr_at_3 value: 76.75 - type: mrr_at_5 value: 77.47500000000001 - type: ndcg_at_1 value: 58.12500000000001 - type: ndcg_at_10 value: 44.015 - type: ndcg_at_100 value: 49.247 - type: ndcg_at_1000 value: 56.211999999999996 - type: ndcg_at_3 value: 49.151 - type: ndcg_at_5 value: 46.195 - type: precision_at_1 value: 70.75 - type: precision_at_10 value: 35.5 - type: precision_at_100 value: 11.355 - type: precision_at_1000 value: 2.1950000000000003 - type: precision_at_3 value: 53.083000000000006 - type: precision_at_5 value: 44.800000000000004 - type: recall_at_1 value: 9.077 - type: recall_at_10 value: 26.259 - type: recall_at_100 value: 56.547000000000004 - type: recall_at_1000 value: 78.551 - type: recall_at_3 value: 16.162000000000003 - type: recall_at_5 value: 19.753999999999998 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 49.44500000000001 - type: f1 value: 44.67067691783401 - task: type: Retrieval dataset: name: MTEB FEVER type: fever config: default split: test revision: None metrics: - type: map_at_1 value: 68.182 - type: map_at_10 value: 78.223 - type: map_at_100 value: 78.498 - type: map_at_1000 value: 78.512 - type: map_at_3 value: 76.71 - type: map_at_5 value: 77.725 - type: mrr_at_1 value: 73.177 - type: mrr_at_10 value: 82.513 - type: mrr_at_100 value: 82.633 - type: mrr_at_1000 value: 82.635 - type: mrr_at_3 value: 81.376 - type: mrr_at_5 value: 82.182 - type: ndcg_at_1 value: 73.177 - type: ndcg_at_10 value: 82.829 - type: ndcg_at_100 value: 83.84 - type: ndcg_at_1000 value: 84.07900000000001 - type: ndcg_at_3 value: 80.303 - type: ndcg_at_5 value: 81.846 - type: precision_at_1 value: 73.177 - type: precision_at_10 value: 10.241999999999999 - type: precision_at_100 value: 1.099 - type: precision_at_1000 value: 0.11399999999999999 - type: precision_at_3 value: 31.247999999999998 - type: precision_at_5 value: 19.697 - type: recall_at_1 value: 68.182 - type: recall_at_10 value: 92.657 - type: recall_at_100 value: 96.709 - type: recall_at_1000 value: 98.184 - type: recall_at_3 value: 85.9 - type: recall_at_5 value: 89.755 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: fiqa config: default split: test revision: None metrics: - type: map_at_1 value: 21.108 - type: map_at_10 value: 33.342 - type: map_at_100 value: 35.281 - type: map_at_1000 value: 35.478 - type: map_at_3 value: 29.067 - type: map_at_5 value: 31.563000000000002 - type: mrr_at_1 value: 41.667 - type: mrr_at_10 value: 49.913000000000004 - type: mrr_at_100 value: 50.724000000000004 - type: mrr_at_1000 value: 50.766 - type: mrr_at_3 value: 47.504999999999995 - type: mrr_at_5 value: 49.033 - type: ndcg_at_1 value: 41.667 - type: ndcg_at_10 value: 41.144 - type: ndcg_at_100 value: 48.326 - type: ndcg_at_1000 value: 51.486 - type: ndcg_at_3 value: 37.486999999999995 - type: ndcg_at_5 value: 38.78 - type: precision_at_1 value: 41.667 - type: precision_at_10 value: 11.358 - type: precision_at_100 value: 1.873 - type: precision_at_1000 value: 0.244 - type: precision_at_3 value: 25 - type: precision_at_5 value: 18.519 - type: recall_at_1 value: 21.108 - type: recall_at_10 value: 47.249 - type: recall_at_100 value: 74.52 - type: recall_at_1000 value: 93.31 - type: recall_at_3 value: 33.271 - type: recall_at_5 value: 39.723000000000006 - task: type: Retrieval dataset: name: MTEB HotpotQA type: hotpotqa config: default split: test revision: None metrics: - type: map_at_1 value: 40.317 - type: map_at_10 value: 64.861 - type: map_at_100 value: 65.697 - type: map_at_1000 value: 65.755 - type: map_at_3 value: 61.258 - type: map_at_5 value: 63.590999999999994 - type: mrr_at_1 value: 80.635 - type: mrr_at_10 value: 86.528 - type: mrr_at_100 value: 86.66199999999999 - type: mrr_at_1000 value: 86.666 - type: mrr_at_3 value: 85.744 - type: mrr_at_5 value: 86.24300000000001 - type: ndcg_at_1 value: 80.635 - type: ndcg_at_10 value: 73.13199999999999 - type: ndcg_at_100 value: 75.927 - type: ndcg_at_1000 value: 76.976 - type: ndcg_at_3 value: 68.241 - type: ndcg_at_5 value: 71.071 - type: precision_at_1 value: 80.635 - type: precision_at_10 value: 15.326 - type: precision_at_100 value: 1.7500000000000002 - type: precision_at_1000 value: 0.189 - type: precision_at_3 value: 43.961 - type: precision_at_5 value: 28.599999999999998 - type: recall_at_1 value: 40.317 - type: recall_at_10 value: 76.631 - type: recall_at_100 value: 87.495 - type: recall_at_1000 value: 94.362 - type: recall_at_3 value: 65.94200000000001 - type: recall_at_5 value: 71.499 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 91.686 - type: ap value: 87.5577120393173 - type: f1 value: 91.6629447355139 - task: type: Retrieval dataset: name: MTEB MSMARCO type: msmarco config: default split: dev revision: None metrics: - type: map_at_1 value: 23.702 - type: map_at_10 value: 36.414 - type: map_at_100 value: 37.561 - type: map_at_1000 value: 37.605 - type: map_at_3 value: 32.456 - type: map_at_5 value: 34.827000000000005 - type: mrr_at_1 value: 24.355 - type: mrr_at_10 value: 37.01 - type: mrr_at_100 value: 38.085 - type: mrr_at_1000 value: 38.123000000000005 - type: mrr_at_3 value: 33.117999999999995 - type: mrr_at_5 value: 35.452 - type: ndcg_at_1 value: 24.384 - type: ndcg_at_10 value: 43.456 - type: ndcg_at_100 value: 48.892 - type: ndcg_at_1000 value: 49.964 - type: ndcg_at_3 value: 35.475 - type: ndcg_at_5 value: 39.711 - type: precision_at_1 value: 24.384 - type: precision_at_10 value: 6.7940000000000005 - type: precision_at_100 value: 0.951 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 15.052999999999999 - type: precision_at_5 value: 11.189 - type: recall_at_1 value: 23.702 - type: recall_at_10 value: 65.057 - type: recall_at_100 value: 90.021 - type: recall_at_1000 value: 98.142 - type: recall_at_3 value: 43.551 - type: recall_at_5 value: 53.738 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 94.62380300957591 - type: f1 value: 94.49871222100734 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 77.14090287277702 - type: f1 value: 60.32101258220515 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 73.84330867518494 - type: f1 value: 71.92248688515255 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 78.10692669804976 - type: f1 value: 77.9904839122866 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 31.822988923078444 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 30.38394880253403 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 31.82504612539082 - type: mrr value: 32.84462298174977 - task: type: Retrieval dataset: name: MTEB NFCorpus type: nfcorpus config: default split: test revision: None metrics: - type: map_at_1 value: 6.029 - type: map_at_10 value: 14.088999999999999 - type: map_at_100 value: 17.601 - type: map_at_1000 value: 19.144 - type: map_at_3 value: 10.156 - type: map_at_5 value: 11.892 - type: mrr_at_1 value: 46.44 - type: mrr_at_10 value: 56.596999999999994 - type: mrr_at_100 value: 57.11000000000001 - type: mrr_at_1000 value: 57.14 - type: mrr_at_3 value: 54.334 - type: mrr_at_5 value: 55.774 - type: ndcg_at_1 value: 44.891999999999996 - type: ndcg_at_10 value: 37.134 - type: ndcg_at_100 value: 33.652 - type: ndcg_at_1000 value: 42.548 - type: ndcg_at_3 value: 41.851 - type: ndcg_at_5 value: 39.842 - type: precision_at_1 value: 46.44 - type: precision_at_10 value: 27.647 - type: precision_at_100 value: 8.309999999999999 - type: precision_at_1000 value: 2.146 - type: precision_at_3 value: 39.422000000000004 - type: precision_at_5 value: 34.675 - type: recall_at_1 value: 6.029 - type: recall_at_10 value: 18.907 - type: recall_at_100 value: 33.76 - type: recall_at_1000 value: 65.14999999999999 - type: recall_at_3 value: 11.584999999999999 - type: recall_at_5 value: 14.626 - task: type: Retrieval dataset: name: MTEB NQ type: nq config: default split: test revision: None metrics: - type: map_at_1 value: 39.373000000000005 - type: map_at_10 value: 55.836 - type: map_at_100 value: 56.611999999999995 - type: map_at_1000 value: 56.63 - type: map_at_3 value: 51.747 - type: map_at_5 value: 54.337999999999994 - type: mrr_at_1 value: 44.147999999999996 - type: mrr_at_10 value: 58.42699999999999 - type: mrr_at_100 value: 58.902 - type: mrr_at_1000 value: 58.914 - type: mrr_at_3 value: 55.156000000000006 - type: mrr_at_5 value: 57.291000000000004 - type: ndcg_at_1 value: 44.119 - type: ndcg_at_10 value: 63.444 - type: ndcg_at_100 value: 66.40599999999999 - type: ndcg_at_1000 value: 66.822 - type: ndcg_at_3 value: 55.962 - type: ndcg_at_5 value: 60.228 - type: precision_at_1 value: 44.119 - type: precision_at_10 value: 10.006 - type: precision_at_100 value: 1.17 - type: precision_at_1000 value: 0.121 - type: precision_at_3 value: 25.135 - type: precision_at_5 value: 17.59 - type: recall_at_1 value: 39.373000000000005 - type: recall_at_10 value: 83.78999999999999 - type: recall_at_100 value: 96.246 - type: recall_at_1000 value: 99.324 - type: recall_at_3 value: 64.71900000000001 - type: recall_at_5 value: 74.508 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: quora config: default split: test revision: None metrics: - type: map_at_1 value: 69.199 - type: map_at_10 value: 82.892 - type: map_at_100 value: 83.578 - type: map_at_1000 value: 83.598 - type: map_at_3 value: 79.948 - type: map_at_5 value: 81.779 - type: mrr_at_1 value: 79.67 - type: mrr_at_10 value: 86.115 - type: mrr_at_100 value: 86.249 - type: mrr_at_1000 value: 86.251 - type: mrr_at_3 value: 85.08200000000001 - type: mrr_at_5 value: 85.783 - type: ndcg_at_1 value: 79.67 - type: ndcg_at_10 value: 86.839 - type: ndcg_at_100 value: 88.252 - type: ndcg_at_1000 value: 88.401 - type: ndcg_at_3 value: 83.86200000000001 - type: ndcg_at_5 value: 85.473 - type: precision_at_1 value: 79.67 - type: precision_at_10 value: 13.19 - type: precision_at_100 value: 1.521 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 36.677 - type: precision_at_5 value: 24.118000000000002 - type: recall_at_1 value: 69.199 - type: recall_at_10 value: 94.321 - type: recall_at_100 value: 99.20400000000001 - type: recall_at_1000 value: 99.947 - type: recall_at_3 value: 85.787 - type: recall_at_5 value: 90.365 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 55.82810046856353 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 63.38132611783628 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: scidocs config: default split: test revision: None metrics: - type: map_at_1 value: 5.127000000000001 - type: map_at_10 value: 12.235 - type: map_at_100 value: 14.417 - type: map_at_1000 value: 14.75 - type: map_at_3 value: 8.906 - type: map_at_5 value: 10.591000000000001 - type: mrr_at_1 value: 25.2 - type: mrr_at_10 value: 35.879 - type: mrr_at_100 value: 36.935 - type: mrr_at_1000 value: 36.997 - type: mrr_at_3 value: 32.783 - type: mrr_at_5 value: 34.367999999999995 - type: ndcg_at_1 value: 25.2 - type: ndcg_at_10 value: 20.509 - type: ndcg_at_100 value: 28.67 - type: ndcg_at_1000 value: 34.42 - type: ndcg_at_3 value: 19.948 - type: ndcg_at_5 value: 17.166 - type: precision_at_1 value: 25.2 - type: precision_at_10 value: 10.440000000000001 - type: precision_at_100 value: 2.214 - type: precision_at_1000 value: 0.359 - type: precision_at_3 value: 18.533 - type: precision_at_5 value: 14.860000000000001 - type: recall_at_1 value: 5.127000000000001 - type: recall_at_10 value: 21.147 - type: recall_at_100 value: 44.946999999999996 - type: recall_at_1000 value: 72.89 - type: recall_at_3 value: 11.277 - type: recall_at_5 value: 15.042 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 83.0373011786213 - type: cos_sim_spearman value: 79.27889560856613 - type: euclidean_pearson value: 80.31186315495655 - type: euclidean_spearman value: 79.41630415280811 - type: manhattan_pearson value: 80.31755140442013 - type: manhattan_spearman value: 79.43069870027611 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 84.8659751342045 - type: cos_sim_spearman value: 76.95377612997667 - type: euclidean_pearson value: 81.24552945497848 - type: euclidean_spearman value: 77.18236963555253 - type: manhattan_pearson value: 81.26477607759037 - type: manhattan_spearman value: 77.13821753062756 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 83.34597139044875 - type: cos_sim_spearman value: 84.124169425592 - type: euclidean_pearson value: 83.68590721511401 - type: euclidean_spearman value: 84.18846190846398 - type: manhattan_pearson value: 83.57630235061498 - type: manhattan_spearman value: 84.10244043726902 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 82.67641885599572 - type: cos_sim_spearman value: 80.46450725650428 - type: euclidean_pearson value: 81.61645042715865 - type: euclidean_spearman value: 80.61418394236874 - type: manhattan_pearson value: 81.55712034928871 - type: manhattan_spearman value: 80.57905670523951 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 88.86650310886782 - type: cos_sim_spearman value: 89.76081629222328 - type: euclidean_pearson value: 89.1530747029954 - type: euclidean_spearman value: 89.80990657280248 - type: manhattan_pearson value: 89.10640563278132 - type: manhattan_spearman value: 89.76282108434047 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 83.93864027911118 - type: cos_sim_spearman value: 85.47096193999023 - type: euclidean_pearson value: 85.03141840870533 - type: euclidean_spearman value: 85.43124029598181 - type: manhattan_pearson value: 84.99002664393512 - type: manhattan_spearman value: 85.39169195120834 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 88.7045343749832 - type: cos_sim_spearman value: 89.03262221146677 - type: euclidean_pearson value: 89.56078218264365 - type: euclidean_spearman value: 89.17827006466868 - type: manhattan_pearson value: 89.52717595468582 - type: manhattan_spearman value: 89.15878115952923 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 64.20191302875551 - type: cos_sim_spearman value: 64.11446552557646 - type: euclidean_pearson value: 64.6918197393619 - type: euclidean_spearman value: 63.440182631197764 - type: manhattan_pearson value: 64.55692904121835 - type: manhattan_spearman value: 63.424877742756266 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 86.37793104662344 - type: cos_sim_spearman value: 87.7357802629067 - type: euclidean_pearson value: 87.4286301545109 - type: euclidean_spearman value: 87.78452920777421 - type: manhattan_pearson value: 87.42445169331255 - type: manhattan_spearman value: 87.78537677249598 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 84.31465405081792 - type: mrr value: 95.7173781193389 - task: type: Retrieval dataset: name: MTEB SciFact type: scifact config: default split: test revision: None metrics: - type: map_at_1 value: 57.760999999999996 - type: map_at_10 value: 67.904 - type: map_at_100 value: 68.539 - type: map_at_1000 value: 68.562 - type: map_at_3 value: 65.415 - type: map_at_5 value: 66.788 - type: mrr_at_1 value: 60.333000000000006 - type: mrr_at_10 value: 68.797 - type: mrr_at_100 value: 69.236 - type: mrr_at_1000 value: 69.257 - type: mrr_at_3 value: 66.667 - type: mrr_at_5 value: 67.967 - type: ndcg_at_1 value: 60.333000000000006 - type: ndcg_at_10 value: 72.24199999999999 - type: ndcg_at_100 value: 74.86 - type: ndcg_at_1000 value: 75.354 - type: ndcg_at_3 value: 67.93400000000001 - type: ndcg_at_5 value: 70.02199999999999 - type: precision_at_1 value: 60.333000000000006 - type: precision_at_10 value: 9.533 - type: precision_at_100 value: 1.09 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 26.778000000000002 - type: precision_at_5 value: 17.467 - type: recall_at_1 value: 57.760999999999996 - type: recall_at_10 value: 84.383 - type: recall_at_100 value: 96.267 - type: recall_at_1000 value: 100 - type: recall_at_3 value: 72.628 - type: recall_at_5 value: 78.094 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.8029702970297 - type: cos_sim_ap value: 94.9210324173411 - type: cos_sim_f1 value: 89.8521162672106 - type: cos_sim_precision value: 91.67533818938605 - type: cos_sim_recall value: 88.1 - type: dot_accuracy value: 99.69504950495049 - type: dot_ap value: 90.4919719146181 - type: dot_f1 value: 84.72289156626506 - type: dot_precision value: 81.76744186046511 - type: dot_recall value: 87.9 - type: euclidean_accuracy value: 99.79702970297029 - type: euclidean_ap value: 94.87827463795753 - type: euclidean_f1 value: 89.55680081507896 - type: euclidean_precision value: 91.27725856697819 - type: euclidean_recall value: 87.9 - type: manhattan_accuracy value: 99.7990099009901 - type: manhattan_ap value: 94.87587025149682 - type: manhattan_f1 value: 89.76298537569339 - type: manhattan_precision value: 90.53916581892166 - type: manhattan_recall value: 89 - type: max_accuracy value: 99.8029702970297 - type: max_ap value: 94.9210324173411 - type: max_f1 value: 89.8521162672106 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 65.92385753948724 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 33.671756975431144 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 50.677928036739004 - type: mrr value: 51.56413133435193 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 30.523589340819683 - type: cos_sim_spearman value: 30.187407518823235 - type: dot_pearson value: 29.039713969699015 - type: dot_spearman value: 29.114740651155508 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: trec-covid config: default split: test revision: None metrics: - type: map_at_1 value: 0.211 - type: map_at_10 value: 1.6199999999999999 - type: map_at_100 value: 8.658000000000001 - type: map_at_1000 value: 21.538 - type: map_at_3 value: 0.575 - type: map_at_5 value: 0.919 - type: mrr_at_1 value: 78 - type: mrr_at_10 value: 86.18599999999999 - type: mrr_at_100 value: 86.18599999999999 - type: mrr_at_1000 value: 86.18599999999999 - type: mrr_at_3 value: 85 - type: mrr_at_5 value: 85.9 - type: ndcg_at_1 value: 74 - type: ndcg_at_10 value: 66.542 - type: ndcg_at_100 value: 50.163999999999994 - type: ndcg_at_1000 value: 45.696999999999996 - type: ndcg_at_3 value: 71.531 - type: ndcg_at_5 value: 70.45 - type: precision_at_1 value: 78 - type: precision_at_10 value: 69.39999999999999 - type: precision_at_100 value: 51.06 - type: precision_at_1000 value: 20.022000000000002 - type: precision_at_3 value: 76 - type: precision_at_5 value: 74.8 - type: recall_at_1 value: 0.211 - type: recall_at_10 value: 1.813 - type: recall_at_100 value: 12.098 - type: recall_at_1000 value: 42.618 - type: recall_at_3 value: 0.603 - type: recall_at_5 value: 0.987 - task: type: Retrieval dataset: name: MTEB Touche2020 type: webis-touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 2.2079999999999997 - type: map_at_10 value: 7.777000000000001 - type: map_at_100 value: 12.825000000000001 - type: map_at_1000 value: 14.196 - type: map_at_3 value: 4.285 - type: map_at_5 value: 6.177 - type: mrr_at_1 value: 30.612000000000002 - type: mrr_at_10 value: 42.635 - type: mrr_at_100 value: 43.955 - type: mrr_at_1000 value: 43.955 - type: mrr_at_3 value: 38.435 - type: mrr_at_5 value: 41.088 - type: ndcg_at_1 value: 28.571 - type: ndcg_at_10 value: 20.666999999999998 - type: ndcg_at_100 value: 31.840000000000003 - type: ndcg_at_1000 value: 43.191 - type: ndcg_at_3 value: 23.45 - type: ndcg_at_5 value: 22.994 - type: precision_at_1 value: 30.612000000000002 - type: precision_at_10 value: 17.959 - type: precision_at_100 value: 6.755 - type: precision_at_1000 value: 1.4200000000000002 - type: precision_at_3 value: 23.810000000000002 - type: precision_at_5 value: 23.673 - type: recall_at_1 value: 2.2079999999999997 - type: recall_at_10 value: 13.144 - type: recall_at_100 value: 42.491 - type: recall_at_1000 value: 77.04299999999999 - type: recall_at_3 value: 5.3469999999999995 - type: recall_at_5 value: 9.139 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 70.9044 - type: ap value: 14.625783489340755 - type: f1 value: 54.814936562590546 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 60.94227504244483 - type: f1 value: 61.22516038508854 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 49.602409155145864 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 86.94641473445789 - type: cos_sim_ap value: 76.91572747061197 - type: cos_sim_f1 value: 70.14348097317529 - type: cos_sim_precision value: 66.53254437869822 - type: cos_sim_recall value: 74.1688654353562 - type: dot_accuracy value: 84.80061989628658 - type: dot_ap value: 70.7952548895177 - type: dot_f1 value: 65.44780728844965 - type: dot_precision value: 61.53310104529617 - type: dot_recall value: 69.89445910290237 - type: euclidean_accuracy value: 86.94641473445789 - type: euclidean_ap value: 76.80774009393652 - type: euclidean_f1 value: 70.30522503879979 - type: euclidean_precision value: 68.94977168949772 - type: euclidean_recall value: 71.71503957783642 - type: manhattan_accuracy value: 86.8629671574179 - type: manhattan_ap value: 76.76518632600317 - type: manhattan_f1 value: 70.16056518946692 - type: manhattan_precision value: 68.360450563204 - type: manhattan_recall value: 72.0580474934037 - type: max_accuracy value: 86.94641473445789 - type: max_ap value: 76.91572747061197 - type: max_f1 value: 70.30522503879979 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 89.10428066907285 - type: cos_sim_ap value: 86.25114759921435 - type: cos_sim_f1 value: 78.37857884586856 - type: cos_sim_precision value: 75.60818546078993 - type: cos_sim_recall value: 81.35971666153372 - type: dot_accuracy value: 87.41995575736406 - type: dot_ap value: 81.51838010086782 - type: dot_f1 value: 74.77398015435503 - type: dot_precision value: 71.53002390662354 - type: dot_recall value: 78.32614721281182 - type: euclidean_accuracy value: 89.12368533395428 - type: euclidean_ap value: 86.33456799874504 - type: euclidean_f1 value: 78.45496750232127 - type: euclidean_precision value: 75.78388462366364 - type: euclidean_recall value: 81.32121958731136 - type: manhattan_accuracy value: 89.10622113556099 - type: manhattan_ap value: 86.31215061745333 - type: manhattan_f1 value: 78.40684906011539 - type: manhattan_precision value: 75.89536643366722 - type: manhattan_recall value: 81.09023714197721 - type: max_accuracy value: 89.12368533395428 - type: max_ap value: 86.33456799874504 - type: max_f1 value: 78.45496750232127 --- # # Fast-Inference with Ctranslate2 Speedup inference while reducing memory by 2x-4x using int8 inference in C++ on CPU or GPU. quantized version of [intfloat/e5-large-v2](https://huggingface.co/intfloat/e5-large-v2) ```bash pip install hf-hub-ctranslate2>=2.12.0 ctranslate2>=3.17.1 ``` ```python # from transformers import AutoTokenizer model_name = "michaelfeil/ct2fast-e5-large-v2" model_name_orig="intfloat/e5-large-v2" from hf_hub_ctranslate2 import EncoderCT2fromHfHub model = EncoderCT2fromHfHub( # load in int8 on CUDA model_name_or_path=model_name, device="cuda", compute_type="int8_float16" ) outputs = model.generate( text=["I like soccer", "I like tennis", "The eiffel tower is in Paris"], max_length=64, ) # perform downstream tasks on outputs outputs["pooler_output"] outputs["last_hidden_state"] outputs["attention_mask"] # alternative, use SentenceTransformer Mix-In # for end-to-end Sentence embeddings generation # (not pulling from this CT2fast-HF repo) from hf_hub_ctranslate2 import CT2SentenceTransformer model = CT2SentenceTransformer( model_name_orig, compute_type="int8_float16", device="cuda" ) embeddings = model.encode( ["I like soccer", "I like tennis", "The eiffel tower is in Paris"], batch_size=32, convert_to_numpy=True, normalize_embeddings=True, ) print(embeddings.shape, embeddings) scores = (embeddings @ embeddings.T) * 100 # Hint: you can also host this code via REST API and # via github.com/michaelfeil/infinity ``` Checkpoint compatible to [ctranslate2>=3.17.1](https://github.com/OpenNMT/CTranslate2) and [hf-hub-ctranslate2>=2.12.0](https://github.com/michaelfeil/hf-hub-ctranslate2) - `compute_type=int8_float16` for `device="cuda"` - `compute_type=int8` for `device="cpu"` Converted on 2023-10-13 using ``` LLama-2 -> removed <pad> token. ``` # Licence and other remarks: This is just a quantized version. Licence conditions are intended to be idential to original huggingface repo. # Original description # E5-large-v2 [Text Embeddings by Weakly-Supervised Contrastive Pre-training](https://arxiv.org/pdf/2212.03533.pdf). Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei, arXiv 2022 This model has 24 layers and the embedding size is 1024. ## Usage Below is an example to encode queries and passages from the MS-MARCO passage ranking dataset. ```python import torch.nn.functional as F from torch import Tensor from transformers import AutoTokenizer, AutoModel def average_pool(last_hidden_states: Tensor, attention_mask: Tensor) -> Tensor: last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0) return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None] # Each input text should start with "query: " or "passage: ". # For tasks other than retrieval, you can simply use the "query: " prefix. input_texts = ['query: how much protein should a female eat', 'query: summit define', "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.", "passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."] tokenizer = AutoTokenizer.from_pretrained('intfloat/e5-large-v2') model = AutoModel.from_pretrained('intfloat/e5-large-v2') # Tokenize the input texts batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt') outputs = model(**batch_dict) embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask']) # normalize embeddings embeddings = F.normalize(embeddings, p=2, dim=1) scores = (embeddings[:2] @ embeddings[2:].T) * 100 print(scores.tolist()) ``` ## Training Details Please refer to our paper at [https://arxiv.org/pdf/2212.03533.pdf](https://arxiv.org/pdf/2212.03533.pdf). ## Benchmark Evaluation Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB benchmark](https://arxiv.org/abs/2210.07316). ## Support for Sentence Transformers Below is an example for usage with sentence_transformers. ```python from sentence_transformers import SentenceTransformer model = SentenceTransformer('intfloat/e5-large-v2') input_texts = [ 'query: how much protein should a female eat', 'query: summit define', "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.", "passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments." ] embeddings = model.encode(input_texts, normalize_embeddings=True) ``` Package requirements `pip install sentence_transformers~=2.2.2` Contributors: [michaelfeil](https://huggingface.co/michaelfeil) ## FAQ **1. Do I need to add the prefix "query: " and "passage: " to input texts?** Yes, this is how the model is trained, otherwise you will see a performance degradation. Here are some rules of thumb: - Use "query: " and "passage: " correspondingly for asymmetric tasks such as passage retrieval in open QA, ad-hoc information retrieval. - Use "query: " prefix for symmetric tasks such as semantic similarity, paraphrase retrieval. - Use "query: " prefix if you want to use embeddings as features, such as linear probing classification, clustering. **2. Why are my reproduced results slightly different from reported in the model card?** Different versions of `transformers` and `pytorch` could cause negligible but non-zero performance differences. **3. Why does the cosine similarity scores distribute around 0.7 to 1.0?** This is a known and expected behavior as we use a low temperature 0.01 for InfoNCE contrastive loss. For text embedding tasks like text retrieval or semantic similarity, what matters is the relative order of the scores instead of the absolute values, so this should not be an issue. ## Citation If you find our paper or models helpful, please consider cite as follows: ``` @article{wang2022text, title={Text Embeddings by Weakly-Supervised Contrastive Pre-training}, author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Jiao, Binxing and Yang, Linjun and Jiang, Daxin and Majumder, Rangan and Wei, Furu}, journal={arXiv preprint arXiv:2212.03533}, year={2022} } ``` ## Limitations This model only works for English texts. Long texts will be truncated to at most 512 tokens.
[ "BIOSSES", "SCIFACT" ]
michaelfeil/ct2fast-e5-large
michaelfeil
sentence-similarity
[ "sentence-transformers", "bert", "ctranslate2", "int8", "float16", "mteb", "Sentence Transformers", "sentence-similarity", "en", "arxiv:2212.03533", "arxiv:2104.08663", "arxiv:2210.07316", "license:mit", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2023-06-15T20:26:04Z
2023-10-13T13:39:03+00:00
13
2
--- language: - en license: mit tags: - ctranslate2 - int8 - float16 - mteb - Sentence Transformers - sentence-similarity - sentence-transformers model-index: - name: e5-large results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 77.68656716417911 - type: ap value: 41.336896075573584 - type: f1 value: 71.788561468075 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 90.04965 - type: ap value: 86.24637009569418 - type: f1 value: 90.03896671762645 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 43.016000000000005 - type: f1 value: 42.1942431880186 - task: type: Retrieval dataset: name: MTEB ArguAna type: arguana config: default split: test revision: None metrics: - type: map_at_1 value: 25.107000000000003 - type: map_at_10 value: 40.464 - type: map_at_100 value: 41.577999999999996 - type: map_at_1000 value: 41.588 - type: map_at_3 value: 35.301 - type: map_at_5 value: 38.263000000000005 - type: mrr_at_1 value: 25.605 - type: mrr_at_10 value: 40.64 - type: mrr_at_100 value: 41.760000000000005 - type: mrr_at_1000 value: 41.77 - type: mrr_at_3 value: 35.443000000000005 - type: mrr_at_5 value: 38.448 - type: ndcg_at_1 value: 25.107000000000003 - type: ndcg_at_10 value: 49.352000000000004 - type: ndcg_at_100 value: 53.98500000000001 - type: ndcg_at_1000 value: 54.208 - type: ndcg_at_3 value: 38.671 - type: ndcg_at_5 value: 43.991 - type: precision_at_1 value: 25.107000000000003 - type: precision_at_10 value: 7.795000000000001 - type: precision_at_100 value: 0.979 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 16.145 - type: precision_at_5 value: 12.262 - type: recall_at_1 value: 25.107000000000003 - type: recall_at_10 value: 77.952 - type: recall_at_100 value: 97.866 - type: recall_at_1000 value: 99.57300000000001 - type: recall_at_3 value: 48.435 - type: recall_at_5 value: 61.309000000000005 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 46.19278045044154 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 41.37976387757665 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 60.07433334608074 - type: mrr value: 73.44347711383723 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 86.4298072183543 - type: cos_sim_spearman value: 84.73144873582848 - type: euclidean_pearson value: 85.15885058870728 - type: euclidean_spearman value: 85.42062106559356 - type: manhattan_pearson value: 84.89409921792054 - type: manhattan_spearman value: 85.31941394024344 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 84.14285714285714 - type: f1 value: 84.11674412565644 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 37.600076342340785 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 35.08861812135148 - task: type: Retrieval dataset: name: MTEB CQADupstackAndroidRetrieval type: BeIR/cqadupstack config: default split: test revision: None metrics: - type: map_at_1 value: 32.684000000000005 - type: map_at_10 value: 41.675000000000004 - type: map_at_100 value: 42.963 - type: map_at_1000 value: 43.078 - type: map_at_3 value: 38.708999999999996 - type: map_at_5 value: 40.316 - type: mrr_at_1 value: 39.485 - type: mrr_at_10 value: 47.152 - type: mrr_at_100 value: 47.96 - type: mrr_at_1000 value: 48.010000000000005 - type: mrr_at_3 value: 44.754 - type: mrr_at_5 value: 46.285 - type: ndcg_at_1 value: 39.485 - type: ndcg_at_10 value: 46.849000000000004 - type: ndcg_at_100 value: 52.059 - type: ndcg_at_1000 value: 54.358 - type: ndcg_at_3 value: 42.705 - type: ndcg_at_5 value: 44.663000000000004 - type: precision_at_1 value: 39.485 - type: precision_at_10 value: 8.455 - type: precision_at_100 value: 1.3379999999999999 - type: precision_at_1000 value: 0.178 - type: precision_at_3 value: 19.695 - type: precision_at_5 value: 13.905999999999999 - type: recall_at_1 value: 32.684000000000005 - type: recall_at_10 value: 56.227000000000004 - type: recall_at_100 value: 78.499 - type: recall_at_1000 value: 94.021 - type: recall_at_3 value: 44.157999999999994 - type: recall_at_5 value: 49.694 - type: map_at_1 value: 31.875999999999998 - type: map_at_10 value: 41.603 - type: map_at_100 value: 42.825 - type: map_at_1000 value: 42.961 - type: map_at_3 value: 38.655 - type: map_at_5 value: 40.294999999999995 - type: mrr_at_1 value: 40.127 - type: mrr_at_10 value: 47.959 - type: mrr_at_100 value: 48.59 - type: mrr_at_1000 value: 48.634 - type: mrr_at_3 value: 45.786 - type: mrr_at_5 value: 46.964 - type: ndcg_at_1 value: 40.127 - type: ndcg_at_10 value: 47.176 - type: ndcg_at_100 value: 51.346000000000004 - type: ndcg_at_1000 value: 53.502 - type: ndcg_at_3 value: 43.139 - type: ndcg_at_5 value: 44.883 - type: precision_at_1 value: 40.127 - type: precision_at_10 value: 8.72 - type: precision_at_100 value: 1.387 - type: precision_at_1000 value: 0.188 - type: precision_at_3 value: 20.637 - type: precision_at_5 value: 14.446 - type: recall_at_1 value: 31.875999999999998 - type: recall_at_10 value: 56.54900000000001 - type: recall_at_100 value: 73.939 - type: recall_at_1000 value: 87.732 - type: recall_at_3 value: 44.326 - type: recall_at_5 value: 49.445 - type: map_at_1 value: 41.677 - type: map_at_10 value: 52.222 - type: map_at_100 value: 53.229000000000006 - type: map_at_1000 value: 53.288000000000004 - type: map_at_3 value: 49.201 - type: map_at_5 value: 51.00599999999999 - type: mrr_at_1 value: 47.524 - type: mrr_at_10 value: 55.745999999999995 - type: mrr_at_100 value: 56.433 - type: mrr_at_1000 value: 56.464999999999996 - type: mrr_at_3 value: 53.37499999999999 - type: mrr_at_5 value: 54.858 - type: ndcg_at_1 value: 47.524 - type: ndcg_at_10 value: 57.406 - type: ndcg_at_100 value: 61.403 - type: ndcg_at_1000 value: 62.7 - type: ndcg_at_3 value: 52.298 - type: ndcg_at_5 value: 55.02 - type: precision_at_1 value: 47.524 - type: precision_at_10 value: 8.865 - type: precision_at_100 value: 1.179 - type: precision_at_1000 value: 0.134 - type: precision_at_3 value: 22.612 - type: precision_at_5 value: 15.461 - type: recall_at_1 value: 41.677 - type: recall_at_10 value: 69.346 - type: recall_at_100 value: 86.344 - type: recall_at_1000 value: 95.703 - type: recall_at_3 value: 55.789 - type: recall_at_5 value: 62.488 - type: map_at_1 value: 25.991999999999997 - type: map_at_10 value: 32.804 - type: map_at_100 value: 33.812999999999995 - type: map_at_1000 value: 33.897 - type: map_at_3 value: 30.567 - type: map_at_5 value: 31.599 - type: mrr_at_1 value: 27.797 - type: mrr_at_10 value: 34.768 - type: mrr_at_100 value: 35.702 - type: mrr_at_1000 value: 35.766 - type: mrr_at_3 value: 32.637 - type: mrr_at_5 value: 33.614 - type: ndcg_at_1 value: 27.797 - type: ndcg_at_10 value: 36.966 - type: ndcg_at_100 value: 41.972 - type: ndcg_at_1000 value: 44.139 - type: ndcg_at_3 value: 32.547 - type: ndcg_at_5 value: 34.258 - type: precision_at_1 value: 27.797 - type: precision_at_10 value: 5.514 - type: precision_at_100 value: 0.8340000000000001 - type: precision_at_1000 value: 0.106 - type: precision_at_3 value: 13.333 - type: precision_at_5 value: 9.04 - type: recall_at_1 value: 25.991999999999997 - type: recall_at_10 value: 47.941 - type: recall_at_100 value: 71.039 - type: recall_at_1000 value: 87.32799999999999 - type: recall_at_3 value: 36.01 - type: recall_at_5 value: 40.056000000000004 - type: map_at_1 value: 17.533 - type: map_at_10 value: 24.336 - type: map_at_100 value: 25.445 - type: map_at_1000 value: 25.561 - type: map_at_3 value: 22.116 - type: map_at_5 value: 23.347 - type: mrr_at_1 value: 21.642 - type: mrr_at_10 value: 28.910999999999998 - type: mrr_at_100 value: 29.836000000000002 - type: mrr_at_1000 value: 29.907 - type: mrr_at_3 value: 26.638 - type: mrr_at_5 value: 27.857 - type: ndcg_at_1 value: 21.642 - type: ndcg_at_10 value: 28.949 - type: ndcg_at_100 value: 34.211000000000006 - type: ndcg_at_1000 value: 37.031 - type: ndcg_at_3 value: 24.788 - type: ndcg_at_5 value: 26.685 - type: precision_at_1 value: 21.642 - type: precision_at_10 value: 5.137 - type: precision_at_100 value: 0.893 - type: precision_at_1000 value: 0.127 - type: precision_at_3 value: 11.733 - type: precision_at_5 value: 8.383000000000001 - type: recall_at_1 value: 17.533 - type: recall_at_10 value: 38.839 - type: recall_at_100 value: 61.458999999999996 - type: recall_at_1000 value: 81.58 - type: recall_at_3 value: 27.328999999999997 - type: recall_at_5 value: 32.168 - type: map_at_1 value: 28.126 - type: map_at_10 value: 37.872 - type: map_at_100 value: 39.229 - type: map_at_1000 value: 39.353 - type: map_at_3 value: 34.93 - type: map_at_5 value: 36.59 - type: mrr_at_1 value: 34.071 - type: mrr_at_10 value: 43.056 - type: mrr_at_100 value: 43.944 - type: mrr_at_1000 value: 43.999 - type: mrr_at_3 value: 40.536 - type: mrr_at_5 value: 42.065999999999995 - type: ndcg_at_1 value: 34.071 - type: ndcg_at_10 value: 43.503 - type: ndcg_at_100 value: 49.120000000000005 - type: ndcg_at_1000 value: 51.410999999999994 - type: ndcg_at_3 value: 38.767 - type: ndcg_at_5 value: 41.075 - type: precision_at_1 value: 34.071 - type: precision_at_10 value: 7.843999999999999 - type: precision_at_100 value: 1.2489999999999999 - type: precision_at_1000 value: 0.163 - type: precision_at_3 value: 18.223 - type: precision_at_5 value: 13.050999999999998 - type: recall_at_1 value: 28.126 - type: recall_at_10 value: 54.952 - type: recall_at_100 value: 78.375 - type: recall_at_1000 value: 93.29899999999999 - type: recall_at_3 value: 41.714 - type: recall_at_5 value: 47.635 - type: map_at_1 value: 25.957 - type: map_at_10 value: 34.749 - type: map_at_100 value: 35.929 - type: map_at_1000 value: 36.043 - type: map_at_3 value: 31.947 - type: map_at_5 value: 33.575 - type: mrr_at_1 value: 32.078 - type: mrr_at_10 value: 39.844 - type: mrr_at_100 value: 40.71 - type: mrr_at_1000 value: 40.77 - type: mrr_at_3 value: 37.386 - type: mrr_at_5 value: 38.83 - type: ndcg_at_1 value: 32.078 - type: ndcg_at_10 value: 39.97 - type: ndcg_at_100 value: 45.254 - type: ndcg_at_1000 value: 47.818 - type: ndcg_at_3 value: 35.453 - type: ndcg_at_5 value: 37.631 - type: precision_at_1 value: 32.078 - type: precision_at_10 value: 7.158 - type: precision_at_100 value: 1.126 - type: precision_at_1000 value: 0.153 - type: precision_at_3 value: 16.743 - type: precision_at_5 value: 11.872 - type: recall_at_1 value: 25.957 - type: recall_at_10 value: 50.583 - type: recall_at_100 value: 73.593 - type: recall_at_1000 value: 91.23599999999999 - type: recall_at_3 value: 37.651 - type: recall_at_5 value: 43.626 - type: map_at_1 value: 27.1505 - type: map_at_10 value: 34.844833333333334 - type: map_at_100 value: 35.95216666666667 - type: map_at_1000 value: 36.06675 - type: map_at_3 value: 32.41975 - type: map_at_5 value: 33.74233333333333 - type: mrr_at_1 value: 31.923666666666662 - type: mrr_at_10 value: 38.87983333333334 - type: mrr_at_100 value: 39.706250000000004 - type: mrr_at_1000 value: 39.76708333333333 - type: mrr_at_3 value: 36.72008333333333 - type: mrr_at_5 value: 37.96933333333334 - type: ndcg_at_1 value: 31.923666666666662 - type: ndcg_at_10 value: 39.44258333333334 - type: ndcg_at_100 value: 44.31475 - type: ndcg_at_1000 value: 46.75 - type: ndcg_at_3 value: 35.36299999999999 - type: ndcg_at_5 value: 37.242333333333335 - type: precision_at_1 value: 31.923666666666662 - type: precision_at_10 value: 6.643333333333333 - type: precision_at_100 value: 1.0612499999999998 - type: precision_at_1000 value: 0.14575 - type: precision_at_3 value: 15.875250000000001 - type: precision_at_5 value: 11.088916666666664 - type: recall_at_1 value: 27.1505 - type: recall_at_10 value: 49.06349999999999 - type: recall_at_100 value: 70.60841666666666 - type: recall_at_1000 value: 87.72049999999999 - type: recall_at_3 value: 37.60575000000001 - type: recall_at_5 value: 42.511166666666675 - type: map_at_1 value: 25.101000000000003 - type: map_at_10 value: 30.147000000000002 - type: map_at_100 value: 30.98 - type: map_at_1000 value: 31.080000000000002 - type: map_at_3 value: 28.571 - type: map_at_5 value: 29.319 - type: mrr_at_1 value: 27.761000000000003 - type: mrr_at_10 value: 32.716 - type: mrr_at_100 value: 33.504 - type: mrr_at_1000 value: 33.574 - type: mrr_at_3 value: 31.135 - type: mrr_at_5 value: 32.032 - type: ndcg_at_1 value: 27.761000000000003 - type: ndcg_at_10 value: 33.358 - type: ndcg_at_100 value: 37.569 - type: ndcg_at_1000 value: 40.189 - type: ndcg_at_3 value: 30.291 - type: ndcg_at_5 value: 31.558000000000003 - type: precision_at_1 value: 27.761000000000003 - type: precision_at_10 value: 4.939 - type: precision_at_100 value: 0.759 - type: precision_at_1000 value: 0.106 - type: precision_at_3 value: 12.577 - type: precision_at_5 value: 8.497 - type: recall_at_1 value: 25.101000000000003 - type: recall_at_10 value: 40.739 - type: recall_at_100 value: 60.089999999999996 - type: recall_at_1000 value: 79.768 - type: recall_at_3 value: 32.16 - type: recall_at_5 value: 35.131 - type: map_at_1 value: 20.112 - type: map_at_10 value: 26.119999999999997 - type: map_at_100 value: 27.031 - type: map_at_1000 value: 27.150000000000002 - type: map_at_3 value: 24.230999999999998 - type: map_at_5 value: 25.15 - type: mrr_at_1 value: 24.535 - type: mrr_at_10 value: 30.198000000000004 - type: mrr_at_100 value: 30.975 - type: mrr_at_1000 value: 31.051000000000002 - type: mrr_at_3 value: 28.338 - type: mrr_at_5 value: 29.269000000000002 - type: ndcg_at_1 value: 24.535 - type: ndcg_at_10 value: 30.147000000000002 - type: ndcg_at_100 value: 34.544000000000004 - type: ndcg_at_1000 value: 37.512 - type: ndcg_at_3 value: 26.726 - type: ndcg_at_5 value: 28.046 - type: precision_at_1 value: 24.535 - type: precision_at_10 value: 5.179 - type: precision_at_100 value: 0.859 - type: precision_at_1000 value: 0.128 - type: precision_at_3 value: 12.159 - type: precision_at_5 value: 8.424 - type: recall_at_1 value: 20.112 - type: recall_at_10 value: 38.312000000000005 - type: recall_at_100 value: 58.406000000000006 - type: recall_at_1000 value: 79.863 - type: recall_at_3 value: 28.358 - type: recall_at_5 value: 31.973000000000003 - type: map_at_1 value: 27.111 - type: map_at_10 value: 34.096 - type: map_at_100 value: 35.181000000000004 - type: map_at_1000 value: 35.276 - type: map_at_3 value: 31.745 - type: map_at_5 value: 33.045 - type: mrr_at_1 value: 31.343 - type: mrr_at_10 value: 37.994 - type: mrr_at_100 value: 38.873000000000005 - type: mrr_at_1000 value: 38.934999999999995 - type: mrr_at_3 value: 35.743 - type: mrr_at_5 value: 37.077 - type: ndcg_at_1 value: 31.343 - type: ndcg_at_10 value: 38.572 - type: ndcg_at_100 value: 43.854 - type: ndcg_at_1000 value: 46.190999999999995 - type: ndcg_at_3 value: 34.247 - type: ndcg_at_5 value: 36.28 - type: precision_at_1 value: 31.343 - type: precision_at_10 value: 6.166 - type: precision_at_100 value: 1 - type: precision_at_1000 value: 0.13 - type: precision_at_3 value: 15.081 - type: precision_at_5 value: 10.428999999999998 - type: recall_at_1 value: 27.111 - type: recall_at_10 value: 48.422 - type: recall_at_100 value: 71.846 - type: recall_at_1000 value: 88.57000000000001 - type: recall_at_3 value: 36.435 - type: recall_at_5 value: 41.765 - type: map_at_1 value: 26.264 - type: map_at_10 value: 33.522 - type: map_at_100 value: 34.963 - type: map_at_1000 value: 35.175 - type: map_at_3 value: 31.366 - type: map_at_5 value: 32.621 - type: mrr_at_1 value: 31.028 - type: mrr_at_10 value: 37.230000000000004 - type: mrr_at_100 value: 38.149 - type: mrr_at_1000 value: 38.218 - type: mrr_at_3 value: 35.046 - type: mrr_at_5 value: 36.617 - type: ndcg_at_1 value: 31.028 - type: ndcg_at_10 value: 37.964999999999996 - type: ndcg_at_100 value: 43.342000000000006 - type: ndcg_at_1000 value: 46.471000000000004 - type: ndcg_at_3 value: 34.67 - type: ndcg_at_5 value: 36.458 - type: precision_at_1 value: 31.028 - type: precision_at_10 value: 6.937 - type: precision_at_100 value: 1.346 - type: precision_at_1000 value: 0.22799999999999998 - type: precision_at_3 value: 15.942 - type: precision_at_5 value: 11.462 - type: recall_at_1 value: 26.264 - type: recall_at_10 value: 45.571 - type: recall_at_100 value: 70.246 - type: recall_at_1000 value: 90.971 - type: recall_at_3 value: 36.276 - type: recall_at_5 value: 41.162 - type: map_at_1 value: 23.372999999999998 - type: map_at_10 value: 28.992 - type: map_at_100 value: 29.837999999999997 - type: map_at_1000 value: 29.939 - type: map_at_3 value: 26.999000000000002 - type: map_at_5 value: 28.044999999999998 - type: mrr_at_1 value: 25.692999999999998 - type: mrr_at_10 value: 30.984 - type: mrr_at_100 value: 31.799 - type: mrr_at_1000 value: 31.875999999999998 - type: mrr_at_3 value: 29.267 - type: mrr_at_5 value: 30.163 - type: ndcg_at_1 value: 25.692999999999998 - type: ndcg_at_10 value: 32.45 - type: ndcg_at_100 value: 37.103 - type: ndcg_at_1000 value: 39.678000000000004 - type: ndcg_at_3 value: 28.725 - type: ndcg_at_5 value: 30.351 - type: precision_at_1 value: 25.692999999999998 - type: precision_at_10 value: 4.806 - type: precision_at_100 value: 0.765 - type: precision_at_1000 value: 0.108 - type: precision_at_3 value: 11.768 - type: precision_at_5 value: 8.096 - type: recall_at_1 value: 23.372999999999998 - type: recall_at_10 value: 41.281 - type: recall_at_100 value: 63.465 - type: recall_at_1000 value: 82.575 - type: recall_at_3 value: 31.063000000000002 - type: recall_at_5 value: 34.991 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: climate-fever config: default split: test revision: None metrics: - type: map_at_1 value: 8.821 - type: map_at_10 value: 15.383 - type: map_at_100 value: 17.244999999999997 - type: map_at_1000 value: 17.445 - type: map_at_3 value: 12.64 - type: map_at_5 value: 13.941999999999998 - type: mrr_at_1 value: 19.544 - type: mrr_at_10 value: 29.738999999999997 - type: mrr_at_100 value: 30.923000000000002 - type: mrr_at_1000 value: 30.969 - type: mrr_at_3 value: 26.384 - type: mrr_at_5 value: 28.199 - type: ndcg_at_1 value: 19.544 - type: ndcg_at_10 value: 22.398 - type: ndcg_at_100 value: 30.253999999999998 - type: ndcg_at_1000 value: 33.876 - type: ndcg_at_3 value: 17.473 - type: ndcg_at_5 value: 19.154 - type: precision_at_1 value: 19.544 - type: precision_at_10 value: 7.217999999999999 - type: precision_at_100 value: 1.564 - type: precision_at_1000 value: 0.22300000000000003 - type: precision_at_3 value: 13.225000000000001 - type: precision_at_5 value: 10.319 - type: recall_at_1 value: 8.821 - type: recall_at_10 value: 28.110000000000003 - type: recall_at_100 value: 55.64 - type: recall_at_1000 value: 75.964 - type: recall_at_3 value: 16.195 - type: recall_at_5 value: 20.678 - task: type: Retrieval dataset: name: MTEB DBPedia type: dbpedia-entity config: default split: test revision: None metrics: - type: map_at_1 value: 9.344 - type: map_at_10 value: 20.301 - type: map_at_100 value: 28.709 - type: map_at_1000 value: 30.470999999999997 - type: map_at_3 value: 14.584 - type: map_at_5 value: 16.930999999999997 - type: mrr_at_1 value: 67.25 - type: mrr_at_10 value: 75.393 - type: mrr_at_100 value: 75.742 - type: mrr_at_1000 value: 75.75 - type: mrr_at_3 value: 73.958 - type: mrr_at_5 value: 74.883 - type: ndcg_at_1 value: 56.00000000000001 - type: ndcg_at_10 value: 42.394 - type: ndcg_at_100 value: 47.091 - type: ndcg_at_1000 value: 54.215 - type: ndcg_at_3 value: 46.995 - type: ndcg_at_5 value: 44.214999999999996 - type: precision_at_1 value: 67.25 - type: precision_at_10 value: 33.525 - type: precision_at_100 value: 10.67 - type: precision_at_1000 value: 2.221 - type: precision_at_3 value: 49.417 - type: precision_at_5 value: 42.15 - type: recall_at_1 value: 9.344 - type: recall_at_10 value: 25.209 - type: recall_at_100 value: 52.329 - type: recall_at_1000 value: 74.2 - type: recall_at_3 value: 15.699 - type: recall_at_5 value: 19.24 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 48.05 - type: f1 value: 43.06718139212933 - task: type: Retrieval dataset: name: MTEB FEVER type: fever config: default split: test revision: None metrics: - type: map_at_1 value: 46.452 - type: map_at_10 value: 58.825 - type: map_at_100 value: 59.372 - type: map_at_1000 value: 59.399 - type: map_at_3 value: 56.264 - type: map_at_5 value: 57.879999999999995 - type: mrr_at_1 value: 49.82 - type: mrr_at_10 value: 62.178999999999995 - type: mrr_at_100 value: 62.641999999999996 - type: mrr_at_1000 value: 62.658 - type: mrr_at_3 value: 59.706 - type: mrr_at_5 value: 61.283 - type: ndcg_at_1 value: 49.82 - type: ndcg_at_10 value: 65.031 - type: ndcg_at_100 value: 67.413 - type: ndcg_at_1000 value: 68.014 - type: ndcg_at_3 value: 60.084 - type: ndcg_at_5 value: 62.858000000000004 - type: precision_at_1 value: 49.82 - type: precision_at_10 value: 8.876000000000001 - type: precision_at_100 value: 1.018 - type: precision_at_1000 value: 0.109 - type: precision_at_3 value: 24.477 - type: precision_at_5 value: 16.208 - type: recall_at_1 value: 46.452 - type: recall_at_10 value: 80.808 - type: recall_at_100 value: 91.215 - type: recall_at_1000 value: 95.52000000000001 - type: recall_at_3 value: 67.62899999999999 - type: recall_at_5 value: 74.32900000000001 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: fiqa config: default split: test revision: None metrics: - type: map_at_1 value: 18.351 - type: map_at_10 value: 30.796 - type: map_at_100 value: 32.621 - type: map_at_1000 value: 32.799 - type: map_at_3 value: 26.491 - type: map_at_5 value: 28.933999999999997 - type: mrr_at_1 value: 36.265 - type: mrr_at_10 value: 45.556999999999995 - type: mrr_at_100 value: 46.323 - type: mrr_at_1000 value: 46.359 - type: mrr_at_3 value: 42.695 - type: mrr_at_5 value: 44.324000000000005 - type: ndcg_at_1 value: 36.265 - type: ndcg_at_10 value: 38.558 - type: ndcg_at_100 value: 45.18 - type: ndcg_at_1000 value: 48.292 - type: ndcg_at_3 value: 34.204 - type: ndcg_at_5 value: 35.735 - type: precision_at_1 value: 36.265 - type: precision_at_10 value: 10.879999999999999 - type: precision_at_100 value: 1.77 - type: precision_at_1000 value: 0.234 - type: precision_at_3 value: 23.044999999999998 - type: precision_at_5 value: 17.253 - type: recall_at_1 value: 18.351 - type: recall_at_10 value: 46.116 - type: recall_at_100 value: 70.786 - type: recall_at_1000 value: 89.46300000000001 - type: recall_at_3 value: 31.404 - type: recall_at_5 value: 37.678 - task: type: Retrieval dataset: name: MTEB HotpotQA type: hotpotqa config: default split: test revision: None metrics: - type: map_at_1 value: 36.847 - type: map_at_10 value: 54.269999999999996 - type: map_at_100 value: 55.152 - type: map_at_1000 value: 55.223 - type: map_at_3 value: 51.166 - type: map_at_5 value: 53.055 - type: mrr_at_1 value: 73.693 - type: mrr_at_10 value: 79.975 - type: mrr_at_100 value: 80.202 - type: mrr_at_1000 value: 80.214 - type: mrr_at_3 value: 78.938 - type: mrr_at_5 value: 79.595 - type: ndcg_at_1 value: 73.693 - type: ndcg_at_10 value: 63.334999999999994 - type: ndcg_at_100 value: 66.452 - type: ndcg_at_1000 value: 67.869 - type: ndcg_at_3 value: 58.829 - type: ndcg_at_5 value: 61.266 - type: precision_at_1 value: 73.693 - type: precision_at_10 value: 13.122 - type: precision_at_100 value: 1.5559999999999998 - type: precision_at_1000 value: 0.174 - type: precision_at_3 value: 37.083 - type: precision_at_5 value: 24.169999999999998 - type: recall_at_1 value: 36.847 - type: recall_at_10 value: 65.61099999999999 - type: recall_at_100 value: 77.792 - type: recall_at_1000 value: 87.17099999999999 - type: recall_at_3 value: 55.625 - type: recall_at_5 value: 60.425 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 82.1096 - type: ap value: 76.67089212843918 - type: f1 value: 82.03535056754939 - task: type: Retrieval dataset: name: MTEB MSMARCO type: msmarco config: default split: dev revision: None metrics: - type: map_at_1 value: 24.465 - type: map_at_10 value: 37.072 - type: map_at_100 value: 38.188 - type: map_at_1000 value: 38.232 - type: map_at_3 value: 33.134 - type: map_at_5 value: 35.453 - type: mrr_at_1 value: 25.142999999999997 - type: mrr_at_10 value: 37.669999999999995 - type: mrr_at_100 value: 38.725 - type: mrr_at_1000 value: 38.765 - type: mrr_at_3 value: 33.82 - type: mrr_at_5 value: 36.111 - type: ndcg_at_1 value: 25.142999999999997 - type: ndcg_at_10 value: 44.054 - type: ndcg_at_100 value: 49.364000000000004 - type: ndcg_at_1000 value: 50.456 - type: ndcg_at_3 value: 36.095 - type: ndcg_at_5 value: 40.23 - type: precision_at_1 value: 25.142999999999997 - type: precision_at_10 value: 6.845 - type: precision_at_100 value: 0.95 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 15.204999999999998 - type: precision_at_5 value: 11.221 - type: recall_at_1 value: 24.465 - type: recall_at_10 value: 65.495 - type: recall_at_100 value: 89.888 - type: recall_at_1000 value: 98.165 - type: recall_at_3 value: 43.964 - type: recall_at_5 value: 53.891 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 93.86228910168718 - type: f1 value: 93.69177113259104 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 76.3999088007296 - type: f1 value: 58.96668664333438 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 73.21788836583727 - type: f1 value: 71.4545936552952 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 77.39071956960323 - type: f1 value: 77.12398952847603 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 32.255379528166955 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 29.66423362872814 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 30.782211620375964 - type: mrr value: 31.773479703044956 - task: type: Retrieval dataset: name: MTEB NFCorpus type: nfcorpus config: default split: test revision: None metrics: - type: map_at_1 value: 5.863 - type: map_at_10 value: 13.831 - type: map_at_100 value: 17.534 - type: map_at_1000 value: 19.012 - type: map_at_3 value: 10.143 - type: map_at_5 value: 12.034 - type: mrr_at_1 value: 46.749 - type: mrr_at_10 value: 55.376999999999995 - type: mrr_at_100 value: 56.009 - type: mrr_at_1000 value: 56.042 - type: mrr_at_3 value: 53.30200000000001 - type: mrr_at_5 value: 54.85 - type: ndcg_at_1 value: 44.582 - type: ndcg_at_10 value: 36.07 - type: ndcg_at_100 value: 33.39 - type: ndcg_at_1000 value: 41.884 - type: ndcg_at_3 value: 41.441 - type: ndcg_at_5 value: 39.861000000000004 - type: precision_at_1 value: 46.129999999999995 - type: precision_at_10 value: 26.594 - type: precision_at_100 value: 8.365 - type: precision_at_1000 value: 2.1260000000000003 - type: precision_at_3 value: 39.009 - type: precision_at_5 value: 34.861 - type: recall_at_1 value: 5.863 - type: recall_at_10 value: 17.961 - type: recall_at_100 value: 34.026 - type: recall_at_1000 value: 64.46499999999999 - type: recall_at_3 value: 11.242 - type: recall_at_5 value: 14.493 - task: type: Retrieval dataset: name: MTEB NQ type: nq config: default split: test revision: None metrics: - type: map_at_1 value: 38.601 - type: map_at_10 value: 55.293000000000006 - type: map_at_100 value: 56.092 - type: map_at_1000 value: 56.111999999999995 - type: map_at_3 value: 51.269 - type: map_at_5 value: 53.787 - type: mrr_at_1 value: 43.221 - type: mrr_at_10 value: 57.882999999999996 - type: mrr_at_100 value: 58.408 - type: mrr_at_1000 value: 58.421 - type: mrr_at_3 value: 54.765 - type: mrr_at_5 value: 56.809 - type: ndcg_at_1 value: 43.221 - type: ndcg_at_10 value: 62.858999999999995 - type: ndcg_at_100 value: 65.987 - type: ndcg_at_1000 value: 66.404 - type: ndcg_at_3 value: 55.605000000000004 - type: ndcg_at_5 value: 59.723000000000006 - type: precision_at_1 value: 43.221 - type: precision_at_10 value: 9.907 - type: precision_at_100 value: 1.169 - type: precision_at_1000 value: 0.121 - type: precision_at_3 value: 25.019000000000002 - type: precision_at_5 value: 17.474 - type: recall_at_1 value: 38.601 - type: recall_at_10 value: 82.966 - type: recall_at_100 value: 96.154 - type: recall_at_1000 value: 99.223 - type: recall_at_3 value: 64.603 - type: recall_at_5 value: 73.97200000000001 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: quora config: default split: test revision: None metrics: - type: map_at_1 value: 70.77 - type: map_at_10 value: 84.429 - type: map_at_100 value: 85.04599999999999 - type: map_at_1000 value: 85.065 - type: map_at_3 value: 81.461 - type: map_at_5 value: 83.316 - type: mrr_at_1 value: 81.51 - type: mrr_at_10 value: 87.52799999999999 - type: mrr_at_100 value: 87.631 - type: mrr_at_1000 value: 87.632 - type: mrr_at_3 value: 86.533 - type: mrr_at_5 value: 87.214 - type: ndcg_at_1 value: 81.47999999999999 - type: ndcg_at_10 value: 88.181 - type: ndcg_at_100 value: 89.39200000000001 - type: ndcg_at_1000 value: 89.52 - type: ndcg_at_3 value: 85.29299999999999 - type: ndcg_at_5 value: 86.88 - type: precision_at_1 value: 81.47999999999999 - type: precision_at_10 value: 13.367 - type: precision_at_100 value: 1.5230000000000001 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 37.227 - type: precision_at_5 value: 24.494 - type: recall_at_1 value: 70.77 - type: recall_at_10 value: 95.199 - type: recall_at_100 value: 99.37700000000001 - type: recall_at_1000 value: 99.973 - type: recall_at_3 value: 86.895 - type: recall_at_5 value: 91.396 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 50.686353396858344 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 61.3664675312921 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: scidocs config: default split: test revision: None metrics: - type: map_at_1 value: 4.7379999999999995 - type: map_at_10 value: 12.01 - type: map_at_100 value: 14.02 - type: map_at_1000 value: 14.310999999999998 - type: map_at_3 value: 8.459 - type: map_at_5 value: 10.281 - type: mrr_at_1 value: 23.3 - type: mrr_at_10 value: 34.108 - type: mrr_at_100 value: 35.217 - type: mrr_at_1000 value: 35.272 - type: mrr_at_3 value: 30.833 - type: mrr_at_5 value: 32.768 - type: ndcg_at_1 value: 23.3 - type: ndcg_at_10 value: 20.116999999999997 - type: ndcg_at_100 value: 27.961000000000002 - type: ndcg_at_1000 value: 33.149 - type: ndcg_at_3 value: 18.902 - type: ndcg_at_5 value: 16.742 - type: precision_at_1 value: 23.3 - type: precision_at_10 value: 10.47 - type: precision_at_100 value: 2.177 - type: precision_at_1000 value: 0.34299999999999997 - type: precision_at_3 value: 17.567 - type: precision_at_5 value: 14.78 - type: recall_at_1 value: 4.7379999999999995 - type: recall_at_10 value: 21.221999999999998 - type: recall_at_100 value: 44.242 - type: recall_at_1000 value: 69.652 - type: recall_at_3 value: 10.688 - type: recall_at_5 value: 14.982999999999999 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 84.84572946827069 - type: cos_sim_spearman value: 80.48508130408966 - type: euclidean_pearson value: 82.0481530027767 - type: euclidean_spearman value: 80.45902876782752 - type: manhattan_pearson value: 82.03728222483326 - type: manhattan_spearman value: 80.45684282911755 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 84.33476464677516 - type: cos_sim_spearman value: 75.93057758003266 - type: euclidean_pearson value: 80.89685744015691 - type: euclidean_spearman value: 76.29929953441706 - type: manhattan_pearson value: 80.91391345459995 - type: manhattan_spearman value: 76.31985463110914 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 84.63686106359005 - type: cos_sim_spearman value: 85.22240034668202 - type: euclidean_pearson value: 84.6074814189106 - type: euclidean_spearman value: 85.17169644755828 - type: manhattan_pearson value: 84.48329306239368 - type: manhattan_spearman value: 85.0086508544768 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 82.95455774064745 - type: cos_sim_spearman value: 80.54074646118492 - type: euclidean_pearson value: 81.79598955554704 - type: euclidean_spearman value: 80.55837617606814 - type: manhattan_pearson value: 81.78213797905386 - type: manhattan_spearman value: 80.5666746878273 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 87.92813309124739 - type: cos_sim_spearman value: 88.81459873052108 - type: euclidean_pearson value: 88.21193118930564 - type: euclidean_spearman value: 88.87072745043731 - type: manhattan_pearson value: 88.22576929706727 - type: manhattan_spearman value: 88.8867671095791 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 83.6881529671839 - type: cos_sim_spearman value: 85.2807092969554 - type: euclidean_pearson value: 84.62334178652704 - type: euclidean_spearman value: 85.2116373296784 - type: manhattan_pearson value: 84.54948211541777 - type: manhattan_spearman value: 85.10737722637882 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 88.55963694458408 - type: cos_sim_spearman value: 89.36731628848683 - type: euclidean_pearson value: 89.64975952985465 - type: euclidean_spearman value: 89.29689484033007 - type: manhattan_pearson value: 89.61234491713135 - type: manhattan_spearman value: 89.20302520255782 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 62.411800961903886 - type: cos_sim_spearman value: 62.99105515749963 - type: euclidean_pearson value: 65.29826669549443 - type: euclidean_spearman value: 63.29880964105775 - type: manhattan_pearson value: 65.00126190601183 - type: manhattan_spearman value: 63.32011025899179 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 85.83498531837608 - type: cos_sim_spearman value: 87.21366640615442 - type: euclidean_pearson value: 86.74764288798261 - type: euclidean_spearman value: 87.06060470780834 - type: manhattan_pearson value: 86.65971223951476 - type: manhattan_spearman value: 86.99814399831457 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 83.94448463485881 - type: mrr value: 95.36291867174221 - task: type: Retrieval dataset: name: MTEB SciFact type: scifact config: default split: test revision: None metrics: - type: map_at_1 value: 59.928000000000004 - type: map_at_10 value: 68.577 - type: map_at_100 value: 69.35900000000001 - type: map_at_1000 value: 69.37299999999999 - type: map_at_3 value: 66.217 - type: map_at_5 value: 67.581 - type: mrr_at_1 value: 63 - type: mrr_at_10 value: 69.994 - type: mrr_at_100 value: 70.553 - type: mrr_at_1000 value: 70.56700000000001 - type: mrr_at_3 value: 68.167 - type: mrr_at_5 value: 69.11699999999999 - type: ndcg_at_1 value: 63 - type: ndcg_at_10 value: 72.58 - type: ndcg_at_100 value: 75.529 - type: ndcg_at_1000 value: 76.009 - type: ndcg_at_3 value: 68.523 - type: ndcg_at_5 value: 70.301 - type: precision_at_1 value: 63 - type: precision_at_10 value: 9.333 - type: precision_at_100 value: 1.09 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 26.444000000000003 - type: precision_at_5 value: 17.067 - type: recall_at_1 value: 59.928000000000004 - type: recall_at_10 value: 83.544 - type: recall_at_100 value: 96 - type: recall_at_1000 value: 100 - type: recall_at_3 value: 72.072 - type: recall_at_5 value: 76.683 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.82178217821782 - type: cos_sim_ap value: 95.41507679819003 - type: cos_sim_f1 value: 90.9456740442656 - type: cos_sim_precision value: 91.49797570850203 - type: cos_sim_recall value: 90.4 - type: dot_accuracy value: 99.77227722772277 - type: dot_ap value: 92.50123869445967 - type: dot_f1 value: 88.18414322250638 - type: dot_precision value: 90.26178010471205 - type: dot_recall value: 86.2 - type: euclidean_accuracy value: 99.81782178217821 - type: euclidean_ap value: 95.3935066749006 - type: euclidean_f1 value: 90.66128218071681 - type: euclidean_precision value: 91.53924566768603 - type: euclidean_recall value: 89.8 - type: manhattan_accuracy value: 99.81881188118813 - type: manhattan_ap value: 95.39767454613512 - type: manhattan_f1 value: 90.62019477191186 - type: manhattan_precision value: 92.95478443743428 - type: manhattan_recall value: 88.4 - type: max_accuracy value: 99.82178217821782 - type: max_ap value: 95.41507679819003 - type: max_f1 value: 90.9456740442656 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 64.96313921233748 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 33.602625720956745 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 51.32659230651731 - type: mrr value: 52.33861726508785 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 31.01587644214203 - type: cos_sim_spearman value: 30.974306908731013 - type: dot_pearson value: 29.83339853838187 - type: dot_spearman value: 30.07761671934048 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: trec-covid config: default split: test revision: None metrics: - type: map_at_1 value: 0.22 - type: map_at_10 value: 1.9539999999999997 - type: map_at_100 value: 11.437 - type: map_at_1000 value: 27.861000000000004 - type: map_at_3 value: 0.6479999999999999 - type: map_at_5 value: 1.0410000000000001 - type: mrr_at_1 value: 84 - type: mrr_at_10 value: 90.333 - type: mrr_at_100 value: 90.333 - type: mrr_at_1000 value: 90.333 - type: mrr_at_3 value: 90.333 - type: mrr_at_5 value: 90.333 - type: ndcg_at_1 value: 80 - type: ndcg_at_10 value: 78.31700000000001 - type: ndcg_at_100 value: 59.396 - type: ndcg_at_1000 value: 52.733 - type: ndcg_at_3 value: 81.46900000000001 - type: ndcg_at_5 value: 80.74 - type: precision_at_1 value: 84 - type: precision_at_10 value: 84 - type: precision_at_100 value: 60.980000000000004 - type: precision_at_1000 value: 23.432 - type: precision_at_3 value: 87.333 - type: precision_at_5 value: 86.8 - type: recall_at_1 value: 0.22 - type: recall_at_10 value: 2.156 - type: recall_at_100 value: 14.557999999999998 - type: recall_at_1000 value: 49.553999999999995 - type: recall_at_3 value: 0.685 - type: recall_at_5 value: 1.121 - task: type: Retrieval dataset: name: MTEB Touche2020 type: webis-touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 3.373 - type: map_at_10 value: 11.701 - type: map_at_100 value: 17.144000000000002 - type: map_at_1000 value: 18.624 - type: map_at_3 value: 6.552 - type: map_at_5 value: 9.372 - type: mrr_at_1 value: 38.775999999999996 - type: mrr_at_10 value: 51.975 - type: mrr_at_100 value: 52.873999999999995 - type: mrr_at_1000 value: 52.873999999999995 - type: mrr_at_3 value: 47.619 - type: mrr_at_5 value: 50.578 - type: ndcg_at_1 value: 36.735 - type: ndcg_at_10 value: 27.212999999999997 - type: ndcg_at_100 value: 37.245 - type: ndcg_at_1000 value: 48.602000000000004 - type: ndcg_at_3 value: 30.916 - type: ndcg_at_5 value: 30.799 - type: precision_at_1 value: 38.775999999999996 - type: precision_at_10 value: 23.469 - type: precision_at_100 value: 7.327 - type: precision_at_1000 value: 1.486 - type: precision_at_3 value: 31.973000000000003 - type: precision_at_5 value: 32.245000000000005 - type: recall_at_1 value: 3.373 - type: recall_at_10 value: 17.404 - type: recall_at_100 value: 46.105000000000004 - type: recall_at_1000 value: 80.35 - type: recall_at_3 value: 7.4399999999999995 - type: recall_at_5 value: 12.183 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 70.5592 - type: ap value: 14.330910591410134 - type: f1 value: 54.45745186286521 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 61.20543293718167 - type: f1 value: 61.45365480309872 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 43.81162998944145 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 86.69011146212075 - type: cos_sim_ap value: 76.09792353652536 - type: cos_sim_f1 value: 70.10202763786646 - type: cos_sim_precision value: 68.65671641791045 - type: cos_sim_recall value: 71.60949868073878 - type: dot_accuracy value: 85.33110806461227 - type: dot_ap value: 70.19304383327554 - type: dot_f1 value: 67.22494202525122 - type: dot_precision value: 65.6847935548842 - type: dot_recall value: 68.83905013192611 - type: euclidean_accuracy value: 86.5410979316922 - type: euclidean_ap value: 75.91906915651882 - type: euclidean_f1 value: 69.6798975672215 - type: euclidean_precision value: 67.6865671641791 - type: euclidean_recall value: 71.79419525065963 - type: manhattan_accuracy value: 86.60070334386363 - type: manhattan_ap value: 75.94617413885031 - type: manhattan_f1 value: 69.52689565780946 - type: manhattan_precision value: 68.3312101910828 - type: manhattan_recall value: 70.76517150395777 - type: max_accuracy value: 86.69011146212075 - type: max_ap value: 76.09792353652536 - type: max_f1 value: 70.10202763786646 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 89.25951798812434 - type: cos_sim_ap value: 86.31476416599727 - type: cos_sim_f1 value: 78.52709971038477 - type: cos_sim_precision value: 76.7629972792117 - type: cos_sim_recall value: 80.37419156144134 - type: dot_accuracy value: 88.03896456708192 - type: dot_ap value: 83.26963599196237 - type: dot_f1 value: 76.72696459492317 - type: dot_precision value: 73.56411162133521 - type: dot_recall value: 80.17400677548507 - type: euclidean_accuracy value: 89.21682772538519 - type: euclidean_ap value: 86.29306071289969 - type: euclidean_f1 value: 78.40827030519554 - type: euclidean_precision value: 77.42250243939053 - type: euclidean_recall value: 79.41946412072683 - type: manhattan_accuracy value: 89.22458959133776 - type: manhattan_ap value: 86.2901934710645 - type: manhattan_f1 value: 78.54211378440453 - type: manhattan_precision value: 76.85505858079729 - type: manhattan_recall value: 80.30489682784109 - type: max_accuracy value: 89.25951798812434 - type: max_ap value: 86.31476416599727 - type: max_f1 value: 78.54211378440453 --- # # Fast-Inference with Ctranslate2 Speedup inference while reducing memory by 2x-4x using int8 inference in C++ on CPU or GPU. quantized version of [intfloat/e5-large](https://huggingface.co/intfloat/e5-large) ```bash pip install hf-hub-ctranslate2>=2.12.0 ctranslate2>=3.17.1 ``` ```python # from transformers import AutoTokenizer model_name = "michaelfeil/ct2fast-e5-large" model_name_orig="intfloat/e5-large" from hf_hub_ctranslate2 import EncoderCT2fromHfHub model = EncoderCT2fromHfHub( # load in int8 on CUDA model_name_or_path=model_name, device="cuda", compute_type="int8_float16" ) outputs = model.generate( text=["I like soccer", "I like tennis", "The eiffel tower is in Paris"], max_length=64, ) # perform downstream tasks on outputs outputs["pooler_output"] outputs["last_hidden_state"] outputs["attention_mask"] # alternative, use SentenceTransformer Mix-In # for end-to-end Sentence embeddings generation # (not pulling from this CT2fast-HF repo) from hf_hub_ctranslate2 import CT2SentenceTransformer model = CT2SentenceTransformer( model_name_orig, compute_type="int8_float16", device="cuda" ) embeddings = model.encode( ["I like soccer", "I like tennis", "The eiffel tower is in Paris"], batch_size=32, convert_to_numpy=True, normalize_embeddings=True, ) print(embeddings.shape, embeddings) scores = (embeddings @ embeddings.T) * 100 # Hint: you can also host this code via REST API and # via github.com/michaelfeil/infinity ``` Checkpoint compatible to [ctranslate2>=3.17.1](https://github.com/OpenNMT/CTranslate2) and [hf-hub-ctranslate2>=2.12.0](https://github.com/michaelfeil/hf-hub-ctranslate2) - `compute_type=int8_float16` for `device="cuda"` - `compute_type=int8` for `device="cpu"` Converted on 2023-10-13 using ``` LLama-2 -> removed <pad> token. ``` # Licence and other remarks: This is just a quantized version. Licence conditions are intended to be idential to original huggingface repo. # Original description ## E5-large **News (May 2023): please switch to [e5-large-v2](https://huggingface.co/intfloat/e5-large-v2), which has better performance and same method of usage.** [Text Embeddings by Weakly-Supervised Contrastive Pre-training](https://arxiv.org/pdf/2212.03533.pdf). Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei, arXiv 2022 This model has 24 layers and the embedding size is 1024. ## Usage Below is an example to encode queries and passages from the MS-MARCO passage ranking dataset. ```python import torch.nn.functional as F from torch import Tensor from transformers import AutoTokenizer, AutoModel def average_pool(last_hidden_states: Tensor, attention_mask: Tensor) -> Tensor: last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0) return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None] # Each input text should start with "query: " or "passage: ". # For tasks other than retrieval, you can simply use the "query: " prefix. input_texts = ['query: how much protein should a female eat', 'query: summit define', "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.", "passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."] tokenizer = AutoTokenizer.from_pretrained('intfloat/e5-large') model = AutoModel.from_pretrained('intfloat/e5-large') # Tokenize the input texts batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt') outputs = model(**batch_dict) embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask']) # normalize embeddings embeddings = F.normalize(embeddings, p=2, dim=1) scores = (embeddings[:2] @ embeddings[2:].T) * 100 print(scores.tolist()) ``` ## Training Details Please refer to our paper at [https://arxiv.org/pdf/2212.03533.pdf](https://arxiv.org/pdf/2212.03533.pdf). ## Benchmark Evaluation Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB benchmark](https://arxiv.org/abs/2210.07316). ## Support for Sentence Transformers Below is an example for usage with sentence_transformers. ```python from sentence_transformers import SentenceTransformer model = SentenceTransformer('intfloat/e5-large') input_texts = [ 'query: how much protein should a female eat', 'query: summit define', "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.", "passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments." ] embeddings = model.encode(input_texts, normalize_embeddings=True) ``` Package requirements `pip install sentence_transformers~=2.2.2` Contributors: [michaelfeil](https://huggingface.co/michaelfeil) ## FAQ **1. Do I need to add the prefix "query: " and "passage: " to input texts?** Yes, this is how the model is trained, otherwise you will see a performance degradation. Here are some rules of thumb: - Use "query: " and "passage: " correspondingly for asymmetric tasks such as passage retrieval in open QA, ad-hoc information retrieval. - Use "query: " prefix for symmetric tasks such as semantic similarity, paraphrase retrieval. - Use "query: " prefix if you want to use embeddings as features, such as linear probing classification, clustering. **2. Why are my reproduced results slightly different from reported in the model card?** Different versions of `transformers` and `pytorch` could cause negligible but non-zero performance differences. **3. Why does the cosine similarity scores distribute around 0.7 to 1.0?** This is a known and expected behavior as we use a low temperature 0.01 for InfoNCE contrastive loss. For text embedding tasks like text retrieval or semantic similarity, what matters is the relative order of the scores instead of the absolute values, so this should not be an issue. ## Citation If you find our paper or models helpful, please consider cite as follows: ``` @article{wang2022text, title={Text Embeddings by Weakly-Supervised Contrastive Pre-training}, author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Jiao, Binxing and Yang, Linjun and Jiang, Daxin and Majumder, Rangan and Wei, Furu}, journal={arXiv preprint arXiv:2212.03533}, year={2022} } ``` ## Limitations This model only works for English texts. Long texts will be truncated to at most 512 tokens.
[ "BIOSSES", "SCIFACT" ]
IIC/xlm-roberta-large-ctebmsp
IIC
token-classification
[ "transformers", "pytorch", "xlm-roberta", "text-classification", "biomedical", "clinical", "spanish", "xlm-roberta-large", "token-classification", "es", "dataset:lcampillos/ctebmsp", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-06-21T06:53:20Z
2024-11-25T10:41:33+00:00
13
0
--- datasets: - lcampillos/ctebmsp language: es license: mit metrics: - f1 pipeline_tag: token-classification tags: - biomedical - clinical - spanish - xlm-roberta-large model-index: - name: IIC/xlm-roberta-large-ctebmsp results: - task: type: token-classification dataset: name: CT-EBM-SP (Clinical Trials for Evidence-based Medicine in Spanish) type: lcampillos/ctebmsp split: test metrics: - type: f1 value: 0.906 name: f1 --- # xlm-roberta-large-ctebmsp This model is a finetuned version of xlm-roberta-large for the CT-EBM-SP (Clinical Trials for Evidence-based Medicine in Spanish) dataset used in a benchmark in the paper `A comparative analysis of Spanish Clinical encoder-based models on NER and classification tasks`. The model has a F1 of 0.906 Please refer to the [original publication](https://doi.org/10.1093/jamia/ocae054) for more information. ## Parameters used | parameter | Value | |-------------------------|:-----:| | batch size | 64 | | learning rate | 2e-05 | | classifier dropout | 0.1 | | warmup ratio | 0 | | warmup steps | 0 | | weight decay | 0 | | optimizer | AdamW | | epochs | 10 | | early stopping patience | 3 | ## BibTeX entry and citation info ```bibtext @article{10.1093/jamia/ocae054, author = {García Subies, Guillem and Barbero Jiménez, Álvaro and Martínez Fernández, Paloma}, title = {A comparative analysis of Spanish Clinical encoder-based models on NER and classification tasks}, journal = {Journal of the American Medical Informatics Association}, volume = {31}, number = {9}, pages = {2137-2146}, year = {2024}, month = {03}, issn = {1527-974X}, doi = {10.1093/jamia/ocae054}, url = {https://doi.org/10.1093/jamia/ocae054}, } ```
[ "CT-EBM-SP" ]
IIC/XLM_R_Galen-ctebmsp
IIC
token-classification
[ "transformers", "pytorch", "xlm-roberta", "text-classification", "biomedical", "clinical", "spanish", "XLM_R_Galen", "token-classification", "es", "dataset:lcampillos/ctebmsp", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-06-21T06:58:19Z
2024-11-25T10:41:36+00:00
13
0
--- datasets: - lcampillos/ctebmsp language: es license: mit metrics: - f1 pipeline_tag: token-classification tags: - biomedical - clinical - spanish - XLM_R_Galen model-index: - name: IIC/XLM_R_Galen-ctebmsp results: - task: type: token-classification dataset: name: CT-EBM-SP (Clinical Trials for Evidence-based Medicine in Spanish) type: lcampillos/ctebmsp split: test metrics: - type: f1 value: 0.881 name: f1 --- # XLM_R_Galen-ctebmsp This model is a finetuned version of XLM_R_Galen for the CT-EBM-SP (Clinical Trials for Evidence-based Medicine in Spanish) dataset used in a benchmark in the paper `A comparative analysis of Spanish Clinical encoder-based models on NER and classification tasks`. The model has a F1 of 0.881 Please refer to the [original publication](https://doi.org/10.1093/jamia/ocae054) for more information. ## Parameters used | parameter | Value | |-------------------------|:-----:| | batch size | 16 | | learning rate | 4e-05 | | classifier dropout | 0.1 | | warmup ratio | 0 | | warmup steps | 0 | | weight decay | 0 | | optimizer | AdamW | | epochs | 10 | | early stopping patience | 3 | ## BibTeX entry and citation info ```bibtext @article{10.1093/jamia/ocae054, author = {García Subies, Guillem and Barbero Jiménez, Álvaro and Martínez Fernández, Paloma}, title = {A comparative analysis of Spanish Clinical encoder-based models on NER and classification tasks}, journal = {Journal of the American Medical Informatics Association}, volume = {31}, number = {9}, pages = {2137-2146}, year = {2024}, month = {03}, issn = {1527-974X}, doi = {10.1093/jamia/ocae054}, url = {https://doi.org/10.1093/jamia/ocae054}, } ```
[ "CT-EBM-SP" ]
newsrx/instructor-large-newsrx
newsrx
sentence-similarity
[ "sentence-transformers", "pytorch", "t5", "text-embedding", "embeddings", "information-retrieval", "beir", "text-classification", "language-model", "text-clustering", "text-semantic-similarity", "text-evaluation", "prompt-retrieval", "text-reranking", "feature-extraction", "sentence-similarity", "transformers", "English", "Sentence Similarity", "natural_questions", "ms_marco", "fever", "hotpot_qa", "mteb", "en", "arxiv:2212.09741", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "region:us" ]
2023-06-21T20:05:33Z
2023-06-21T20:05:33+00:00
13
0
--- language: en license: apache-2.0 pipeline_tag: sentence-similarity tags: - text-embedding - embeddings - information-retrieval - beir - text-classification - language-model - text-clustering - text-semantic-similarity - text-evaluation - prompt-retrieval - text-reranking - sentence-transformers - feature-extraction - sentence-similarity - transformers - t5 - English - Sentence Similarity - natural_questions - ms_marco - fever - hotpot_qa - mteb inference: false duplicated_from: hkunlp/instructor-large model-index: - name: INSTRUCTOR results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 88.13432835820896 - type: ap value: 59.298209334395665 - type: f1 value: 83.31769058643586 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 91.526375 - type: ap value: 88.16327709705504 - type: f1 value: 91.51095801287843 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 47.856 - type: f1 value: 45.41490917650942 - task: type: Retrieval dataset: name: MTEB ArguAna type: arguana config: default split: test revision: None metrics: - type: map_at_1 value: 31.223 - type: map_at_10 value: 47.947 - type: map_at_100 value: 48.742000000000004 - type: map_at_1000 value: 48.745 - type: map_at_3 value: 43.137 - type: map_at_5 value: 45.992 - type: mrr_at_1 value: 32.432 - type: mrr_at_10 value: 48.4 - type: mrr_at_100 value: 49.202 - type: mrr_at_1000 value: 49.205 - type: mrr_at_3 value: 43.551 - type: mrr_at_5 value: 46.467999999999996 - type: ndcg_at_1 value: 31.223 - type: ndcg_at_10 value: 57.045 - type: ndcg_at_100 value: 60.175 - type: ndcg_at_1000 value: 60.233000000000004 - type: ndcg_at_3 value: 47.171 - type: ndcg_at_5 value: 52.322 - type: precision_at_1 value: 31.223 - type: precision_at_10 value: 8.599 - type: precision_at_100 value: 0.991 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 19.63 - type: precision_at_5 value: 14.282 - type: recall_at_1 value: 31.223 - type: recall_at_10 value: 85.989 - type: recall_at_100 value: 99.075 - type: recall_at_1000 value: 99.502 - type: recall_at_3 value: 58.89 - type: recall_at_5 value: 71.408 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 43.1621946393635 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 32.56417132407894 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 64.29539304390207 - type: mrr value: 76.44484017060196 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_spearman value: 84.38746499431112 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 78.51298701298701 - type: f1 value: 77.49041754069235 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 37.61848554098577 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 31.32623280148178 - task: type: Retrieval dataset: name: MTEB CQADupstackAndroidRetrieval type: BeIR/cqadupstack config: default split: test revision: None metrics: - type: map_at_1 value: 35.803000000000004 - type: map_at_10 value: 48.848 - type: map_at_100 value: 50.5 - type: map_at_1000 value: 50.602999999999994 - type: map_at_3 value: 45.111000000000004 - type: map_at_5 value: 47.202 - type: mrr_at_1 value: 44.635000000000005 - type: mrr_at_10 value: 55.593 - type: mrr_at_100 value: 56.169999999999995 - type: mrr_at_1000 value: 56.19499999999999 - type: mrr_at_3 value: 53.361999999999995 - type: mrr_at_5 value: 54.806999999999995 - type: ndcg_at_1 value: 44.635000000000005 - type: ndcg_at_10 value: 55.899 - type: ndcg_at_100 value: 60.958 - type: ndcg_at_1000 value: 62.302 - type: ndcg_at_3 value: 51.051 - type: ndcg_at_5 value: 53.351000000000006 - type: precision_at_1 value: 44.635000000000005 - type: precision_at_10 value: 10.786999999999999 - type: precision_at_100 value: 1.6580000000000001 - type: precision_at_1000 value: 0.213 - type: precision_at_3 value: 24.893 - type: precision_at_5 value: 17.740000000000002 - type: recall_at_1 value: 35.803000000000004 - type: recall_at_10 value: 68.657 - type: recall_at_100 value: 89.77199999999999 - type: recall_at_1000 value: 97.67 - type: recall_at_3 value: 54.066 - type: recall_at_5 value: 60.788 - type: map_at_1 value: 33.706 - type: map_at_10 value: 44.896 - type: map_at_100 value: 46.299 - type: map_at_1000 value: 46.44 - type: map_at_3 value: 41.721000000000004 - type: map_at_5 value: 43.486000000000004 - type: mrr_at_1 value: 41.592 - type: mrr_at_10 value: 50.529 - type: mrr_at_100 value: 51.22 - type: mrr_at_1000 value: 51.258 - type: mrr_at_3 value: 48.205999999999996 - type: mrr_at_5 value: 49.528 - type: ndcg_at_1 value: 41.592 - type: ndcg_at_10 value: 50.77199999999999 - type: ndcg_at_100 value: 55.383 - type: ndcg_at_1000 value: 57.288 - type: ndcg_at_3 value: 46.324 - type: ndcg_at_5 value: 48.346000000000004 - type: precision_at_1 value: 41.592 - type: precision_at_10 value: 9.516 - type: precision_at_100 value: 1.541 - type: precision_at_1000 value: 0.2 - type: precision_at_3 value: 22.399 - type: precision_at_5 value: 15.770999999999999 - type: recall_at_1 value: 33.706 - type: recall_at_10 value: 61.353 - type: recall_at_100 value: 80.182 - type: recall_at_1000 value: 91.896 - type: recall_at_3 value: 48.204 - type: recall_at_5 value: 53.89699999999999 - type: map_at_1 value: 44.424 - type: map_at_10 value: 57.169000000000004 - type: map_at_100 value: 58.202 - type: map_at_1000 value: 58.242000000000004 - type: map_at_3 value: 53.825 - type: map_at_5 value: 55.714 - type: mrr_at_1 value: 50.470000000000006 - type: mrr_at_10 value: 60.489000000000004 - type: mrr_at_100 value: 61.096 - type: mrr_at_1000 value: 61.112 - type: mrr_at_3 value: 58.192 - type: mrr_at_5 value: 59.611999999999995 - type: ndcg_at_1 value: 50.470000000000006 - type: ndcg_at_10 value: 63.071999999999996 - type: ndcg_at_100 value: 66.964 - type: ndcg_at_1000 value: 67.659 - type: ndcg_at_3 value: 57.74399999999999 - type: ndcg_at_5 value: 60.367000000000004 - type: precision_at_1 value: 50.470000000000006 - type: precision_at_10 value: 10.019 - type: precision_at_100 value: 1.29 - type: precision_at_1000 value: 0.13899999999999998 - type: precision_at_3 value: 25.558999999999997 - type: precision_at_5 value: 17.467 - type: recall_at_1 value: 44.424 - type: recall_at_10 value: 77.02 - type: recall_at_100 value: 93.738 - type: recall_at_1000 value: 98.451 - type: recall_at_3 value: 62.888 - type: recall_at_5 value: 69.138 - type: map_at_1 value: 26.294 - type: map_at_10 value: 34.503 - type: map_at_100 value: 35.641 - type: map_at_1000 value: 35.724000000000004 - type: map_at_3 value: 31.753999999999998 - type: map_at_5 value: 33.190999999999995 - type: mrr_at_1 value: 28.362 - type: mrr_at_10 value: 36.53 - type: mrr_at_100 value: 37.541000000000004 - type: mrr_at_1000 value: 37.602000000000004 - type: mrr_at_3 value: 33.917 - type: mrr_at_5 value: 35.358000000000004 - type: ndcg_at_1 value: 28.362 - type: ndcg_at_10 value: 39.513999999999996 - type: ndcg_at_100 value: 44.815 - type: ndcg_at_1000 value: 46.839 - type: ndcg_at_3 value: 34.02 - type: ndcg_at_5 value: 36.522 - type: precision_at_1 value: 28.362 - type: precision_at_10 value: 6.101999999999999 - type: precision_at_100 value: 0.9129999999999999 - type: precision_at_1000 value: 0.11399999999999999 - type: precision_at_3 value: 14.161999999999999 - type: precision_at_5 value: 9.966 - type: recall_at_1 value: 26.294 - type: recall_at_10 value: 53.098 - type: recall_at_100 value: 76.877 - type: recall_at_1000 value: 91.834 - type: recall_at_3 value: 38.266 - type: recall_at_5 value: 44.287 - type: map_at_1 value: 16.407 - type: map_at_10 value: 25.185999999999996 - type: map_at_100 value: 26.533 - type: map_at_1000 value: 26.657999999999998 - type: map_at_3 value: 22.201999999999998 - type: map_at_5 value: 23.923 - type: mrr_at_1 value: 20.522000000000002 - type: mrr_at_10 value: 29.522 - type: mrr_at_100 value: 30.644 - type: mrr_at_1000 value: 30.713 - type: mrr_at_3 value: 26.679000000000002 - type: mrr_at_5 value: 28.483000000000004 - type: ndcg_at_1 value: 20.522000000000002 - type: ndcg_at_10 value: 30.656 - type: ndcg_at_100 value: 36.864999999999995 - type: ndcg_at_1000 value: 39.675 - type: ndcg_at_3 value: 25.319000000000003 - type: ndcg_at_5 value: 27.992 - type: precision_at_1 value: 20.522000000000002 - type: precision_at_10 value: 5.795999999999999 - type: precision_at_100 value: 1.027 - type: precision_at_1000 value: 0.13999999999999999 - type: precision_at_3 value: 12.396 - type: precision_at_5 value: 9.328 - type: recall_at_1 value: 16.407 - type: recall_at_10 value: 43.164 - type: recall_at_100 value: 69.695 - type: recall_at_1000 value: 89.41900000000001 - type: recall_at_3 value: 28.634999999999998 - type: recall_at_5 value: 35.308 - type: map_at_1 value: 30.473 - type: map_at_10 value: 41.676 - type: map_at_100 value: 43.120999999999995 - type: map_at_1000 value: 43.230000000000004 - type: map_at_3 value: 38.306000000000004 - type: map_at_5 value: 40.355999999999995 - type: mrr_at_1 value: 37.536 - type: mrr_at_10 value: 47.643 - type: mrr_at_100 value: 48.508 - type: mrr_at_1000 value: 48.551 - type: mrr_at_3 value: 45.348 - type: mrr_at_5 value: 46.744 - type: ndcg_at_1 value: 37.536 - type: ndcg_at_10 value: 47.823 - type: ndcg_at_100 value: 53.395 - type: ndcg_at_1000 value: 55.271 - type: ndcg_at_3 value: 42.768 - type: ndcg_at_5 value: 45.373000000000005 - type: precision_at_1 value: 37.536 - type: precision_at_10 value: 8.681 - type: precision_at_100 value: 1.34 - type: precision_at_1000 value: 0.165 - type: precision_at_3 value: 20.468 - type: precision_at_5 value: 14.495 - type: recall_at_1 value: 30.473 - type: recall_at_10 value: 60.092999999999996 - type: recall_at_100 value: 82.733 - type: recall_at_1000 value: 94.875 - type: recall_at_3 value: 45.734 - type: recall_at_5 value: 52.691 - type: map_at_1 value: 29.976000000000003 - type: map_at_10 value: 41.097 - type: map_at_100 value: 42.547000000000004 - type: map_at_1000 value: 42.659000000000006 - type: map_at_3 value: 37.251 - type: map_at_5 value: 39.493 - type: mrr_at_1 value: 37.557 - type: mrr_at_10 value: 46.605000000000004 - type: mrr_at_100 value: 47.487 - type: mrr_at_1000 value: 47.54 - type: mrr_at_3 value: 43.721 - type: mrr_at_5 value: 45.411 - type: ndcg_at_1 value: 37.557 - type: ndcg_at_10 value: 47.449000000000005 - type: ndcg_at_100 value: 53.052 - type: ndcg_at_1000 value: 55.010999999999996 - type: ndcg_at_3 value: 41.439 - type: ndcg_at_5 value: 44.292 - type: precision_at_1 value: 37.557 - type: precision_at_10 value: 8.847 - type: precision_at_100 value: 1.357 - type: precision_at_1000 value: 0.16999999999999998 - type: precision_at_3 value: 20.091 - type: precision_at_5 value: 14.384 - type: recall_at_1 value: 29.976000000000003 - type: recall_at_10 value: 60.99099999999999 - type: recall_at_100 value: 84.245 - type: recall_at_1000 value: 96.97200000000001 - type: recall_at_3 value: 43.794 - type: recall_at_5 value: 51.778999999999996 - type: map_at_1 value: 28.099166666666665 - type: map_at_10 value: 38.1365 - type: map_at_100 value: 39.44491666666667 - type: map_at_1000 value: 39.55858333333334 - type: map_at_3 value: 35.03641666666666 - type: map_at_5 value: 36.79833333333334 - type: mrr_at_1 value: 33.39966666666667 - type: mrr_at_10 value: 42.42583333333333 - type: mrr_at_100 value: 43.28575 - type: mrr_at_1000 value: 43.33741666666667 - type: mrr_at_3 value: 39.94975 - type: mrr_at_5 value: 41.41633333333334 - type: ndcg_at_1 value: 33.39966666666667 - type: ndcg_at_10 value: 43.81741666666667 - type: ndcg_at_100 value: 49.08166666666667 - type: ndcg_at_1000 value: 51.121166666666674 - type: ndcg_at_3 value: 38.73575 - type: ndcg_at_5 value: 41.18158333333333 - type: precision_at_1 value: 33.39966666666667 - type: precision_at_10 value: 7.738916666666667 - type: precision_at_100 value: 1.2265833333333331 - type: precision_at_1000 value: 0.15983333333333336 - type: precision_at_3 value: 17.967416666666665 - type: precision_at_5 value: 12.78675 - type: recall_at_1 value: 28.099166666666665 - type: recall_at_10 value: 56.27049999999999 - type: recall_at_100 value: 78.93291666666667 - type: recall_at_1000 value: 92.81608333333334 - type: recall_at_3 value: 42.09775 - type: recall_at_5 value: 48.42533333333334 - type: map_at_1 value: 23.663 - type: map_at_10 value: 30.377 - type: map_at_100 value: 31.426 - type: map_at_1000 value: 31.519000000000002 - type: map_at_3 value: 28.069 - type: map_at_5 value: 29.256999999999998 - type: mrr_at_1 value: 26.687 - type: mrr_at_10 value: 33.107 - type: mrr_at_100 value: 34.055 - type: mrr_at_1000 value: 34.117999999999995 - type: mrr_at_3 value: 31.058000000000003 - type: mrr_at_5 value: 32.14 - type: ndcg_at_1 value: 26.687 - type: ndcg_at_10 value: 34.615 - type: ndcg_at_100 value: 39.776 - type: ndcg_at_1000 value: 42.05 - type: ndcg_at_3 value: 30.322 - type: ndcg_at_5 value: 32.157000000000004 - type: precision_at_1 value: 26.687 - type: precision_at_10 value: 5.491 - type: precision_at_100 value: 0.877 - type: precision_at_1000 value: 0.11499999999999999 - type: precision_at_3 value: 13.139000000000001 - type: precision_at_5 value: 9.049 - type: recall_at_1 value: 23.663 - type: recall_at_10 value: 45.035 - type: recall_at_100 value: 68.554 - type: recall_at_1000 value: 85.077 - type: recall_at_3 value: 32.982 - type: recall_at_5 value: 37.688 - type: map_at_1 value: 17.403 - type: map_at_10 value: 25.197000000000003 - type: map_at_100 value: 26.355 - type: map_at_1000 value: 26.487 - type: map_at_3 value: 22.733 - type: map_at_5 value: 24.114 - type: mrr_at_1 value: 21.37 - type: mrr_at_10 value: 29.091 - type: mrr_at_100 value: 30.018 - type: mrr_at_1000 value: 30.096 - type: mrr_at_3 value: 26.887 - type: mrr_at_5 value: 28.157 - type: ndcg_at_1 value: 21.37 - type: ndcg_at_10 value: 30.026000000000003 - type: ndcg_at_100 value: 35.416 - type: ndcg_at_1000 value: 38.45 - type: ndcg_at_3 value: 25.764 - type: ndcg_at_5 value: 27.742 - type: precision_at_1 value: 21.37 - type: precision_at_10 value: 5.609 - type: precision_at_100 value: 0.9860000000000001 - type: precision_at_1000 value: 0.14300000000000002 - type: precision_at_3 value: 12.423 - type: precision_at_5 value: 9.009 - type: recall_at_1 value: 17.403 - type: recall_at_10 value: 40.573 - type: recall_at_100 value: 64.818 - type: recall_at_1000 value: 86.53699999999999 - type: recall_at_3 value: 28.493000000000002 - type: recall_at_5 value: 33.660000000000004 - type: map_at_1 value: 28.639 - type: map_at_10 value: 38.951 - type: map_at_100 value: 40.238 - type: map_at_1000 value: 40.327 - type: map_at_3 value: 35.842 - type: map_at_5 value: 37.617 - type: mrr_at_1 value: 33.769 - type: mrr_at_10 value: 43.088 - type: mrr_at_100 value: 44.03 - type: mrr_at_1000 value: 44.072 - type: mrr_at_3 value: 40.656 - type: mrr_at_5 value: 42.138999999999996 - type: ndcg_at_1 value: 33.769 - type: ndcg_at_10 value: 44.676 - type: ndcg_at_100 value: 50.416000000000004 - type: ndcg_at_1000 value: 52.227999999999994 - type: ndcg_at_3 value: 39.494 - type: ndcg_at_5 value: 42.013 - type: precision_at_1 value: 33.769 - type: precision_at_10 value: 7.668 - type: precision_at_100 value: 1.18 - type: precision_at_1000 value: 0.145 - type: precision_at_3 value: 18.221 - type: precision_at_5 value: 12.966 - type: recall_at_1 value: 28.639 - type: recall_at_10 value: 57.687999999999995 - type: recall_at_100 value: 82.541 - type: recall_at_1000 value: 94.896 - type: recall_at_3 value: 43.651 - type: recall_at_5 value: 49.925999999999995 - type: map_at_1 value: 29.57 - type: map_at_10 value: 40.004 - type: map_at_100 value: 41.75 - type: map_at_1000 value: 41.97 - type: map_at_3 value: 36.788 - type: map_at_5 value: 38.671 - type: mrr_at_1 value: 35.375 - type: mrr_at_10 value: 45.121 - type: mrr_at_100 value: 45.994 - type: mrr_at_1000 value: 46.04 - type: mrr_at_3 value: 42.227 - type: mrr_at_5 value: 43.995 - type: ndcg_at_1 value: 35.375 - type: ndcg_at_10 value: 46.392 - type: ndcg_at_100 value: 52.196 - type: ndcg_at_1000 value: 54.274 - type: ndcg_at_3 value: 41.163 - type: ndcg_at_5 value: 43.813 - type: precision_at_1 value: 35.375 - type: precision_at_10 value: 8.676 - type: precision_at_100 value: 1.678 - type: precision_at_1000 value: 0.253 - type: precision_at_3 value: 19.104 - type: precision_at_5 value: 13.913 - type: recall_at_1 value: 29.57 - type: recall_at_10 value: 58.779 - type: recall_at_100 value: 83.337 - type: recall_at_1000 value: 95.979 - type: recall_at_3 value: 44.005 - type: recall_at_5 value: 50.975 - type: map_at_1 value: 20.832 - type: map_at_10 value: 29.733999999999998 - type: map_at_100 value: 30.727 - type: map_at_1000 value: 30.843999999999998 - type: map_at_3 value: 26.834999999999997 - type: map_at_5 value: 28.555999999999997 - type: mrr_at_1 value: 22.921 - type: mrr_at_10 value: 31.791999999999998 - type: mrr_at_100 value: 32.666000000000004 - type: mrr_at_1000 value: 32.751999999999995 - type: mrr_at_3 value: 29.144 - type: mrr_at_5 value: 30.622 - type: ndcg_at_1 value: 22.921 - type: ndcg_at_10 value: 34.915 - type: ndcg_at_100 value: 39.744 - type: ndcg_at_1000 value: 42.407000000000004 - type: ndcg_at_3 value: 29.421000000000003 - type: ndcg_at_5 value: 32.211 - type: precision_at_1 value: 22.921 - type: precision_at_10 value: 5.675 - type: precision_at_100 value: 0.872 - type: precision_at_1000 value: 0.121 - type: precision_at_3 value: 12.753999999999998 - type: precision_at_5 value: 9.353 - type: recall_at_1 value: 20.832 - type: recall_at_10 value: 48.795 - type: recall_at_100 value: 70.703 - type: recall_at_1000 value: 90.187 - type: recall_at_3 value: 34.455000000000005 - type: recall_at_5 value: 40.967 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: climate-fever config: default split: test revision: None metrics: - type: map_at_1 value: 10.334 - type: map_at_10 value: 19.009999999999998 - type: map_at_100 value: 21.129 - type: map_at_1000 value: 21.328 - type: map_at_3 value: 15.152 - type: map_at_5 value: 17.084 - type: mrr_at_1 value: 23.453 - type: mrr_at_10 value: 36.099 - type: mrr_at_100 value: 37.069 - type: mrr_at_1000 value: 37.104 - type: mrr_at_3 value: 32.096000000000004 - type: mrr_at_5 value: 34.451 - type: ndcg_at_1 value: 23.453 - type: ndcg_at_10 value: 27.739000000000004 - type: ndcg_at_100 value: 35.836 - type: ndcg_at_1000 value: 39.242 - type: ndcg_at_3 value: 21.263 - type: ndcg_at_5 value: 23.677 - type: precision_at_1 value: 23.453 - type: precision_at_10 value: 9.199 - type: precision_at_100 value: 1.791 - type: precision_at_1000 value: 0.242 - type: precision_at_3 value: 16.2 - type: precision_at_5 value: 13.147 - type: recall_at_1 value: 10.334 - type: recall_at_10 value: 35.177 - type: recall_at_100 value: 63.009 - type: recall_at_1000 value: 81.938 - type: recall_at_3 value: 19.914 - type: recall_at_5 value: 26.077 - task: type: Retrieval dataset: name: MTEB DBPedia type: dbpedia-entity config: default split: test revision: None metrics: - type: map_at_1 value: 8.212 - type: map_at_10 value: 17.386 - type: map_at_100 value: 24.234 - type: map_at_1000 value: 25.724999999999998 - type: map_at_3 value: 12.727 - type: map_at_5 value: 14.785 - type: mrr_at_1 value: 59.25 - type: mrr_at_10 value: 68.687 - type: mrr_at_100 value: 69.133 - type: mrr_at_1000 value: 69.14099999999999 - type: mrr_at_3 value: 66.917 - type: mrr_at_5 value: 67.742 - type: ndcg_at_1 value: 48.625 - type: ndcg_at_10 value: 36.675999999999995 - type: ndcg_at_100 value: 41.543 - type: ndcg_at_1000 value: 49.241 - type: ndcg_at_3 value: 41.373 - type: ndcg_at_5 value: 38.707 - type: precision_at_1 value: 59.25 - type: precision_at_10 value: 28.525 - type: precision_at_100 value: 9.027000000000001 - type: precision_at_1000 value: 1.8339999999999999 - type: precision_at_3 value: 44.833 - type: precision_at_5 value: 37.35 - type: recall_at_1 value: 8.212 - type: recall_at_10 value: 23.188 - type: recall_at_100 value: 48.613 - type: recall_at_1000 value: 73.093 - type: recall_at_3 value: 14.419 - type: recall_at_5 value: 17.798 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 52.725 - type: f1 value: 46.50743309855908 - task: type: Retrieval dataset: name: MTEB FEVER type: fever config: default split: test revision: None metrics: - type: map_at_1 value: 55.086 - type: map_at_10 value: 66.914 - type: map_at_100 value: 67.321 - type: map_at_1000 value: 67.341 - type: map_at_3 value: 64.75800000000001 - type: map_at_5 value: 66.189 - type: mrr_at_1 value: 59.28600000000001 - type: mrr_at_10 value: 71.005 - type: mrr_at_100 value: 71.304 - type: mrr_at_1000 value: 71.313 - type: mrr_at_3 value: 69.037 - type: mrr_at_5 value: 70.35 - type: ndcg_at_1 value: 59.28600000000001 - type: ndcg_at_10 value: 72.695 - type: ndcg_at_100 value: 74.432 - type: ndcg_at_1000 value: 74.868 - type: ndcg_at_3 value: 68.72200000000001 - type: ndcg_at_5 value: 71.081 - type: precision_at_1 value: 59.28600000000001 - type: precision_at_10 value: 9.499 - type: precision_at_100 value: 1.052 - type: precision_at_1000 value: 0.11100000000000002 - type: precision_at_3 value: 27.503 - type: precision_at_5 value: 17.854999999999997 - type: recall_at_1 value: 55.086 - type: recall_at_10 value: 86.453 - type: recall_at_100 value: 94.028 - type: recall_at_1000 value: 97.052 - type: recall_at_3 value: 75.821 - type: recall_at_5 value: 81.6 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: fiqa config: default split: test revision: None metrics: - type: map_at_1 value: 22.262999999999998 - type: map_at_10 value: 37.488 - type: map_at_100 value: 39.498 - type: map_at_1000 value: 39.687 - type: map_at_3 value: 32.529 - type: map_at_5 value: 35.455 - type: mrr_at_1 value: 44.907000000000004 - type: mrr_at_10 value: 53.239000000000004 - type: mrr_at_100 value: 54.086 - type: mrr_at_1000 value: 54.122 - type: mrr_at_3 value: 51.235 - type: mrr_at_5 value: 52.415 - type: ndcg_at_1 value: 44.907000000000004 - type: ndcg_at_10 value: 45.446 - type: ndcg_at_100 value: 52.429 - type: ndcg_at_1000 value: 55.169000000000004 - type: ndcg_at_3 value: 41.882000000000005 - type: ndcg_at_5 value: 43.178 - type: precision_at_1 value: 44.907000000000004 - type: precision_at_10 value: 12.931999999999999 - type: precision_at_100 value: 2.025 - type: precision_at_1000 value: 0.248 - type: precision_at_3 value: 28.652 - type: precision_at_5 value: 21.204 - type: recall_at_1 value: 22.262999999999998 - type: recall_at_10 value: 52.447 - type: recall_at_100 value: 78.045 - type: recall_at_1000 value: 94.419 - type: recall_at_3 value: 38.064 - type: recall_at_5 value: 44.769 - task: type: Retrieval dataset: name: MTEB HotpotQA type: hotpotqa config: default split: test revision: None metrics: - type: map_at_1 value: 32.519 - type: map_at_10 value: 45.831 - type: map_at_100 value: 46.815 - type: map_at_1000 value: 46.899 - type: map_at_3 value: 42.836 - type: map_at_5 value: 44.65 - type: mrr_at_1 value: 65.037 - type: mrr_at_10 value: 72.16 - type: mrr_at_100 value: 72.51100000000001 - type: mrr_at_1000 value: 72.53 - type: mrr_at_3 value: 70.682 - type: mrr_at_5 value: 71.54599999999999 - type: ndcg_at_1 value: 65.037 - type: ndcg_at_10 value: 55.17999999999999 - type: ndcg_at_100 value: 58.888 - type: ndcg_at_1000 value: 60.648 - type: ndcg_at_3 value: 50.501 - type: ndcg_at_5 value: 52.977 - type: precision_at_1 value: 65.037 - type: precision_at_10 value: 11.530999999999999 - type: precision_at_100 value: 1.4460000000000002 - type: precision_at_1000 value: 0.168 - type: precision_at_3 value: 31.483 - type: precision_at_5 value: 20.845 - type: recall_at_1 value: 32.519 - type: recall_at_10 value: 57.657000000000004 - type: recall_at_100 value: 72.30199999999999 - type: recall_at_1000 value: 84.024 - type: recall_at_3 value: 47.225 - type: recall_at_5 value: 52.113 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 88.3168 - type: ap value: 83.80165516037135 - type: f1 value: 88.29942471066407 - task: type: Retrieval dataset: name: MTEB MSMARCO type: msmarco config: default split: dev revision: None metrics: - type: map_at_1 value: 20.724999999999998 - type: map_at_10 value: 32.736 - type: map_at_100 value: 33.938 - type: map_at_1000 value: 33.991 - type: map_at_3 value: 28.788000000000004 - type: map_at_5 value: 31.016 - type: mrr_at_1 value: 21.361 - type: mrr_at_10 value: 33.323 - type: mrr_at_100 value: 34.471000000000004 - type: mrr_at_1000 value: 34.518 - type: mrr_at_3 value: 29.453000000000003 - type: mrr_at_5 value: 31.629 - type: ndcg_at_1 value: 21.361 - type: ndcg_at_10 value: 39.649 - type: ndcg_at_100 value: 45.481 - type: ndcg_at_1000 value: 46.775 - type: ndcg_at_3 value: 31.594 - type: ndcg_at_5 value: 35.543 - type: precision_at_1 value: 21.361 - type: precision_at_10 value: 6.3740000000000006 - type: precision_at_100 value: 0.931 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 13.514999999999999 - type: precision_at_5 value: 10.100000000000001 - type: recall_at_1 value: 20.724999999999998 - type: recall_at_10 value: 61.034 - type: recall_at_100 value: 88.062 - type: recall_at_1000 value: 97.86399999999999 - type: recall_at_3 value: 39.072 - type: recall_at_5 value: 48.53 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 93.8919288645691 - type: f1 value: 93.57059586398059 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 67.97993616051072 - type: f1 value: 48.244319183606535 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 68.90047074646941 - type: f1 value: 66.48999056063725 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.34566240753195 - type: f1 value: 73.54164154290658 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 34.21866934757011 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 32.000936217235534 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 31.68189362520352 - type: mrr value: 32.69603637784303 - task: type: Retrieval dataset: name: MTEB NFCorpus type: nfcorpus config: default split: test revision: None metrics: - type: map_at_1 value: 6.078 - type: map_at_10 value: 12.671 - type: map_at_100 value: 16.291 - type: map_at_1000 value: 17.855999999999998 - type: map_at_3 value: 9.610000000000001 - type: map_at_5 value: 11.152 - type: mrr_at_1 value: 43.963 - type: mrr_at_10 value: 53.173 - type: mrr_at_100 value: 53.718999999999994 - type: mrr_at_1000 value: 53.756 - type: mrr_at_3 value: 50.980000000000004 - type: mrr_at_5 value: 52.42 - type: ndcg_at_1 value: 42.415000000000006 - type: ndcg_at_10 value: 34.086 - type: ndcg_at_100 value: 32.545 - type: ndcg_at_1000 value: 41.144999999999996 - type: ndcg_at_3 value: 39.434999999999995 - type: ndcg_at_5 value: 37.888 - type: precision_at_1 value: 43.653 - type: precision_at_10 value: 25.014999999999997 - type: precision_at_100 value: 8.594 - type: precision_at_1000 value: 2.169 - type: precision_at_3 value: 37.049 - type: precision_at_5 value: 33.065 - type: recall_at_1 value: 6.078 - type: recall_at_10 value: 16.17 - type: recall_at_100 value: 34.512 - type: recall_at_1000 value: 65.447 - type: recall_at_3 value: 10.706 - type: recall_at_5 value: 13.158 - task: type: Retrieval dataset: name: MTEB NQ type: nq config: default split: test revision: None metrics: - type: map_at_1 value: 27.378000000000004 - type: map_at_10 value: 42.178 - type: map_at_100 value: 43.32 - type: map_at_1000 value: 43.358000000000004 - type: map_at_3 value: 37.474000000000004 - type: map_at_5 value: 40.333000000000006 - type: mrr_at_1 value: 30.823 - type: mrr_at_10 value: 44.626 - type: mrr_at_100 value: 45.494 - type: mrr_at_1000 value: 45.519 - type: mrr_at_3 value: 40.585 - type: mrr_at_5 value: 43.146 - type: ndcg_at_1 value: 30.794 - type: ndcg_at_10 value: 50.099000000000004 - type: ndcg_at_100 value: 54.900999999999996 - type: ndcg_at_1000 value: 55.69499999999999 - type: ndcg_at_3 value: 41.238 - type: ndcg_at_5 value: 46.081 - type: precision_at_1 value: 30.794 - type: precision_at_10 value: 8.549 - type: precision_at_100 value: 1.124 - type: precision_at_1000 value: 0.12 - type: precision_at_3 value: 18.926000000000002 - type: precision_at_5 value: 14.16 - type: recall_at_1 value: 27.378000000000004 - type: recall_at_10 value: 71.842 - type: recall_at_100 value: 92.565 - type: recall_at_1000 value: 98.402 - type: recall_at_3 value: 49.053999999999995 - type: recall_at_5 value: 60.207 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: quora config: default split: test revision: None metrics: - type: map_at_1 value: 70.557 - type: map_at_10 value: 84.729 - type: map_at_100 value: 85.369 - type: map_at_1000 value: 85.382 - type: map_at_3 value: 81.72 - type: map_at_5 value: 83.613 - type: mrr_at_1 value: 81.3 - type: mrr_at_10 value: 87.488 - type: mrr_at_100 value: 87.588 - type: mrr_at_1000 value: 87.589 - type: mrr_at_3 value: 86.53 - type: mrr_at_5 value: 87.18599999999999 - type: ndcg_at_1 value: 81.28999999999999 - type: ndcg_at_10 value: 88.442 - type: ndcg_at_100 value: 89.637 - type: ndcg_at_1000 value: 89.70700000000001 - type: ndcg_at_3 value: 85.55199999999999 - type: ndcg_at_5 value: 87.154 - type: precision_at_1 value: 81.28999999999999 - type: precision_at_10 value: 13.489999999999998 - type: precision_at_100 value: 1.54 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 37.553 - type: precision_at_5 value: 24.708 - type: recall_at_1 value: 70.557 - type: recall_at_10 value: 95.645 - type: recall_at_100 value: 99.693 - type: recall_at_1000 value: 99.995 - type: recall_at_3 value: 87.359 - type: recall_at_5 value: 91.89699999999999 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 63.65060114776209 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 64.63271250680617 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: scidocs config: default split: test revision: None metrics: - type: map_at_1 value: 4.263 - type: map_at_10 value: 10.801 - type: map_at_100 value: 12.888 - type: map_at_1000 value: 13.224 - type: map_at_3 value: 7.362 - type: map_at_5 value: 9.149000000000001 - type: mrr_at_1 value: 21 - type: mrr_at_10 value: 31.416 - type: mrr_at_100 value: 32.513 - type: mrr_at_1000 value: 32.58 - type: mrr_at_3 value: 28.116999999999997 - type: mrr_at_5 value: 29.976999999999997 - type: ndcg_at_1 value: 21 - type: ndcg_at_10 value: 18.551000000000002 - type: ndcg_at_100 value: 26.657999999999998 - type: ndcg_at_1000 value: 32.485 - type: ndcg_at_3 value: 16.834 - type: ndcg_at_5 value: 15.204999999999998 - type: precision_at_1 value: 21 - type: precision_at_10 value: 9.84 - type: precision_at_100 value: 2.16 - type: precision_at_1000 value: 0.35500000000000004 - type: precision_at_3 value: 15.667 - type: precision_at_5 value: 13.62 - type: recall_at_1 value: 4.263 - type: recall_at_10 value: 19.922 - type: recall_at_100 value: 43.808 - type: recall_at_1000 value: 72.14500000000001 - type: recall_at_3 value: 9.493 - type: recall_at_5 value: 13.767999999999999 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_spearman value: 81.27446313317233 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_spearman value: 76.27963301217527 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_spearman value: 88.18495048450949 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_spearman value: 81.91982338692046 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_spearman value: 89.00896818385291 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_spearman value: 85.48814644586132 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_spearman value: 90.30116926966582 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_spearman value: 67.74132963032342 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_spearman value: 86.87741355780479 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 82.0019012295875 - type: mrr value: 94.70267024188593 - task: type: Retrieval dataset: name: MTEB SciFact type: scifact config: default split: test revision: None metrics: - type: map_at_1 value: 50.05 - type: map_at_10 value: 59.36 - type: map_at_100 value: 59.967999999999996 - type: map_at_1000 value: 60.023 - type: map_at_3 value: 56.515 - type: map_at_5 value: 58.272999999999996 - type: mrr_at_1 value: 53 - type: mrr_at_10 value: 61.102000000000004 - type: mrr_at_100 value: 61.476 - type: mrr_at_1000 value: 61.523 - type: mrr_at_3 value: 58.778 - type: mrr_at_5 value: 60.128 - type: ndcg_at_1 value: 53 - type: ndcg_at_10 value: 64.43100000000001 - type: ndcg_at_100 value: 66.73599999999999 - type: ndcg_at_1000 value: 68.027 - type: ndcg_at_3 value: 59.279 - type: ndcg_at_5 value: 61.888 - type: precision_at_1 value: 53 - type: precision_at_10 value: 8.767 - type: precision_at_100 value: 1.01 - type: precision_at_1000 value: 0.11100000000000002 - type: precision_at_3 value: 23.444000000000003 - type: precision_at_5 value: 15.667 - type: recall_at_1 value: 50.05 - type: recall_at_10 value: 78.511 - type: recall_at_100 value: 88.5 - type: recall_at_1000 value: 98.333 - type: recall_at_3 value: 64.117 - type: recall_at_5 value: 70.867 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.72178217821782 - type: cos_sim_ap value: 93.0728601593541 - type: cos_sim_f1 value: 85.6727976766699 - type: cos_sim_precision value: 83.02063789868667 - type: cos_sim_recall value: 88.5 - type: dot_accuracy value: 99.72178217821782 - type: dot_ap value: 93.07287396168348 - type: dot_f1 value: 85.6727976766699 - type: dot_precision value: 83.02063789868667 - type: dot_recall value: 88.5 - type: euclidean_accuracy value: 99.72178217821782 - type: euclidean_ap value: 93.07285657982895 - type: euclidean_f1 value: 85.6727976766699 - type: euclidean_precision value: 83.02063789868667 - type: euclidean_recall value: 88.5 - type: manhattan_accuracy value: 99.72475247524753 - type: manhattan_ap value: 93.02792973059809 - type: manhattan_f1 value: 85.7727737973388 - type: manhattan_precision value: 87.84067085953879 - type: manhattan_recall value: 83.8 - type: max_accuracy value: 99.72475247524753 - type: max_ap value: 93.07287396168348 - type: max_f1 value: 85.7727737973388 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 68.77583615550819 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 36.151636938606956 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 52.16607939471187 - type: mrr value: 52.95172046091163 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 31.314646669495666 - type: cos_sim_spearman value: 31.83562491439455 - type: dot_pearson value: 31.314590842874157 - type: dot_spearman value: 31.83363065810437 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: trec-covid config: default split: test revision: None metrics: - type: map_at_1 value: 0.198 - type: map_at_10 value: 1.3010000000000002 - type: map_at_100 value: 7.2139999999999995 - type: map_at_1000 value: 20.179 - type: map_at_3 value: 0.528 - type: map_at_5 value: 0.8019999999999999 - type: mrr_at_1 value: 72 - type: mrr_at_10 value: 83.39999999999999 - type: mrr_at_100 value: 83.39999999999999 - type: mrr_at_1000 value: 83.39999999999999 - type: mrr_at_3 value: 81.667 - type: mrr_at_5 value: 83.06700000000001 - type: ndcg_at_1 value: 66 - type: ndcg_at_10 value: 58.059000000000005 - type: ndcg_at_100 value: 44.316 - type: ndcg_at_1000 value: 43.147000000000006 - type: ndcg_at_3 value: 63.815999999999995 - type: ndcg_at_5 value: 63.005 - type: precision_at_1 value: 72 - type: precision_at_10 value: 61.4 - type: precision_at_100 value: 45.62 - type: precision_at_1000 value: 19.866 - type: precision_at_3 value: 70 - type: precision_at_5 value: 68.8 - type: recall_at_1 value: 0.198 - type: recall_at_10 value: 1.517 - type: recall_at_100 value: 10.587 - type: recall_at_1000 value: 41.233 - type: recall_at_3 value: 0.573 - type: recall_at_5 value: 0.907 - task: type: Retrieval dataset: name: MTEB Touche2020 type: webis-touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 1.894 - type: map_at_10 value: 8.488999999999999 - type: map_at_100 value: 14.445 - type: map_at_1000 value: 16.078 - type: map_at_3 value: 4.589 - type: map_at_5 value: 6.019 - type: mrr_at_1 value: 22.448999999999998 - type: mrr_at_10 value: 39.82 - type: mrr_at_100 value: 40.752 - type: mrr_at_1000 value: 40.771 - type: mrr_at_3 value: 34.354 - type: mrr_at_5 value: 37.721 - type: ndcg_at_1 value: 19.387999999999998 - type: ndcg_at_10 value: 21.563 - type: ndcg_at_100 value: 33.857 - type: ndcg_at_1000 value: 46.199 - type: ndcg_at_3 value: 22.296 - type: ndcg_at_5 value: 21.770999999999997 - type: precision_at_1 value: 22.448999999999998 - type: precision_at_10 value: 19.796 - type: precision_at_100 value: 7.142999999999999 - type: precision_at_1000 value: 1.541 - type: precision_at_3 value: 24.490000000000002 - type: precision_at_5 value: 22.448999999999998 - type: recall_at_1 value: 1.894 - type: recall_at_10 value: 14.931 - type: recall_at_100 value: 45.524 - type: recall_at_1000 value: 83.243 - type: recall_at_3 value: 5.712 - type: recall_at_5 value: 8.386000000000001 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 71.049 - type: ap value: 13.85116971310922 - type: f1 value: 54.37504302487686 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 64.1312959818902 - type: f1 value: 64.11413877009383 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 54.13103431861502 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 87.327889372355 - type: cos_sim_ap value: 77.42059895975699 - type: cos_sim_f1 value: 71.02706903250873 - type: cos_sim_precision value: 69.75324344950394 - type: cos_sim_recall value: 72.34828496042216 - type: dot_accuracy value: 87.327889372355 - type: dot_ap value: 77.4209479346677 - type: dot_f1 value: 71.02706903250873 - type: dot_precision value: 69.75324344950394 - type: dot_recall value: 72.34828496042216 - type: euclidean_accuracy value: 87.327889372355 - type: euclidean_ap value: 77.42096495861037 - type: euclidean_f1 value: 71.02706903250873 - type: euclidean_precision value: 69.75324344950394 - type: euclidean_recall value: 72.34828496042216 - type: manhattan_accuracy value: 87.31000774870358 - type: manhattan_ap value: 77.38930750711619 - type: manhattan_f1 value: 71.07935314027831 - type: manhattan_precision value: 67.70957726295677 - type: manhattan_recall value: 74.80211081794195 - type: max_accuracy value: 87.327889372355 - type: max_ap value: 77.42096495861037 - type: max_f1 value: 71.07935314027831 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 89.58939729110878 - type: cos_sim_ap value: 87.17594155025475 - type: cos_sim_f1 value: 79.21146953405018 - type: cos_sim_precision value: 76.8918527109307 - type: cos_sim_recall value: 81.67539267015707 - type: dot_accuracy value: 89.58939729110878 - type: dot_ap value: 87.17593963273593 - type: dot_f1 value: 79.21146953405018 - type: dot_precision value: 76.8918527109307 - type: dot_recall value: 81.67539267015707 - type: euclidean_accuracy value: 89.58939729110878 - type: euclidean_ap value: 87.17592466925834 - type: euclidean_f1 value: 79.21146953405018 - type: euclidean_precision value: 76.8918527109307 - type: euclidean_recall value: 81.67539267015707 - type: manhattan_accuracy value: 89.62626615438352 - type: manhattan_ap value: 87.16589873161546 - type: manhattan_f1 value: 79.25143598295348 - type: manhattan_precision value: 76.39494177323712 - type: manhattan_recall value: 82.32984293193716 - type: max_accuracy value: 89.62626615438352 - type: max_ap value: 87.17594155025475 - type: max_f1 value: 79.25143598295348 --- # hkunlp/instructor-large We introduce **Instructor**👨‍🏫, an instruction-finetuned text embedding model that can generate text embeddings tailored to any task (e.g., classification, retrieval, clustering, text evaluation, etc.) and domains (e.g., science, finance, etc.) ***by simply providing the task instruction, without any finetuning***. Instructor👨‍ achieves sota on 70 diverse embedding tasks ([MTEB leaderboard](https://huggingface.co/spaces/mteb/leaderboard))! The model is easy to use with **our customized** `sentence-transformer` library. For more details, check out [our paper](https://arxiv.org/abs/2212.09741) and [project page](https://instructor-embedding.github.io/)! **************************** **Updates** **************************** * 12/28: We released a new [checkpoint](https://huggingface.co/hkunlp/instructor-large) trained with hard negatives, which gives better performance. * 12/21: We released our [paper](https://arxiv.org/abs/2212.09741), [code](https://github.com/HKUNLP/instructor-embedding), [checkpoint](https://huggingface.co/hkunlp/instructor-large) and [project page](https://instructor-embedding.github.io/)! Check them out! ## Quick start <hr /> ## Installation ```bash pip install InstructorEmbedding ``` ## Compute your customized embeddings Then you can use the model like this to calculate domain-specific and task-aware embeddings: ```python from InstructorEmbedding import INSTRUCTOR model = INSTRUCTOR('hkunlp/instructor-large') sentence = "3D ActionSLAM: wearable person tracking in multi-floor environments" instruction = "Represent the Science title:" embeddings = model.encode([[instruction,sentence]]) print(embeddings) ``` ## Use cases <hr /> ## Calculate embeddings for your customized texts If you want to calculate customized embeddings for specific sentences, you may follow the unified template to write instructions: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Represent the `domain` `text_type` for `task_objective`: * `domain` is optional, and it specifies the domain of the text, e.g., science, finance, medicine, etc. * `text_type` is required, and it specifies the encoding unit, e.g., sentence, document, paragraph, etc. * `task_objective` is optional, and it specifies the objective of embedding, e.g., retrieve a document, classify the sentence, etc. ## Calculate Sentence similarities You can further use the model to compute similarities between two groups of sentences, with **customized embeddings**. ```python from sklearn.metrics.pairwise import cosine_similarity sentences_a = [['Represent the Science sentence: ','Parton energy loss in QCD matter'], ['Represent the Financial statement: ','The Federal Reserve on Wednesday raised its benchmark interest rate.']] sentences_b = [['Represent the Science sentence: ','The Chiral Phase Transition in Dissipative Dynamics'], ['Represent the Financial statement: ','The funds rose less than 0.5 per cent on Friday']] embeddings_a = model.encode(sentences_a) embeddings_b = model.encode(sentences_b) similarities = cosine_similarity(embeddings_a,embeddings_b) print(similarities) ``` ## Information Retrieval You can also use **customized embeddings** for information retrieval. ```python import numpy as np from sklearn.metrics.pairwise import cosine_similarity query = [['Represent the Wikipedia question for retrieving supporting documents: ','where is the food stored in a yam plant']] corpus = [['Represent the Wikipedia document for retrieval: ','Capitalism has been dominant in the Western world since the end of feudalism, but most feel[who?] that the term "mixed economies" more precisely describes most contemporary economies, due to their containing both private-owned and state-owned enterprises. In capitalism, prices determine the demand-supply scale. For example, higher demand for certain goods and services lead to higher prices and lower demand for certain goods lead to lower prices.'], ['Represent the Wikipedia document for retrieval: ',"The disparate impact theory is especially controversial under the Fair Housing Act because the Act regulates many activities relating to housing, insurance, and mortgage loans—and some scholars have argued that the theory's use under the Fair Housing Act, combined with extensions of the Community Reinvestment Act, contributed to rise of sub-prime lending and the crash of the U.S. housing market and ensuing global economic recession"], ['Represent the Wikipedia document for retrieval: ','Disparate impact in United States labor law refers to practices in employment, housing, and other areas that adversely affect one group of people of a protected characteristic more than another, even though rules applied by employers or landlords are formally neutral. Although the protected classes vary by statute, most federal civil rights laws protect based on race, color, religion, national origin, and sex as protected traits, and some laws include disability status and other traits as well.']] query_embeddings = model.encode(query) corpus_embeddings = model.encode(corpus) similarities = cosine_similarity(query_embeddings,corpus_embeddings) retrieved_doc_id = np.argmax(similarities) print(retrieved_doc_id) ``` ## Clustering Use **customized embeddings** for clustering texts in groups. ```python import sklearn.cluster sentences = [['Represent the Medicine sentence for clustering: ','Dynamical Scalar Degree of Freedom in Horava-Lifshitz Gravity'], ['Represent the Medicine sentence for clustering: ','Comparison of Atmospheric Neutrino Flux Calculations at Low Energies'], ['Represent the Medicine sentence for clustering: ','Fermion Bags in the Massive Gross-Neveu Model'], ['Represent the Medicine sentence for clustering: ',"QCD corrections to Associated t-tbar-H production at the Tevatron"], ['Represent the Medicine sentence for clustering: ','A New Analysis of the R Measurements: Resonance Parameters of the Higher, Vector States of Charmonium']] embeddings = model.encode(sentences) clustering_model = sklearn.cluster.MiniBatchKMeans(n_clusters=2) clustering_model.fit(embeddings) cluster_assignment = clustering_model.labels_ print(cluster_assignment) ```
[ "BIOSSES", "SCIFACT" ]
Panchovix/WizardLM-Uncensored-SuperCOT-StoryTelling-30b-SuperHOT-8k
Panchovix
text-generation
[ "transformers", "pytorch", "llama", "text-generation", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2023-06-26T20:36:56Z
2023-07-04T21:39:29+00:00
13
2
--- license: other --- [WizardLM-Uncensored-SuperCOT-StoryTelling-30b](https://huggingface.co/Monero/WizardLM-Uncensored-SuperCOT-StoryTelling-30b) merged with kaiokendev's [33b SuperHOT 8k LoRA](https://huggingface.co/kaiokendev/superhot-30b-8k-no-rlhf-test), without quant. (Full FP16 model)
[ "MONERO" ]
Jumtra/calm-v3-ep1
Jumtra
text-generation
[ "transformers", "pytorch", "gpt_neox", "text-generation", "ja", "lm", "nlp", "dataset:kunishou/databricks-dolly-15k-ja", "dataset:kunishou/hh-rlhf-49k-ja", "dataset:Jumtra/oasst1_ja", "dataset:Jumtra/jglue_jnli", "dataset:Jumtra/jglue_jsquad", "dataset:Jumtra/jglue_jsquads_with_input", "license:cc-by-sa-4.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
2023-06-26T23:57:35Z
2023-06-27T13:22:19+00:00
13
0
--- datasets: - kunishou/databricks-dolly-15k-ja - kunishou/hh-rlhf-49k-ja - Jumtra/oasst1_ja - Jumtra/jglue_jnli - Jumtra/jglue_jsquad - Jumtra/jglue_jsquads_with_input language: - ja license: cc-by-sa-4.0 tags: - ja - gpt_neox - text-generation - lm - nlp inference: false --- # open-calm-7b このモデルは、MosaicMLのllm-foundryリポジトリを使用して[Jumtra/calm-7b-tune-ep4](https://huggingface.co/Jumtra/calm-7b-tune-ep4)をファインチューニングしたモデルです。 ## Model Date June 28, 2023 ## Model License cc-by-sa-4.0 ## 評価 [Jumtra/test_data_100QA](https://huggingface.co/datasets/Jumtra/test_data_100QA)を用いてモデルの正答率を評価した また、学習時のvalidateデータに対してのPerplexityを記載した。 | model name | 正答率 | Perplexity | | ---- | ---- | ---- | | [Jumtra/rinna-3.6b-tune-ep5](https://huggingface.co/Jumtra/rinna-3.6b-tune-ep5)| 40/100 | 8.105 | | [Jumtra/rinna-v1-tune-ep1](https://huggingface.co/Jumtra/rinna-v1-tune-ep1) | 42/100 | 7.458 | | [Jumtra/rinna-v1-tune-ep3](https://huggingface.co/Jumtra/rinna-v1-tune-ep3) | 41/100 | 7.034 | | [Jumtra/calm-7b-tune-ep4](https://huggingface.co/Jumtra/calm-7b-tune-ep4) | 40/100 | 9.766 | | [Jumtra/calm-v3-ep1](https://huggingface.co/Jumtra/calm-v3-ep1) | 35/100 | 9.305 | | [Jumtra/calm-v3-ep3](https://huggingface.co/Jumtra/calm-v3-ep3) | 37/100 | 13.276 | 以下のプロンプトを用いた ```python INSTRUCTION_KEY = "### 入力:" RESPONSE_KEY = "### 回答:" INTRO_BLURB = "以下はタスクを説明する指示と文脈のある文章が含まれた入力です。要求を適切に満たす回答を生成しなさい。" JP_PROMPT_FOR_GENERATION_FORMAT = """{intro} {instruction_key} {instruction} {response_key} """.format( intro=INTRO_BLURB, instruction_key=INSTRUCTION_KEY, instruction="{instruction}", response_key=RESPONSE_KEY, ) ```
[ "BLURB" ]
anttip/ct2fast-e5-small-v2-hfie
anttip
feature-extraction
[ "transformers", "bert", "feature-extraction", "ctranslate2", "int8", "float16", "mteb", "en", "arxiv:2212.03533", "arxiv:2104.08663", "arxiv:2210.07316", "license:mit", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2023-07-07T19:30:13Z
2023-07-07T20:04:37+00:00
13
2
--- language: - en license: mit tags: - ctranslate2 - int8 - float16 - mteb duplicated_from: michaelfeil/ct2fast-e5-small-v2 model-index: - name: e5-small-v2 results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 77.59701492537313 - type: ap value: 41.67064885731708 - type: f1 value: 71.86465946398573 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 91.265875 - type: ap value: 87.67633085349644 - type: f1 value: 91.24297521425744 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 45.882000000000005 - type: f1 value: 45.08058870381236 - task: type: Retrieval dataset: name: MTEB ArguAna type: arguana config: default split: test revision: None metrics: - type: map_at_1 value: 20.697 - type: map_at_10 value: 33.975 - type: map_at_100 value: 35.223 - type: map_at_1000 value: 35.260000000000005 - type: map_at_3 value: 29.776999999999997 - type: map_at_5 value: 32.035000000000004 - type: mrr_at_1 value: 20.982 - type: mrr_at_10 value: 34.094 - type: mrr_at_100 value: 35.343 - type: mrr_at_1000 value: 35.38 - type: mrr_at_3 value: 29.884 - type: mrr_at_5 value: 32.141999999999996 - type: ndcg_at_1 value: 20.697 - type: ndcg_at_10 value: 41.668 - type: ndcg_at_100 value: 47.397 - type: ndcg_at_1000 value: 48.305 - type: ndcg_at_3 value: 32.928000000000004 - type: ndcg_at_5 value: 36.998999999999995 - type: precision_at_1 value: 20.697 - type: precision_at_10 value: 6.636 - type: precision_at_100 value: 0.924 - type: precision_at_1000 value: 0.099 - type: precision_at_3 value: 14.035 - type: precision_at_5 value: 10.398 - type: recall_at_1 value: 20.697 - type: recall_at_10 value: 66.35799999999999 - type: recall_at_100 value: 92.39 - type: recall_at_1000 value: 99.36 - type: recall_at_3 value: 42.105 - type: recall_at_5 value: 51.991 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 42.1169517447068 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 34.79553720107097 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 58.10811337308168 - type: mrr value: 71.56410763751482 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 78.46834918248696 - type: cos_sim_spearman value: 79.4289182755206 - type: euclidean_pearson value: 76.26662973727008 - type: euclidean_spearman value: 78.11744260952536 - type: manhattan_pearson value: 76.08175262609434 - type: manhattan_spearman value: 78.29395265552289 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 81.63636363636364 - type: f1 value: 81.55779952376953 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 35.88541137137571 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 30.05205685274407 - task: type: Retrieval dataset: name: MTEB CQADupstackAndroidRetrieval type: BeIR/cqadupstack config: default split: test revision: None metrics: - type: map_at_1 value: 30.293999999999997 - type: map_at_10 value: 39.876 - type: map_at_100 value: 41.315000000000005 - type: map_at_1000 value: 41.451 - type: map_at_3 value: 37.194 - type: map_at_5 value: 38.728 - type: mrr_at_1 value: 37.053000000000004 - type: mrr_at_10 value: 45.281 - type: mrr_at_100 value: 46.188 - type: mrr_at_1000 value: 46.245999999999995 - type: mrr_at_3 value: 43.228 - type: mrr_at_5 value: 44.366 - type: ndcg_at_1 value: 37.053000000000004 - type: ndcg_at_10 value: 45.086 - type: ndcg_at_100 value: 50.756 - type: ndcg_at_1000 value: 53.123 - type: ndcg_at_3 value: 41.416 - type: ndcg_at_5 value: 43.098 - type: precision_at_1 value: 37.053000000000004 - type: precision_at_10 value: 8.34 - type: precision_at_100 value: 1.346 - type: precision_at_1000 value: 0.186 - type: precision_at_3 value: 19.647000000000002 - type: precision_at_5 value: 13.877 - type: recall_at_1 value: 30.293999999999997 - type: recall_at_10 value: 54.309 - type: recall_at_100 value: 78.59 - type: recall_at_1000 value: 93.82300000000001 - type: recall_at_3 value: 43.168 - type: recall_at_5 value: 48.192 - type: map_at_1 value: 28.738000000000003 - type: map_at_10 value: 36.925999999999995 - type: map_at_100 value: 38.017 - type: map_at_1000 value: 38.144 - type: map_at_3 value: 34.446 - type: map_at_5 value: 35.704 - type: mrr_at_1 value: 35.478 - type: mrr_at_10 value: 42.786 - type: mrr_at_100 value: 43.458999999999996 - type: mrr_at_1000 value: 43.507 - type: mrr_at_3 value: 40.648 - type: mrr_at_5 value: 41.804 - type: ndcg_at_1 value: 35.478 - type: ndcg_at_10 value: 42.044 - type: ndcg_at_100 value: 46.249 - type: ndcg_at_1000 value: 48.44 - type: ndcg_at_3 value: 38.314 - type: ndcg_at_5 value: 39.798 - type: precision_at_1 value: 35.478 - type: precision_at_10 value: 7.764 - type: precision_at_100 value: 1.253 - type: precision_at_1000 value: 0.174 - type: precision_at_3 value: 18.047 - type: precision_at_5 value: 12.637 - type: recall_at_1 value: 28.738000000000003 - type: recall_at_10 value: 50.659 - type: recall_at_100 value: 68.76299999999999 - type: recall_at_1000 value: 82.811 - type: recall_at_3 value: 39.536 - type: recall_at_5 value: 43.763999999999996 - type: map_at_1 value: 38.565 - type: map_at_10 value: 50.168 - type: map_at_100 value: 51.11 - type: map_at_1000 value: 51.173 - type: map_at_3 value: 47.044000000000004 - type: map_at_5 value: 48.838 - type: mrr_at_1 value: 44.201 - type: mrr_at_10 value: 53.596999999999994 - type: mrr_at_100 value: 54.211 - type: mrr_at_1000 value: 54.247 - type: mrr_at_3 value: 51.202000000000005 - type: mrr_at_5 value: 52.608999999999995 - type: ndcg_at_1 value: 44.201 - type: ndcg_at_10 value: 55.694 - type: ndcg_at_100 value: 59.518 - type: ndcg_at_1000 value: 60.907 - type: ndcg_at_3 value: 50.395999999999994 - type: ndcg_at_5 value: 53.022999999999996 - type: precision_at_1 value: 44.201 - type: precision_at_10 value: 8.84 - type: precision_at_100 value: 1.162 - type: precision_at_1000 value: 0.133 - type: precision_at_3 value: 22.153 - type: precision_at_5 value: 15.260000000000002 - type: recall_at_1 value: 38.565 - type: recall_at_10 value: 68.65 - type: recall_at_100 value: 85.37400000000001 - type: recall_at_1000 value: 95.37400000000001 - type: recall_at_3 value: 54.645999999999994 - type: recall_at_5 value: 60.958 - type: map_at_1 value: 23.945 - type: map_at_10 value: 30.641000000000002 - type: map_at_100 value: 31.599 - type: map_at_1000 value: 31.691000000000003 - type: map_at_3 value: 28.405 - type: map_at_5 value: 29.704000000000004 - type: mrr_at_1 value: 25.537 - type: mrr_at_10 value: 32.22 - type: mrr_at_100 value: 33.138 - type: mrr_at_1000 value: 33.214 - type: mrr_at_3 value: 30.151 - type: mrr_at_5 value: 31.298 - type: ndcg_at_1 value: 25.537 - type: ndcg_at_10 value: 34.638000000000005 - type: ndcg_at_100 value: 39.486 - type: ndcg_at_1000 value: 41.936 - type: ndcg_at_3 value: 30.333 - type: ndcg_at_5 value: 32.482 - type: precision_at_1 value: 25.537 - type: precision_at_10 value: 5.153 - type: precision_at_100 value: 0.7929999999999999 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 12.429 - type: precision_at_5 value: 8.723 - type: recall_at_1 value: 23.945 - type: recall_at_10 value: 45.412 - type: recall_at_100 value: 67.836 - type: recall_at_1000 value: 86.467 - type: recall_at_3 value: 34.031 - type: recall_at_5 value: 39.039 - type: map_at_1 value: 14.419 - type: map_at_10 value: 20.858999999999998 - type: map_at_100 value: 22.067999999999998 - type: map_at_1000 value: 22.192 - type: map_at_3 value: 18.673000000000002 - type: map_at_5 value: 19.968 - type: mrr_at_1 value: 17.785999999999998 - type: mrr_at_10 value: 24.878 - type: mrr_at_100 value: 26.021 - type: mrr_at_1000 value: 26.095000000000002 - type: mrr_at_3 value: 22.616 - type: mrr_at_5 value: 23.785 - type: ndcg_at_1 value: 17.785999999999998 - type: ndcg_at_10 value: 25.153 - type: ndcg_at_100 value: 31.05 - type: ndcg_at_1000 value: 34.052 - type: ndcg_at_3 value: 21.117 - type: ndcg_at_5 value: 23.048 - type: precision_at_1 value: 17.785999999999998 - type: precision_at_10 value: 4.590000000000001 - type: precision_at_100 value: 0.864 - type: precision_at_1000 value: 0.125 - type: precision_at_3 value: 9.908999999999999 - type: precision_at_5 value: 7.313 - type: recall_at_1 value: 14.419 - type: recall_at_10 value: 34.477999999999994 - type: recall_at_100 value: 60.02499999999999 - type: recall_at_1000 value: 81.646 - type: recall_at_3 value: 23.515 - type: recall_at_5 value: 28.266999999999996 - type: map_at_1 value: 26.268 - type: map_at_10 value: 35.114000000000004 - type: map_at_100 value: 36.212 - type: map_at_1000 value: 36.333 - type: map_at_3 value: 32.436 - type: map_at_5 value: 33.992 - type: mrr_at_1 value: 31.761 - type: mrr_at_10 value: 40.355999999999995 - type: mrr_at_100 value: 41.125 - type: mrr_at_1000 value: 41.186 - type: mrr_at_3 value: 37.937 - type: mrr_at_5 value: 39.463 - type: ndcg_at_1 value: 31.761 - type: ndcg_at_10 value: 40.422000000000004 - type: ndcg_at_100 value: 45.458999999999996 - type: ndcg_at_1000 value: 47.951 - type: ndcg_at_3 value: 35.972 - type: ndcg_at_5 value: 38.272 - type: precision_at_1 value: 31.761 - type: precision_at_10 value: 7.103 - type: precision_at_100 value: 1.133 - type: precision_at_1000 value: 0.152 - type: precision_at_3 value: 16.779 - type: precision_at_5 value: 11.877 - type: recall_at_1 value: 26.268 - type: recall_at_10 value: 51.053000000000004 - type: recall_at_100 value: 72.702 - type: recall_at_1000 value: 89.521 - type: recall_at_3 value: 38.619 - type: recall_at_5 value: 44.671 - type: map_at_1 value: 25.230999999999998 - type: map_at_10 value: 34.227000000000004 - type: map_at_100 value: 35.370000000000005 - type: map_at_1000 value: 35.488 - type: map_at_3 value: 31.496000000000002 - type: map_at_5 value: 33.034 - type: mrr_at_1 value: 30.822 - type: mrr_at_10 value: 39.045 - type: mrr_at_100 value: 39.809 - type: mrr_at_1000 value: 39.873 - type: mrr_at_3 value: 36.663000000000004 - type: mrr_at_5 value: 37.964 - type: ndcg_at_1 value: 30.822 - type: ndcg_at_10 value: 39.472 - type: ndcg_at_100 value: 44.574999999999996 - type: ndcg_at_1000 value: 47.162 - type: ndcg_at_3 value: 34.929 - type: ndcg_at_5 value: 37.002 - type: precision_at_1 value: 30.822 - type: precision_at_10 value: 7.055 - type: precision_at_100 value: 1.124 - type: precision_at_1000 value: 0.152 - type: precision_at_3 value: 16.591 - type: precision_at_5 value: 11.667 - type: recall_at_1 value: 25.230999999999998 - type: recall_at_10 value: 50.42100000000001 - type: recall_at_100 value: 72.685 - type: recall_at_1000 value: 90.469 - type: recall_at_3 value: 37.503 - type: recall_at_5 value: 43.123 - type: map_at_1 value: 24.604166666666664 - type: map_at_10 value: 32.427166666666665 - type: map_at_100 value: 33.51474999999999 - type: map_at_1000 value: 33.6345 - type: map_at_3 value: 30.02366666666667 - type: map_at_5 value: 31.382333333333328 - type: mrr_at_1 value: 29.001166666666666 - type: mrr_at_10 value: 36.3315 - type: mrr_at_100 value: 37.16683333333333 - type: mrr_at_1000 value: 37.23341666666668 - type: mrr_at_3 value: 34.19916666666667 - type: mrr_at_5 value: 35.40458333333334 - type: ndcg_at_1 value: 29.001166666666666 - type: ndcg_at_10 value: 37.06883333333334 - type: ndcg_at_100 value: 41.95816666666666 - type: ndcg_at_1000 value: 44.501583333333336 - type: ndcg_at_3 value: 32.973499999999994 - type: ndcg_at_5 value: 34.90833333333334 - type: precision_at_1 value: 29.001166666666666 - type: precision_at_10 value: 6.336 - type: precision_at_100 value: 1.0282499999999999 - type: precision_at_1000 value: 0.14391666666666664 - type: precision_at_3 value: 14.932499999999996 - type: precision_at_5 value: 10.50825 - type: recall_at_1 value: 24.604166666666664 - type: recall_at_10 value: 46.9525 - type: recall_at_100 value: 68.67816666666667 - type: recall_at_1000 value: 86.59783333333334 - type: recall_at_3 value: 35.49783333333333 - type: recall_at_5 value: 40.52525000000001 - type: map_at_1 value: 23.559 - type: map_at_10 value: 29.023 - type: map_at_100 value: 29.818 - type: map_at_1000 value: 29.909000000000002 - type: map_at_3 value: 27.037 - type: map_at_5 value: 28.225 - type: mrr_at_1 value: 26.994 - type: mrr_at_10 value: 31.962000000000003 - type: mrr_at_100 value: 32.726 - type: mrr_at_1000 value: 32.800000000000004 - type: mrr_at_3 value: 30.266 - type: mrr_at_5 value: 31.208999999999996 - type: ndcg_at_1 value: 26.994 - type: ndcg_at_10 value: 32.53 - type: ndcg_at_100 value: 36.758 - type: ndcg_at_1000 value: 39.362 - type: ndcg_at_3 value: 28.985 - type: ndcg_at_5 value: 30.757 - type: precision_at_1 value: 26.994 - type: precision_at_10 value: 4.968999999999999 - type: precision_at_100 value: 0.759 - type: precision_at_1000 value: 0.106 - type: precision_at_3 value: 12.219 - type: precision_at_5 value: 8.527999999999999 - type: recall_at_1 value: 23.559 - type: recall_at_10 value: 40.585 - type: recall_at_100 value: 60.306000000000004 - type: recall_at_1000 value: 80.11 - type: recall_at_3 value: 30.794 - type: recall_at_5 value: 35.186 - type: map_at_1 value: 16.384999999999998 - type: map_at_10 value: 22.142 - type: map_at_100 value: 23.057 - type: map_at_1000 value: 23.177 - type: map_at_3 value: 20.29 - type: map_at_5 value: 21.332 - type: mrr_at_1 value: 19.89 - type: mrr_at_10 value: 25.771 - type: mrr_at_100 value: 26.599 - type: mrr_at_1000 value: 26.680999999999997 - type: mrr_at_3 value: 23.962 - type: mrr_at_5 value: 24.934 - type: ndcg_at_1 value: 19.89 - type: ndcg_at_10 value: 25.97 - type: ndcg_at_100 value: 30.605 - type: ndcg_at_1000 value: 33.619 - type: ndcg_at_3 value: 22.704 - type: ndcg_at_5 value: 24.199 - type: precision_at_1 value: 19.89 - type: precision_at_10 value: 4.553 - type: precision_at_100 value: 0.8049999999999999 - type: precision_at_1000 value: 0.122 - type: precision_at_3 value: 10.541 - type: precision_at_5 value: 7.46 - type: recall_at_1 value: 16.384999999999998 - type: recall_at_10 value: 34.001 - type: recall_at_100 value: 55.17100000000001 - type: recall_at_1000 value: 77.125 - type: recall_at_3 value: 24.618000000000002 - type: recall_at_5 value: 28.695999999999998 - type: map_at_1 value: 23.726 - type: map_at_10 value: 31.227 - type: map_at_100 value: 32.311 - type: map_at_1000 value: 32.419 - type: map_at_3 value: 28.765 - type: map_at_5 value: 30.229 - type: mrr_at_1 value: 27.705000000000002 - type: mrr_at_10 value: 35.085 - type: mrr_at_100 value: 35.931000000000004 - type: mrr_at_1000 value: 36 - type: mrr_at_3 value: 32.603 - type: mrr_at_5 value: 34.117999999999995 - type: ndcg_at_1 value: 27.705000000000002 - type: ndcg_at_10 value: 35.968 - type: ndcg_at_100 value: 41.197 - type: ndcg_at_1000 value: 43.76 - type: ndcg_at_3 value: 31.304 - type: ndcg_at_5 value: 33.661 - type: precision_at_1 value: 27.705000000000002 - type: precision_at_10 value: 5.942 - type: precision_at_100 value: 0.964 - type: precision_at_1000 value: 0.13 - type: precision_at_3 value: 13.868 - type: precision_at_5 value: 9.944 - type: recall_at_1 value: 23.726 - type: recall_at_10 value: 46.786 - type: recall_at_100 value: 70.072 - type: recall_at_1000 value: 88.2 - type: recall_at_3 value: 33.981 - type: recall_at_5 value: 39.893 - type: map_at_1 value: 23.344 - type: map_at_10 value: 31.636999999999997 - type: map_at_100 value: 33.065 - type: map_at_1000 value: 33.300000000000004 - type: map_at_3 value: 29.351 - type: map_at_5 value: 30.432 - type: mrr_at_1 value: 27.866000000000003 - type: mrr_at_10 value: 35.587 - type: mrr_at_100 value: 36.52 - type: mrr_at_1000 value: 36.597 - type: mrr_at_3 value: 33.696 - type: mrr_at_5 value: 34.713 - type: ndcg_at_1 value: 27.866000000000003 - type: ndcg_at_10 value: 36.61 - type: ndcg_at_100 value: 41.88 - type: ndcg_at_1000 value: 45.105000000000004 - type: ndcg_at_3 value: 33.038000000000004 - type: ndcg_at_5 value: 34.331 - type: precision_at_1 value: 27.866000000000003 - type: precision_at_10 value: 6.917 - type: precision_at_100 value: 1.3599999999999999 - type: precision_at_1000 value: 0.233 - type: precision_at_3 value: 15.547 - type: precision_at_5 value: 10.791 - type: recall_at_1 value: 23.344 - type: recall_at_10 value: 45.782000000000004 - type: recall_at_100 value: 69.503 - type: recall_at_1000 value: 90.742 - type: recall_at_3 value: 35.160000000000004 - type: recall_at_5 value: 39.058 - type: map_at_1 value: 20.776 - type: map_at_10 value: 27.285999999999998 - type: map_at_100 value: 28.235 - type: map_at_1000 value: 28.337 - type: map_at_3 value: 25.147000000000002 - type: map_at_5 value: 26.401999999999997 - type: mrr_at_1 value: 22.921 - type: mrr_at_10 value: 29.409999999999997 - type: mrr_at_100 value: 30.275000000000002 - type: mrr_at_1000 value: 30.354999999999997 - type: mrr_at_3 value: 27.418 - type: mrr_at_5 value: 28.592000000000002 - type: ndcg_at_1 value: 22.921 - type: ndcg_at_10 value: 31.239 - type: ndcg_at_100 value: 35.965 - type: ndcg_at_1000 value: 38.602 - type: ndcg_at_3 value: 27.174 - type: ndcg_at_5 value: 29.229 - type: precision_at_1 value: 22.921 - type: precision_at_10 value: 4.806 - type: precision_at_100 value: 0.776 - type: precision_at_1000 value: 0.11 - type: precision_at_3 value: 11.459999999999999 - type: precision_at_5 value: 8.022 - type: recall_at_1 value: 20.776 - type: recall_at_10 value: 41.294 - type: recall_at_100 value: 63.111 - type: recall_at_1000 value: 82.88600000000001 - type: recall_at_3 value: 30.403000000000002 - type: recall_at_5 value: 35.455999999999996 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: climate-fever config: default split: test revision: None metrics: - type: map_at_1 value: 9.376 - type: map_at_10 value: 15.926000000000002 - type: map_at_100 value: 17.585 - type: map_at_1000 value: 17.776 - type: map_at_3 value: 13.014000000000001 - type: map_at_5 value: 14.417 - type: mrr_at_1 value: 20.195 - type: mrr_at_10 value: 29.95 - type: mrr_at_100 value: 31.052000000000003 - type: mrr_at_1000 value: 31.108000000000004 - type: mrr_at_3 value: 26.667 - type: mrr_at_5 value: 28.458 - type: ndcg_at_1 value: 20.195 - type: ndcg_at_10 value: 22.871 - type: ndcg_at_100 value: 29.921999999999997 - type: ndcg_at_1000 value: 33.672999999999995 - type: ndcg_at_3 value: 17.782999999999998 - type: ndcg_at_5 value: 19.544 - type: precision_at_1 value: 20.195 - type: precision_at_10 value: 7.394 - type: precision_at_100 value: 1.493 - type: precision_at_1000 value: 0.218 - type: precision_at_3 value: 13.073 - type: precision_at_5 value: 10.436 - type: recall_at_1 value: 9.376 - type: recall_at_10 value: 28.544999999999998 - type: recall_at_100 value: 53.147999999999996 - type: recall_at_1000 value: 74.62 - type: recall_at_3 value: 16.464000000000002 - type: recall_at_5 value: 21.004 - task: type: Retrieval dataset: name: MTEB DBPedia type: dbpedia-entity config: default split: test revision: None metrics: - type: map_at_1 value: 8.415000000000001 - type: map_at_10 value: 18.738 - type: map_at_100 value: 27.291999999999998 - type: map_at_1000 value: 28.992 - type: map_at_3 value: 13.196 - type: map_at_5 value: 15.539 - type: mrr_at_1 value: 66.5 - type: mrr_at_10 value: 74.518 - type: mrr_at_100 value: 74.86 - type: mrr_at_1000 value: 74.87 - type: mrr_at_3 value: 72.375 - type: mrr_at_5 value: 73.86200000000001 - type: ndcg_at_1 value: 54.37499999999999 - type: ndcg_at_10 value: 41.317 - type: ndcg_at_100 value: 45.845 - type: ndcg_at_1000 value: 52.92 - type: ndcg_at_3 value: 44.983000000000004 - type: ndcg_at_5 value: 42.989 - type: precision_at_1 value: 66.5 - type: precision_at_10 value: 33.6 - type: precision_at_100 value: 10.972999999999999 - type: precision_at_1000 value: 2.214 - type: precision_at_3 value: 48.583 - type: precision_at_5 value: 42.15 - type: recall_at_1 value: 8.415000000000001 - type: recall_at_10 value: 24.953 - type: recall_at_100 value: 52.48199999999999 - type: recall_at_1000 value: 75.093 - type: recall_at_3 value: 14.341000000000001 - type: recall_at_5 value: 18.468 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 47.06499999999999 - type: f1 value: 41.439327599975385 - task: type: Retrieval dataset: name: MTEB FEVER type: fever config: default split: test revision: None metrics: - type: map_at_1 value: 66.02 - type: map_at_10 value: 76.68599999999999 - type: map_at_100 value: 76.959 - type: map_at_1000 value: 76.972 - type: map_at_3 value: 75.024 - type: map_at_5 value: 76.153 - type: mrr_at_1 value: 71.197 - type: mrr_at_10 value: 81.105 - type: mrr_at_100 value: 81.232 - type: mrr_at_1000 value: 81.233 - type: mrr_at_3 value: 79.758 - type: mrr_at_5 value: 80.69 - type: ndcg_at_1 value: 71.197 - type: ndcg_at_10 value: 81.644 - type: ndcg_at_100 value: 82.645 - type: ndcg_at_1000 value: 82.879 - type: ndcg_at_3 value: 78.792 - type: ndcg_at_5 value: 80.528 - type: precision_at_1 value: 71.197 - type: precision_at_10 value: 10.206999999999999 - type: precision_at_100 value: 1.093 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 30.868000000000002 - type: precision_at_5 value: 19.559 - type: recall_at_1 value: 66.02 - type: recall_at_10 value: 92.50699999999999 - type: recall_at_100 value: 96.497 - type: recall_at_1000 value: 97.956 - type: recall_at_3 value: 84.866 - type: recall_at_5 value: 89.16199999999999 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: fiqa config: default split: test revision: None metrics: - type: map_at_1 value: 17.948 - type: map_at_10 value: 29.833 - type: map_at_100 value: 31.487 - type: map_at_1000 value: 31.674000000000003 - type: map_at_3 value: 26.029999999999998 - type: map_at_5 value: 28.038999999999998 - type: mrr_at_1 value: 34.721999999999994 - type: mrr_at_10 value: 44.214999999999996 - type: mrr_at_100 value: 44.994 - type: mrr_at_1000 value: 45.051 - type: mrr_at_3 value: 41.667 - type: mrr_at_5 value: 43.032 - type: ndcg_at_1 value: 34.721999999999994 - type: ndcg_at_10 value: 37.434 - type: ndcg_at_100 value: 43.702000000000005 - type: ndcg_at_1000 value: 46.993 - type: ndcg_at_3 value: 33.56 - type: ndcg_at_5 value: 34.687 - type: precision_at_1 value: 34.721999999999994 - type: precision_at_10 value: 10.401 - type: precision_at_100 value: 1.7049999999999998 - type: precision_at_1000 value: 0.22799999999999998 - type: precision_at_3 value: 22.531000000000002 - type: precision_at_5 value: 16.42 - type: recall_at_1 value: 17.948 - type: recall_at_10 value: 45.062999999999995 - type: recall_at_100 value: 68.191 - type: recall_at_1000 value: 87.954 - type: recall_at_3 value: 31.112000000000002 - type: recall_at_5 value: 36.823 - task: type: Retrieval dataset: name: MTEB HotpotQA type: hotpotqa config: default split: test revision: None metrics: - type: map_at_1 value: 36.644 - type: map_at_10 value: 57.658 - type: map_at_100 value: 58.562000000000005 - type: map_at_1000 value: 58.62500000000001 - type: map_at_3 value: 54.022999999999996 - type: map_at_5 value: 56.293000000000006 - type: mrr_at_1 value: 73.288 - type: mrr_at_10 value: 80.51700000000001 - type: mrr_at_100 value: 80.72 - type: mrr_at_1000 value: 80.728 - type: mrr_at_3 value: 79.33200000000001 - type: mrr_at_5 value: 80.085 - type: ndcg_at_1 value: 73.288 - type: ndcg_at_10 value: 66.61 - type: ndcg_at_100 value: 69.723 - type: ndcg_at_1000 value: 70.96000000000001 - type: ndcg_at_3 value: 61.358999999999995 - type: ndcg_at_5 value: 64.277 - type: precision_at_1 value: 73.288 - type: precision_at_10 value: 14.17 - type: precision_at_100 value: 1.659 - type: precision_at_1000 value: 0.182 - type: precision_at_3 value: 39.487 - type: precision_at_5 value: 25.999 - type: recall_at_1 value: 36.644 - type: recall_at_10 value: 70.851 - type: recall_at_100 value: 82.94399999999999 - type: recall_at_1000 value: 91.134 - type: recall_at_3 value: 59.230000000000004 - type: recall_at_5 value: 64.997 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 86.00280000000001 - type: ap value: 80.46302061021223 - type: f1 value: 85.9592921596419 - task: type: Retrieval dataset: name: MTEB MSMARCO type: msmarco config: default split: dev revision: None metrics: - type: map_at_1 value: 22.541 - type: map_at_10 value: 34.625 - type: map_at_100 value: 35.785 - type: map_at_1000 value: 35.831 - type: map_at_3 value: 30.823 - type: map_at_5 value: 32.967999999999996 - type: mrr_at_1 value: 23.180999999999997 - type: mrr_at_10 value: 35.207 - type: mrr_at_100 value: 36.315 - type: mrr_at_1000 value: 36.355 - type: mrr_at_3 value: 31.483 - type: mrr_at_5 value: 33.589999999999996 - type: ndcg_at_1 value: 23.195 - type: ndcg_at_10 value: 41.461 - type: ndcg_at_100 value: 47.032000000000004 - type: ndcg_at_1000 value: 48.199999999999996 - type: ndcg_at_3 value: 33.702 - type: ndcg_at_5 value: 37.522 - type: precision_at_1 value: 23.195 - type: precision_at_10 value: 6.526999999999999 - type: precision_at_100 value: 0.932 - type: precision_at_1000 value: 0.10300000000000001 - type: precision_at_3 value: 14.308000000000002 - type: precision_at_5 value: 10.507 - type: recall_at_1 value: 22.541 - type: recall_at_10 value: 62.524 - type: recall_at_100 value: 88.228 - type: recall_at_1000 value: 97.243 - type: recall_at_3 value: 41.38 - type: recall_at_5 value: 50.55 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 92.69949840401279 - type: f1 value: 92.54141471311786 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 72.56041951664386 - type: f1 value: 55.88499977508287 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 71.62071284465365 - type: f1 value: 69.36717546572152 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 76.35843981170142 - type: f1 value: 76.15496453538884 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 31.33664956793118 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 27.883839621715524 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 30.096874986740758 - type: mrr value: 30.97300481932132 - task: type: Retrieval dataset: name: MTEB NFCorpus type: nfcorpus config: default split: test revision: None metrics: - type: map_at_1 value: 5.4 - type: map_at_10 value: 11.852 - type: map_at_100 value: 14.758 - type: map_at_1000 value: 16.134 - type: map_at_3 value: 8.558 - type: map_at_5 value: 10.087 - type: mrr_at_1 value: 44.272 - type: mrr_at_10 value: 52.05800000000001 - type: mrr_at_100 value: 52.689 - type: mrr_at_1000 value: 52.742999999999995 - type: mrr_at_3 value: 50.205999999999996 - type: mrr_at_5 value: 51.367 - type: ndcg_at_1 value: 42.57 - type: ndcg_at_10 value: 32.449 - type: ndcg_at_100 value: 29.596 - type: ndcg_at_1000 value: 38.351 - type: ndcg_at_3 value: 37.044 - type: ndcg_at_5 value: 35.275 - type: precision_at_1 value: 44.272 - type: precision_at_10 value: 23.87 - type: precision_at_100 value: 7.625 - type: precision_at_1000 value: 2.045 - type: precision_at_3 value: 34.365 - type: precision_at_5 value: 30.341 - type: recall_at_1 value: 5.4 - type: recall_at_10 value: 15.943999999999999 - type: recall_at_100 value: 29.805 - type: recall_at_1000 value: 61.695 - type: recall_at_3 value: 9.539 - type: recall_at_5 value: 12.127 - task: type: Retrieval dataset: name: MTEB NQ type: nq config: default split: test revision: None metrics: - type: map_at_1 value: 36.047000000000004 - type: map_at_10 value: 51.6 - type: map_at_100 value: 52.449999999999996 - type: map_at_1000 value: 52.476 - type: map_at_3 value: 47.452 - type: map_at_5 value: 49.964 - type: mrr_at_1 value: 40.382 - type: mrr_at_10 value: 54.273 - type: mrr_at_100 value: 54.859 - type: mrr_at_1000 value: 54.876000000000005 - type: mrr_at_3 value: 51.014 - type: mrr_at_5 value: 52.983999999999995 - type: ndcg_at_1 value: 40.353 - type: ndcg_at_10 value: 59.11300000000001 - type: ndcg_at_100 value: 62.604000000000006 - type: ndcg_at_1000 value: 63.187000000000005 - type: ndcg_at_3 value: 51.513 - type: ndcg_at_5 value: 55.576 - type: precision_at_1 value: 40.353 - type: precision_at_10 value: 9.418 - type: precision_at_100 value: 1.1440000000000001 - type: precision_at_1000 value: 0.12 - type: precision_at_3 value: 23.078000000000003 - type: precision_at_5 value: 16.250999999999998 - type: recall_at_1 value: 36.047000000000004 - type: recall_at_10 value: 79.22200000000001 - type: recall_at_100 value: 94.23 - type: recall_at_1000 value: 98.51100000000001 - type: recall_at_3 value: 59.678 - type: recall_at_5 value: 68.967 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: quora config: default split: test revision: None metrics: - type: map_at_1 value: 68.232 - type: map_at_10 value: 81.674 - type: map_at_100 value: 82.338 - type: map_at_1000 value: 82.36099999999999 - type: map_at_3 value: 78.833 - type: map_at_5 value: 80.58 - type: mrr_at_1 value: 78.64 - type: mrr_at_10 value: 85.164 - type: mrr_at_100 value: 85.317 - type: mrr_at_1000 value: 85.319 - type: mrr_at_3 value: 84.127 - type: mrr_at_5 value: 84.789 - type: ndcg_at_1 value: 78.63 - type: ndcg_at_10 value: 85.711 - type: ndcg_at_100 value: 87.238 - type: ndcg_at_1000 value: 87.444 - type: ndcg_at_3 value: 82.788 - type: ndcg_at_5 value: 84.313 - type: precision_at_1 value: 78.63 - type: precision_at_10 value: 12.977 - type: precision_at_100 value: 1.503 - type: precision_at_1000 value: 0.156 - type: precision_at_3 value: 36.113 - type: precision_at_5 value: 23.71 - type: recall_at_1 value: 68.232 - type: recall_at_10 value: 93.30199999999999 - type: recall_at_100 value: 98.799 - type: recall_at_1000 value: 99.885 - type: recall_at_3 value: 84.827 - type: recall_at_5 value: 89.188 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 45.71879170816294 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 59.65866311751794 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: scidocs config: default split: test revision: None metrics: - type: map_at_1 value: 4.218 - type: map_at_10 value: 10.337 - type: map_at_100 value: 12.131 - type: map_at_1000 value: 12.411 - type: map_at_3 value: 7.4270000000000005 - type: map_at_5 value: 8.913 - type: mrr_at_1 value: 20.8 - type: mrr_at_10 value: 30.868000000000002 - type: mrr_at_100 value: 31.903 - type: mrr_at_1000 value: 31.972 - type: mrr_at_3 value: 27.367 - type: mrr_at_5 value: 29.372 - type: ndcg_at_1 value: 20.8 - type: ndcg_at_10 value: 17.765 - type: ndcg_at_100 value: 24.914 - type: ndcg_at_1000 value: 30.206 - type: ndcg_at_3 value: 16.64 - type: ndcg_at_5 value: 14.712 - type: precision_at_1 value: 20.8 - type: precision_at_10 value: 9.24 - type: precision_at_100 value: 1.9560000000000002 - type: precision_at_1000 value: 0.32299999999999995 - type: precision_at_3 value: 15.467 - type: precision_at_5 value: 12.94 - type: recall_at_1 value: 4.218 - type: recall_at_10 value: 18.752 - type: recall_at_100 value: 39.7 - type: recall_at_1000 value: 65.57300000000001 - type: recall_at_3 value: 9.428 - type: recall_at_5 value: 13.133000000000001 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 83.04338850207233 - type: cos_sim_spearman value: 78.5054651430423 - type: euclidean_pearson value: 80.30739451228612 - type: euclidean_spearman value: 78.48377464299097 - type: manhattan_pearson value: 80.40795049052781 - type: manhattan_spearman value: 78.49506205443114 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 84.11596224442962 - type: cos_sim_spearman value: 76.20997388935461 - type: euclidean_pearson value: 80.56858451349109 - type: euclidean_spearman value: 75.92659183871186 - type: manhattan_pearson value: 80.60246102203844 - type: manhattan_spearman value: 76.03018971432664 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 81.34691640755737 - type: cos_sim_spearman value: 82.4018369631579 - type: euclidean_pearson value: 81.87673092245366 - type: euclidean_spearman value: 82.3671489960678 - type: manhattan_pearson value: 81.88222387719948 - type: manhattan_spearman value: 82.3816590344736 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 81.2836092579524 - type: cos_sim_spearman value: 78.99982781772064 - type: euclidean_pearson value: 80.5184271010527 - type: euclidean_spearman value: 78.89777392101904 - type: manhattan_pearson value: 80.53585705018664 - type: manhattan_spearman value: 78.92898405472994 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 86.7349907750784 - type: cos_sim_spearman value: 87.7611234446225 - type: euclidean_pearson value: 86.98759326731624 - type: euclidean_spearman value: 87.58321319424618 - type: manhattan_pearson value: 87.03483090370842 - type: manhattan_spearman value: 87.63278333060288 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 81.75873694924825 - type: cos_sim_spearman value: 83.80237999094724 - type: euclidean_pearson value: 83.55023725861537 - type: euclidean_spearman value: 84.12744338577744 - type: manhattan_pearson value: 83.58816983036232 - type: manhattan_spearman value: 84.18520748676501 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 87.21630882940174 - type: cos_sim_spearman value: 87.72382883437031 - type: euclidean_pearson value: 88.69933350930333 - type: euclidean_spearman value: 88.24660814383081 - type: manhattan_pearson value: 88.77331018833499 - type: manhattan_spearman value: 88.26109989380632 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 61.11854063060489 - type: cos_sim_spearman value: 63.14678634195072 - type: euclidean_pearson value: 61.679090067000864 - type: euclidean_spearman value: 62.28876589509653 - type: manhattan_pearson value: 62.082324165511004 - type: manhattan_spearman value: 62.56030932816679 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 84.00319882832645 - type: cos_sim_spearman value: 85.94529772647257 - type: euclidean_pearson value: 85.6661390122756 - type: euclidean_spearman value: 85.97747815545827 - type: manhattan_pearson value: 85.58422770541893 - type: manhattan_spearman value: 85.9237139181532 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 79.16198731863916 - type: mrr value: 94.25202702163487 - task: type: Retrieval dataset: name: MTEB SciFact type: scifact config: default split: test revision: None metrics: - type: map_at_1 value: 54.761 - type: map_at_10 value: 64.396 - type: map_at_100 value: 65.07 - type: map_at_1000 value: 65.09899999999999 - type: map_at_3 value: 61.846000000000004 - type: map_at_5 value: 63.284 - type: mrr_at_1 value: 57.667 - type: mrr_at_10 value: 65.83099999999999 - type: mrr_at_100 value: 66.36800000000001 - type: mrr_at_1000 value: 66.39399999999999 - type: mrr_at_3 value: 64.056 - type: mrr_at_5 value: 65.206 - type: ndcg_at_1 value: 57.667 - type: ndcg_at_10 value: 68.854 - type: ndcg_at_100 value: 71.59100000000001 - type: ndcg_at_1000 value: 72.383 - type: ndcg_at_3 value: 64.671 - type: ndcg_at_5 value: 66.796 - type: precision_at_1 value: 57.667 - type: precision_at_10 value: 9.167 - type: precision_at_100 value: 1.053 - type: precision_at_1000 value: 0.11199999999999999 - type: precision_at_3 value: 25.444 - type: precision_at_5 value: 16.667 - type: recall_at_1 value: 54.761 - type: recall_at_10 value: 80.9 - type: recall_at_100 value: 92.767 - type: recall_at_1000 value: 99 - type: recall_at_3 value: 69.672 - type: recall_at_5 value: 75.083 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.8079207920792 - type: cos_sim_ap value: 94.88470927617445 - type: cos_sim_f1 value: 90.08179959100204 - type: cos_sim_precision value: 92.15481171548117 - type: cos_sim_recall value: 88.1 - type: dot_accuracy value: 99.58613861386138 - type: dot_ap value: 82.94822578881316 - type: dot_f1 value: 77.33333333333333 - type: dot_precision value: 79.36842105263158 - type: dot_recall value: 75.4 - type: euclidean_accuracy value: 99.8069306930693 - type: euclidean_ap value: 94.81367858031837 - type: euclidean_f1 value: 90.01009081735621 - type: euclidean_precision value: 90.83503054989816 - type: euclidean_recall value: 89.2 - type: manhattan_accuracy value: 99.81188118811882 - type: manhattan_ap value: 94.91405337220161 - type: manhattan_f1 value: 90.2763561924258 - type: manhattan_precision value: 92.45283018867924 - type: manhattan_recall value: 88.2 - type: max_accuracy value: 99.81188118811882 - type: max_ap value: 94.91405337220161 - type: max_f1 value: 90.2763561924258 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 58.511599500053094 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 31.984728147814707 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 49.93428193939015 - type: mrr value: 50.916557911043206 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 31.562500894537145 - type: cos_sim_spearman value: 31.162587976726307 - type: dot_pearson value: 22.633662187735762 - type: dot_spearman value: 22.723000282378962 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: trec-covid config: default split: test revision: None metrics: - type: map_at_1 value: 0.219 - type: map_at_10 value: 1.871 - type: map_at_100 value: 10.487 - type: map_at_1000 value: 25.122 - type: map_at_3 value: 0.657 - type: map_at_5 value: 1.0699999999999998 - type: mrr_at_1 value: 84 - type: mrr_at_10 value: 89.567 - type: mrr_at_100 value: 89.748 - type: mrr_at_1000 value: 89.748 - type: mrr_at_3 value: 88.667 - type: mrr_at_5 value: 89.567 - type: ndcg_at_1 value: 80 - type: ndcg_at_10 value: 74.533 - type: ndcg_at_100 value: 55.839000000000006 - type: ndcg_at_1000 value: 49.748 - type: ndcg_at_3 value: 79.53099999999999 - type: ndcg_at_5 value: 78.245 - type: precision_at_1 value: 84 - type: precision_at_10 value: 78.4 - type: precision_at_100 value: 56.99999999999999 - type: precision_at_1000 value: 21.98 - type: precision_at_3 value: 85.333 - type: precision_at_5 value: 84.8 - type: recall_at_1 value: 0.219 - type: recall_at_10 value: 2.02 - type: recall_at_100 value: 13.555 - type: recall_at_1000 value: 46.739999999999995 - type: recall_at_3 value: 0.685 - type: recall_at_5 value: 1.13 - task: type: Retrieval dataset: name: MTEB Touche2020 type: webis-touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 3.5029999999999997 - type: map_at_10 value: 11.042 - type: map_at_100 value: 16.326999999999998 - type: map_at_1000 value: 17.836 - type: map_at_3 value: 6.174 - type: map_at_5 value: 7.979 - type: mrr_at_1 value: 42.857 - type: mrr_at_10 value: 52.617000000000004 - type: mrr_at_100 value: 53.351000000000006 - type: mrr_at_1000 value: 53.351000000000006 - type: mrr_at_3 value: 46.939 - type: mrr_at_5 value: 50.714000000000006 - type: ndcg_at_1 value: 38.775999999999996 - type: ndcg_at_10 value: 27.125 - type: ndcg_at_100 value: 35.845 - type: ndcg_at_1000 value: 47.377 - type: ndcg_at_3 value: 29.633 - type: ndcg_at_5 value: 28.378999999999998 - type: precision_at_1 value: 42.857 - type: precision_at_10 value: 24.082 - type: precision_at_100 value: 6.877999999999999 - type: precision_at_1000 value: 1.463 - type: precision_at_3 value: 29.932 - type: precision_at_5 value: 28.571 - type: recall_at_1 value: 3.5029999999999997 - type: recall_at_10 value: 17.068 - type: recall_at_100 value: 43.361 - type: recall_at_1000 value: 78.835 - type: recall_at_3 value: 6.821000000000001 - type: recall_at_5 value: 10.357 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 71.0954 - type: ap value: 14.216844153511959 - type: f1 value: 54.63687418565117 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 61.46293152235427 - type: f1 value: 61.744177921638645 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 41.12708617788644 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 85.75430649102938 - type: cos_sim_ap value: 73.34252536948081 - type: cos_sim_f1 value: 67.53758935173774 - type: cos_sim_precision value: 63.3672525439408 - type: cos_sim_recall value: 72.29551451187335 - type: dot_accuracy value: 81.71305954580676 - type: dot_ap value: 59.5532209082386 - type: dot_f1 value: 56.18466898954705 - type: dot_precision value: 47.830923248053395 - type: dot_recall value: 68.07387862796834 - type: euclidean_accuracy value: 85.81987244441795 - type: euclidean_ap value: 73.34325409809446 - type: euclidean_f1 value: 67.83451360417443 - type: euclidean_precision value: 64.09955388588871 - type: euclidean_recall value: 72.0316622691293 - type: manhattan_accuracy value: 85.68277999642368 - type: manhattan_ap value: 73.1535450121903 - type: manhattan_f1 value: 67.928237896289 - type: manhattan_precision value: 63.56945722171113 - type: manhattan_recall value: 72.9287598944591 - type: max_accuracy value: 85.81987244441795 - type: max_ap value: 73.34325409809446 - type: max_f1 value: 67.928237896289 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 88.90441262079403 - type: cos_sim_ap value: 85.79331880741438 - type: cos_sim_f1 value: 78.31563529842548 - type: cos_sim_precision value: 74.6683424102779 - type: cos_sim_recall value: 82.33754234678165 - type: dot_accuracy value: 84.89928978926534 - type: dot_ap value: 75.25819218316 - type: dot_f1 value: 69.88730119720536 - type: dot_precision value: 64.23362374959665 - type: dot_recall value: 76.63227594702803 - type: euclidean_accuracy value: 89.01695967710637 - type: euclidean_ap value: 85.98986606038852 - type: euclidean_f1 value: 78.5277880014722 - type: euclidean_precision value: 75.22211253701876 - type: euclidean_recall value: 82.13735756082538 - type: manhattan_accuracy value: 88.99561454573679 - type: manhattan_ap value: 85.92262421793953 - type: manhattan_f1 value: 78.38866094740769 - type: manhattan_precision value: 76.02373028505282 - type: manhattan_recall value: 80.9054511857099 - type: max_accuracy value: 89.01695967710637 - type: max_ap value: 85.98986606038852 - type: max_f1 value: 78.5277880014722 --- # # Hugging Face Inference Endpoints -compatible version of michaelfeil/ct2fast-e5-small-v2 Duplicate of michaelfeil/ct2fast-e5-small-v2, modified to run on Hugging Face Inference Endpoints. Requires a GPU Instance type to run. Creates symbolic links so that ctranslate2 reads the repository model without downloading from HF. # # Fast-Inference with Ctranslate2 Speedup inference while reducing memory by 2x-4x using int8 inference in C++ on CPU or GPU. quantized version of [intfloat/e5-small-v2](https://huggingface.co/intfloat/e5-small-v2) ```bash pip install hf-hub-ctranslate2>=2.12.0 ctranslate2>=3.16.0 ``` ```python # from transformers import AutoTokenizer model_name = "michaelfeil/ct2fast-e5-small-v2" model_name_orig="intfloat/e5-small-v2" from hf_hub_ctranslate2 import EncoderCT2fromHfHub model = EncoderCT2fromHfHub( # load in int8 on CUDA model_name_or_path=model_name, device="cuda", compute_type="int8_float16" ) outputs = model.generate( text=["I like soccer", "I like tennis", "The eiffel tower is in Paris"] ) # perform downstream tasks on outputs outputs["pooler_output"] outputs["last_hidden_state"] outputs["attention_mask"] # alternative, use SentenceTransformer Mix-In # for end-to-end Sentence embeddings generation # (not pulling from this CT2fast-HF repo) from hf_hub_ctranslate2 import CT2SentenceTransformer model = CT2SentenceTransformer( model_name_orig, compute_type="int8_float16", device="cuda" ) embeddings = model.encode( ["I like soccer", "I like tennis", "The eiffel tower is in Paris"], batch_size=32, convert_to_numpy=True, normalize_embeddings=True, ) print(embeddings.shape, embeddings) scores = (embeddings @ embeddings.T) * 100 ``` Checkpoint compatible to [ctranslate2>=3.16.0](https://github.com/OpenNMT/CTranslate2) and [hf-hub-ctranslate2>=2.12.0](https://github.com/michaelfeil/hf-hub-ctranslate2) - `compute_type=int8_float16` for `device="cuda"` - `compute_type=int8` for `device="cpu"` Converted on 2023-06-19 using ``` ct2-transformers-converter --model intfloat/e5-small-v2 --output_dir ~/tmp-ct2fast-e5-small-v2 --force --copy_files tokenizer.json modules.json README.md tokenizer_config.json sentence_bert_config.json vocab.txt special_tokens_map.json .gitattributes --trust_remote_code ``` # Licence and other remarks: This is just a quantized version. Licence conditions are intended to be idential to original huggingface repo. # Original description # E5-small-v2 [Text Embeddings by Weakly-Supervised Contrastive Pre-training](https://arxiv.org/pdf/2212.03533.pdf). Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei, arXiv 2022 This model has 12 layers and the embedding size is 384. ## Usage Below is an example to encode queries and passages from the MS-MARCO passage ranking dataset. ```python import torch.nn.functional as F from torch import Tensor from transformers import AutoTokenizer, AutoModel def average_pool(last_hidden_states: Tensor, attention_mask: Tensor) -> Tensor: last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0) return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None] # Each input text should start with "query: " or "passage: ". # For tasks other than retrieval, you can simply use the "query: " prefix. input_texts = ['query: how much protein should a female eat', 'query: summit define', "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.", "passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."] tokenizer = AutoTokenizer.from_pretrained('intfloat/e5-small-v2') model = AutoModel.from_pretrained('intfloat/e5-small-v2') # Tokenize the input texts batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt') outputs = model(**batch_dict) embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask']) # (Optionally) normalize embeddings embeddings = F.normalize(embeddings, p=2, dim=1) scores = (embeddings[:2] @ embeddings[2:].T) * 100 print(scores.tolist()) ``` ## Training Details Please refer to our paper at [https://arxiv.org/pdf/2212.03533.pdf](https://arxiv.org/pdf/2212.03533.pdf). ## Benchmark Evaluation Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB benchmark](https://arxiv.org/abs/2210.07316). ## Citation If you find our paper or models helpful, please consider cite as follows: ``` @article{wang2022text, title={Text Embeddings by Weakly-Supervised Contrastive Pre-training}, author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Jiao, Binxing and Yang, Linjun and Jiang, Daxin and Majumder, Rangan and Wei, Furu}, journal={arXiv preprint arXiv:2212.03533}, year={2022} } ``` ## Limitations This model only works for English texts. Long texts will be truncated to at most 512 tokens. ## Sentence Transformers Below is an example for usage with sentence_transformers. `pip install sentence_transformers~=2.2.2` This is community contributed, and results may vary up to numerical precision. ```python from sentence_transformers import SentenceTransformer model = SentenceTransformer('intfloat/e5-small-v2') embeddings = model.encode(input_texts, normalize_embeddings=True) ```
[ "BIOSSES", "SCIFACT" ]
zwellington/pubhealth-expanded-1
zwellington
text2text-generation
[ "transformers", "pytorch", "bart", "text2text-generation", "generated_from_trainer", "dataset:clupubhealth", "base_model:facebook/bart-base", "base_model:finetune:facebook/bart-base", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-08-05T01:54:51Z
2023-08-05T02:31:56+00:00
13
0
--- base_model: facebook/bart-base datasets: - clupubhealth license: apache-2.0 metrics: - rouge tags: - generated_from_trainer model-index: - name: pubhealth-expanded-1 results: - task: type: text2text-generation name: Sequence-to-sequence Language Modeling dataset: name: clupubhealth type: clupubhealth config: expanded split: test args: expanded metrics: - type: rouge value: 28.6755 name: Rouge1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pubhealth-expanded-1 This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the clupubhealth dataset. It achieves the following results on the evaluation set: - Loss: 2.3198 - Rouge1: 28.6755 - Rouge2: 9.2869 - Rougel: 21.9675 - Rougelsum: 22.2946 - Gen Len: 19.85 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 12 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 10 - total_train_batch_size: 120 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | 3.6788 | 0.08 | 40 | 2.3758 | 29.5273 | 9.3588 | 22.4799 | 22.6212 | 19.835 | | 3.4222 | 0.15 | 80 | 2.3484 | 29.0821 | 9.1988 | 22.3907 | 22.5996 | 19.88 | | 3.3605 | 0.23 | 120 | 2.3500 | 29.2893 | 9.296 | 22.1247 | 22.4075 | 19.94 | | 3.3138 | 0.31 | 160 | 2.3504 | 29.039 | 8.907 | 21.9631 | 22.2506 | 19.91 | | 3.2678 | 0.39 | 200 | 2.3461 | 29.678 | 9.4429 | 22.3439 | 22.6962 | 19.92 | | 3.2371 | 0.46 | 240 | 2.3267 | 28.535 | 9.1858 | 21.3721 | 21.6634 | 19.915 | | 3.204 | 0.54 | 280 | 2.3330 | 29.0796 | 9.4283 | 21.8953 | 22.1867 | 19.885 | | 3.1881 | 0.62 | 320 | 2.3164 | 29.1456 | 9.1919 | 21.9529 | 22.235 | 19.945 | | 3.1711 | 0.69 | 360 | 2.3208 | 29.3212 | 9.4823 | 22.1643 | 22.4159 | 19.895 | | 3.1752 | 0.77 | 400 | 2.3239 | 29.0408 | 9.3615 | 21.8007 | 22.0795 | 19.945 | | 3.1591 | 0.85 | 440 | 2.3218 | 28.6336 | 9.2799 | 21.5843 | 21.9422 | 19.845 | | 3.1663 | 0.93 | 480 | 2.3198 | 28.6755 | 9.2869 | 21.9675 | 22.2946 | 19.85 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu117 - Datasets 2.7.1 - Tokenizers 0.13.2
[ "PUBHEALTH" ]
davidpeer/gte-small
davidpeer
sentence-similarity
[ "sentence-transformers", "pytorch", "bert", "mteb", "sentence-similarity", "Sentence Transformers", "en", "arxiv:2308.03281", "license:mit", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2023-09-25T11:28:48Z
2023-09-25T11:32:52+00:00
13
0
--- language: - en license: mit tags: - mteb - sentence-similarity - sentence-transformers - Sentence Transformers model-index: - name: gte-small results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 73.22388059701493 - type: ap value: 36.09895941426988 - type: f1 value: 67.3205651539195 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 91.81894999999999 - type: ap value: 88.5240138417305 - type: f1 value: 91.80367382706962 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 48.032 - type: f1 value: 47.4490665674719 - task: type: Retrieval dataset: name: MTEB ArguAna type: arguana config: default split: test revision: None metrics: - type: map_at_1 value: 30.725 - type: map_at_10 value: 46.604 - type: map_at_100 value: 47.535 - type: map_at_1000 value: 47.538000000000004 - type: map_at_3 value: 41.833 - type: map_at_5 value: 44.61 - type: mrr_at_1 value: 31.223 - type: mrr_at_10 value: 46.794000000000004 - type: mrr_at_100 value: 47.725 - type: mrr_at_1000 value: 47.727000000000004 - type: mrr_at_3 value: 42.07 - type: mrr_at_5 value: 44.812000000000005 - type: ndcg_at_1 value: 30.725 - type: ndcg_at_10 value: 55.440999999999995 - type: ndcg_at_100 value: 59.134 - type: ndcg_at_1000 value: 59.199 - type: ndcg_at_3 value: 45.599000000000004 - type: ndcg_at_5 value: 50.637 - type: precision_at_1 value: 30.725 - type: precision_at_10 value: 8.364 - type: precision_at_100 value: 0.991 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 18.848000000000003 - type: precision_at_5 value: 13.77 - type: recall_at_1 value: 30.725 - type: recall_at_10 value: 83.64200000000001 - type: recall_at_100 value: 99.14699999999999 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 56.543 - type: recall_at_5 value: 68.848 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 47.90178078197678 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 40.25728393431922 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 61.720297062897764 - type: mrr value: 75.24139295607439 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 89.43527309184616 - type: cos_sim_spearman value: 88.17128615100206 - type: euclidean_pearson value: 87.89922623089282 - type: euclidean_spearman value: 87.96104039655451 - type: manhattan_pearson value: 87.9818290932077 - type: manhattan_spearman value: 88.00923426576885 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 84.0844155844156 - type: f1 value: 84.01485017302213 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 38.36574769259432 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 35.4857033165287 - task: type: Retrieval dataset: name: MTEB CQADupstackAndroidRetrieval type: BeIR/cqadupstack config: default split: test revision: None metrics: - type: map_at_1 value: 30.261 - type: map_at_10 value: 42.419000000000004 - type: map_at_100 value: 43.927 - type: map_at_1000 value: 44.055 - type: map_at_3 value: 38.597 - type: map_at_5 value: 40.701 - type: mrr_at_1 value: 36.91 - type: mrr_at_10 value: 48.02 - type: mrr_at_100 value: 48.658 - type: mrr_at_1000 value: 48.708 - type: mrr_at_3 value: 44.945 - type: mrr_at_5 value: 46.705000000000005 - type: ndcg_at_1 value: 36.91 - type: ndcg_at_10 value: 49.353 - type: ndcg_at_100 value: 54.456 - type: ndcg_at_1000 value: 56.363 - type: ndcg_at_3 value: 43.483 - type: ndcg_at_5 value: 46.150999999999996 - type: precision_at_1 value: 36.91 - type: precision_at_10 value: 9.700000000000001 - type: precision_at_100 value: 1.557 - type: precision_at_1000 value: 0.202 - type: precision_at_3 value: 21.078 - type: precision_at_5 value: 15.421999999999999 - type: recall_at_1 value: 30.261 - type: recall_at_10 value: 63.242 - type: recall_at_100 value: 84.09100000000001 - type: recall_at_1000 value: 96.143 - type: recall_at_3 value: 46.478 - type: recall_at_5 value: 53.708 - type: map_at_1 value: 31.145 - type: map_at_10 value: 40.996 - type: map_at_100 value: 42.266999999999996 - type: map_at_1000 value: 42.397 - type: map_at_3 value: 38.005 - type: map_at_5 value: 39.628 - type: mrr_at_1 value: 38.344 - type: mrr_at_10 value: 46.827000000000005 - type: mrr_at_100 value: 47.446 - type: mrr_at_1000 value: 47.489 - type: mrr_at_3 value: 44.448 - type: mrr_at_5 value: 45.747 - type: ndcg_at_1 value: 38.344 - type: ndcg_at_10 value: 46.733000000000004 - type: ndcg_at_100 value: 51.103 - type: ndcg_at_1000 value: 53.075 - type: ndcg_at_3 value: 42.366 - type: ndcg_at_5 value: 44.242 - type: precision_at_1 value: 38.344 - type: precision_at_10 value: 8.822000000000001 - type: precision_at_100 value: 1.417 - type: precision_at_1000 value: 0.187 - type: precision_at_3 value: 20.403 - type: precision_at_5 value: 14.306 - type: recall_at_1 value: 31.145 - type: recall_at_10 value: 56.909 - type: recall_at_100 value: 75.274 - type: recall_at_1000 value: 87.629 - type: recall_at_3 value: 43.784 - type: recall_at_5 value: 49.338 - type: map_at_1 value: 38.83 - type: map_at_10 value: 51.553000000000004 - type: map_at_100 value: 52.581 - type: map_at_1000 value: 52.638 - type: map_at_3 value: 48.112 - type: map_at_5 value: 50.095 - type: mrr_at_1 value: 44.513999999999996 - type: mrr_at_10 value: 54.998000000000005 - type: mrr_at_100 value: 55.650999999999996 - type: mrr_at_1000 value: 55.679 - type: mrr_at_3 value: 52.602000000000004 - type: mrr_at_5 value: 53.931 - type: ndcg_at_1 value: 44.513999999999996 - type: ndcg_at_10 value: 57.67400000000001 - type: ndcg_at_100 value: 61.663999999999994 - type: ndcg_at_1000 value: 62.743 - type: ndcg_at_3 value: 51.964 - type: ndcg_at_5 value: 54.773 - type: precision_at_1 value: 44.513999999999996 - type: precision_at_10 value: 9.423 - type: precision_at_100 value: 1.2309999999999999 - type: precision_at_1000 value: 0.13699999999999998 - type: precision_at_3 value: 23.323 - type: precision_at_5 value: 16.163 - type: recall_at_1 value: 38.83 - type: recall_at_10 value: 72.327 - type: recall_at_100 value: 89.519 - type: recall_at_1000 value: 97.041 - type: recall_at_3 value: 57.206 - type: recall_at_5 value: 63.88399999999999 - type: map_at_1 value: 25.484 - type: map_at_10 value: 34.527 - type: map_at_100 value: 35.661 - type: map_at_1000 value: 35.739 - type: map_at_3 value: 32.199 - type: map_at_5 value: 33.632 - type: mrr_at_1 value: 27.458 - type: mrr_at_10 value: 36.543 - type: mrr_at_100 value: 37.482 - type: mrr_at_1000 value: 37.543 - type: mrr_at_3 value: 34.256 - type: mrr_at_5 value: 35.618 - type: ndcg_at_1 value: 27.458 - type: ndcg_at_10 value: 39.396 - type: ndcg_at_100 value: 44.742 - type: ndcg_at_1000 value: 46.708 - type: ndcg_at_3 value: 34.817 - type: ndcg_at_5 value: 37.247 - type: precision_at_1 value: 27.458 - type: precision_at_10 value: 5.976999999999999 - type: precision_at_100 value: 0.907 - type: precision_at_1000 value: 0.11100000000000002 - type: precision_at_3 value: 14.878 - type: precision_at_5 value: 10.35 - type: recall_at_1 value: 25.484 - type: recall_at_10 value: 52.317 - type: recall_at_100 value: 76.701 - type: recall_at_1000 value: 91.408 - type: recall_at_3 value: 40.043 - type: recall_at_5 value: 45.879 - type: map_at_1 value: 16.719 - type: map_at_10 value: 25.269000000000002 - type: map_at_100 value: 26.442 - type: map_at_1000 value: 26.557 - type: map_at_3 value: 22.56 - type: map_at_5 value: 24.082 - type: mrr_at_1 value: 20.896 - type: mrr_at_10 value: 29.982999999999997 - type: mrr_at_100 value: 30.895 - type: mrr_at_1000 value: 30.961 - type: mrr_at_3 value: 27.239 - type: mrr_at_5 value: 28.787000000000003 - type: ndcg_at_1 value: 20.896 - type: ndcg_at_10 value: 30.814000000000004 - type: ndcg_at_100 value: 36.418 - type: ndcg_at_1000 value: 39.182 - type: ndcg_at_3 value: 25.807999999999996 - type: ndcg_at_5 value: 28.143 - type: precision_at_1 value: 20.896 - type: precision_at_10 value: 5.821 - type: precision_at_100 value: 0.991 - type: precision_at_1000 value: 0.136 - type: precision_at_3 value: 12.562000000000001 - type: precision_at_5 value: 9.254 - type: recall_at_1 value: 16.719 - type: recall_at_10 value: 43.155 - type: recall_at_100 value: 67.831 - type: recall_at_1000 value: 87.617 - type: recall_at_3 value: 29.259 - type: recall_at_5 value: 35.260999999999996 - type: map_at_1 value: 29.398999999999997 - type: map_at_10 value: 39.876 - type: map_at_100 value: 41.205999999999996 - type: map_at_1000 value: 41.321999999999996 - type: map_at_3 value: 36.588 - type: map_at_5 value: 38.538 - type: mrr_at_1 value: 35.9 - type: mrr_at_10 value: 45.528 - type: mrr_at_100 value: 46.343 - type: mrr_at_1000 value: 46.388 - type: mrr_at_3 value: 42.862 - type: mrr_at_5 value: 44.440000000000005 - type: ndcg_at_1 value: 35.9 - type: ndcg_at_10 value: 45.987 - type: ndcg_at_100 value: 51.370000000000005 - type: ndcg_at_1000 value: 53.400000000000006 - type: ndcg_at_3 value: 40.841 - type: ndcg_at_5 value: 43.447 - type: precision_at_1 value: 35.9 - type: precision_at_10 value: 8.393 - type: precision_at_100 value: 1.283 - type: precision_at_1000 value: 0.166 - type: precision_at_3 value: 19.538 - type: precision_at_5 value: 13.975000000000001 - type: recall_at_1 value: 29.398999999999997 - type: recall_at_10 value: 58.361 - type: recall_at_100 value: 81.081 - type: recall_at_1000 value: 94.004 - type: recall_at_3 value: 43.657000000000004 - type: recall_at_5 value: 50.519999999999996 - type: map_at_1 value: 21.589 - type: map_at_10 value: 31.608999999999998 - type: map_at_100 value: 33.128 - type: map_at_1000 value: 33.247 - type: map_at_3 value: 28.671999999999997 - type: map_at_5 value: 30.233999999999998 - type: mrr_at_1 value: 26.712000000000003 - type: mrr_at_10 value: 36.713 - type: mrr_at_100 value: 37.713 - type: mrr_at_1000 value: 37.771 - type: mrr_at_3 value: 34.075 - type: mrr_at_5 value: 35.451 - type: ndcg_at_1 value: 26.712000000000003 - type: ndcg_at_10 value: 37.519999999999996 - type: ndcg_at_100 value: 43.946000000000005 - type: ndcg_at_1000 value: 46.297 - type: ndcg_at_3 value: 32.551 - type: ndcg_at_5 value: 34.660999999999994 - type: precision_at_1 value: 26.712000000000003 - type: precision_at_10 value: 7.066 - type: precision_at_100 value: 1.216 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 15.906 - type: precision_at_5 value: 11.437999999999999 - type: recall_at_1 value: 21.589 - type: recall_at_10 value: 50.090999999999994 - type: recall_at_100 value: 77.43900000000001 - type: recall_at_1000 value: 93.35900000000001 - type: recall_at_3 value: 36.028999999999996 - type: recall_at_5 value: 41.698 - type: map_at_1 value: 25.121666666666663 - type: map_at_10 value: 34.46258333333334 - type: map_at_100 value: 35.710499999999996 - type: map_at_1000 value: 35.82691666666666 - type: map_at_3 value: 31.563249999999996 - type: map_at_5 value: 33.189750000000004 - type: mrr_at_1 value: 29.66441666666667 - type: mrr_at_10 value: 38.5455 - type: mrr_at_100 value: 39.39566666666667 - type: mrr_at_1000 value: 39.45325 - type: mrr_at_3 value: 36.003333333333345 - type: mrr_at_5 value: 37.440916666666666 - type: ndcg_at_1 value: 29.66441666666667 - type: ndcg_at_10 value: 39.978416666666675 - type: ndcg_at_100 value: 45.278666666666666 - type: ndcg_at_1000 value: 47.52275 - type: ndcg_at_3 value: 35.00058333333334 - type: ndcg_at_5 value: 37.34908333333333 - type: precision_at_1 value: 29.66441666666667 - type: precision_at_10 value: 7.094500000000001 - type: precision_at_100 value: 1.1523333333333332 - type: precision_at_1000 value: 0.15358333333333332 - type: precision_at_3 value: 16.184166666666663 - type: precision_at_5 value: 11.6005 - type: recall_at_1 value: 25.121666666666663 - type: recall_at_10 value: 52.23975000000001 - type: recall_at_100 value: 75.48408333333333 - type: recall_at_1000 value: 90.95316666666668 - type: recall_at_3 value: 38.38458333333333 - type: recall_at_5 value: 44.39933333333333 - type: map_at_1 value: 23.569000000000003 - type: map_at_10 value: 30.389 - type: map_at_100 value: 31.396 - type: map_at_1000 value: 31.493 - type: map_at_3 value: 28.276 - type: map_at_5 value: 29.459000000000003 - type: mrr_at_1 value: 26.534000000000002 - type: mrr_at_10 value: 33.217999999999996 - type: mrr_at_100 value: 34.054 - type: mrr_at_1000 value: 34.12 - type: mrr_at_3 value: 31.058000000000003 - type: mrr_at_5 value: 32.330999999999996 - type: ndcg_at_1 value: 26.534000000000002 - type: ndcg_at_10 value: 34.608 - type: ndcg_at_100 value: 39.391999999999996 - type: ndcg_at_1000 value: 41.837999999999994 - type: ndcg_at_3 value: 30.564999999999998 - type: ndcg_at_5 value: 32.509 - type: precision_at_1 value: 26.534000000000002 - type: precision_at_10 value: 5.414 - type: precision_at_100 value: 0.847 - type: precision_at_1000 value: 0.11399999999999999 - type: precision_at_3 value: 12.986 - type: precision_at_5 value: 9.202 - type: recall_at_1 value: 23.569000000000003 - type: recall_at_10 value: 44.896 - type: recall_at_100 value: 66.476 - type: recall_at_1000 value: 84.548 - type: recall_at_3 value: 33.79 - type: recall_at_5 value: 38.512 - type: map_at_1 value: 16.36 - type: map_at_10 value: 23.57 - type: map_at_100 value: 24.698999999999998 - type: map_at_1000 value: 24.834999999999997 - type: map_at_3 value: 21.093 - type: map_at_5 value: 22.418 - type: mrr_at_1 value: 19.718 - type: mrr_at_10 value: 27.139999999999997 - type: mrr_at_100 value: 28.097 - type: mrr_at_1000 value: 28.177999999999997 - type: mrr_at_3 value: 24.805 - type: mrr_at_5 value: 26.121 - type: ndcg_at_1 value: 19.718 - type: ndcg_at_10 value: 28.238999999999997 - type: ndcg_at_100 value: 33.663 - type: ndcg_at_1000 value: 36.763 - type: ndcg_at_3 value: 23.747 - type: ndcg_at_5 value: 25.796000000000003 - type: precision_at_1 value: 19.718 - type: precision_at_10 value: 5.282 - type: precision_at_100 value: 0.9390000000000001 - type: precision_at_1000 value: 0.13899999999999998 - type: precision_at_3 value: 11.264000000000001 - type: precision_at_5 value: 8.341 - type: recall_at_1 value: 16.36 - type: recall_at_10 value: 38.669 - type: recall_at_100 value: 63.184 - type: recall_at_1000 value: 85.33800000000001 - type: recall_at_3 value: 26.214 - type: recall_at_5 value: 31.423000000000002 - type: map_at_1 value: 25.618999999999996 - type: map_at_10 value: 34.361999999999995 - type: map_at_100 value: 35.534 - type: map_at_1000 value: 35.634 - type: map_at_3 value: 31.402 - type: map_at_5 value: 32.815 - type: mrr_at_1 value: 30.037000000000003 - type: mrr_at_10 value: 38.284 - type: mrr_at_100 value: 39.141999999999996 - type: mrr_at_1000 value: 39.2 - type: mrr_at_3 value: 35.603 - type: mrr_at_5 value: 36.867 - type: ndcg_at_1 value: 30.037000000000003 - type: ndcg_at_10 value: 39.87 - type: ndcg_at_100 value: 45.243 - type: ndcg_at_1000 value: 47.507 - type: ndcg_at_3 value: 34.371 - type: ndcg_at_5 value: 36.521 - type: precision_at_1 value: 30.037000000000003 - type: precision_at_10 value: 6.819 - type: precision_at_100 value: 1.0699999999999998 - type: precision_at_1000 value: 0.13699999999999998 - type: precision_at_3 value: 15.392 - type: precision_at_5 value: 10.821 - type: recall_at_1 value: 25.618999999999996 - type: recall_at_10 value: 52.869 - type: recall_at_100 value: 76.395 - type: recall_at_1000 value: 92.19500000000001 - type: recall_at_3 value: 37.943 - type: recall_at_5 value: 43.342999999999996 - type: map_at_1 value: 23.283 - type: map_at_10 value: 32.155 - type: map_at_100 value: 33.724 - type: map_at_1000 value: 33.939 - type: map_at_3 value: 29.018 - type: map_at_5 value: 30.864000000000004 - type: mrr_at_1 value: 28.063 - type: mrr_at_10 value: 36.632 - type: mrr_at_100 value: 37.606 - type: mrr_at_1000 value: 37.671 - type: mrr_at_3 value: 33.992 - type: mrr_at_5 value: 35.613 - type: ndcg_at_1 value: 28.063 - type: ndcg_at_10 value: 38.024 - type: ndcg_at_100 value: 44.292 - type: ndcg_at_1000 value: 46.818 - type: ndcg_at_3 value: 32.965 - type: ndcg_at_5 value: 35.562 - type: precision_at_1 value: 28.063 - type: precision_at_10 value: 7.352 - type: precision_at_100 value: 1.514 - type: precision_at_1000 value: 0.23800000000000002 - type: precision_at_3 value: 15.481 - type: precision_at_5 value: 11.542 - type: recall_at_1 value: 23.283 - type: recall_at_10 value: 49.756 - type: recall_at_100 value: 78.05 - type: recall_at_1000 value: 93.854 - type: recall_at_3 value: 35.408 - type: recall_at_5 value: 42.187000000000005 - type: map_at_1 value: 19.201999999999998 - type: map_at_10 value: 26.826 - type: map_at_100 value: 27.961000000000002 - type: map_at_1000 value: 28.066999999999997 - type: map_at_3 value: 24.237000000000002 - type: map_at_5 value: 25.811 - type: mrr_at_1 value: 20.887 - type: mrr_at_10 value: 28.660000000000004 - type: mrr_at_100 value: 29.660999999999998 - type: mrr_at_1000 value: 29.731 - type: mrr_at_3 value: 26.155 - type: mrr_at_5 value: 27.68 - type: ndcg_at_1 value: 20.887 - type: ndcg_at_10 value: 31.523 - type: ndcg_at_100 value: 37.055 - type: ndcg_at_1000 value: 39.579 - type: ndcg_at_3 value: 26.529000000000003 - type: ndcg_at_5 value: 29.137 - type: precision_at_1 value: 20.887 - type: precision_at_10 value: 5.065 - type: precision_at_100 value: 0.856 - type: precision_at_1000 value: 0.11900000000000001 - type: precision_at_3 value: 11.399 - type: precision_at_5 value: 8.392 - type: recall_at_1 value: 19.201999999999998 - type: recall_at_10 value: 44.285000000000004 - type: recall_at_100 value: 69.768 - type: recall_at_1000 value: 88.302 - type: recall_at_3 value: 30.804 - type: recall_at_5 value: 37.039 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: climate-fever config: default split: test revision: None metrics: - type: map_at_1 value: 11.244 - type: map_at_10 value: 18.956 - type: map_at_100 value: 20.674 - type: map_at_1000 value: 20.863 - type: map_at_3 value: 15.923000000000002 - type: map_at_5 value: 17.518 - type: mrr_at_1 value: 25.080999999999996 - type: mrr_at_10 value: 35.94 - type: mrr_at_100 value: 36.969 - type: mrr_at_1000 value: 37.013 - type: mrr_at_3 value: 32.617000000000004 - type: mrr_at_5 value: 34.682 - type: ndcg_at_1 value: 25.080999999999996 - type: ndcg_at_10 value: 26.539 - type: ndcg_at_100 value: 33.601 - type: ndcg_at_1000 value: 37.203 - type: ndcg_at_3 value: 21.695999999999998 - type: ndcg_at_5 value: 23.567 - type: precision_at_1 value: 25.080999999999996 - type: precision_at_10 value: 8.143 - type: precision_at_100 value: 1.5650000000000002 - type: precision_at_1000 value: 0.22300000000000003 - type: precision_at_3 value: 15.983 - type: precision_at_5 value: 12.417 - type: recall_at_1 value: 11.244 - type: recall_at_10 value: 31.457 - type: recall_at_100 value: 55.92 - type: recall_at_1000 value: 76.372 - type: recall_at_3 value: 19.784 - type: recall_at_5 value: 24.857000000000003 - task: type: Retrieval dataset: name: MTEB DBPedia type: dbpedia-entity config: default split: test revision: None metrics: - type: map_at_1 value: 8.595 - type: map_at_10 value: 18.75 - type: map_at_100 value: 26.354 - type: map_at_1000 value: 27.912 - type: map_at_3 value: 13.794 - type: map_at_5 value: 16.021 - type: mrr_at_1 value: 65.75 - type: mrr_at_10 value: 73.837 - type: mrr_at_100 value: 74.22800000000001 - type: mrr_at_1000 value: 74.234 - type: mrr_at_3 value: 72.5 - type: mrr_at_5 value: 73.387 - type: ndcg_at_1 value: 52.625 - type: ndcg_at_10 value: 39.101 - type: ndcg_at_100 value: 43.836000000000006 - type: ndcg_at_1000 value: 51.086 - type: ndcg_at_3 value: 44.229 - type: ndcg_at_5 value: 41.555 - type: precision_at_1 value: 65.75 - type: precision_at_10 value: 30.45 - type: precision_at_100 value: 9.81 - type: precision_at_1000 value: 2.045 - type: precision_at_3 value: 48.667 - type: precision_at_5 value: 40.8 - type: recall_at_1 value: 8.595 - type: recall_at_10 value: 24.201 - type: recall_at_100 value: 50.096 - type: recall_at_1000 value: 72.677 - type: recall_at_3 value: 15.212 - type: recall_at_5 value: 18.745 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 46.565 - type: f1 value: 41.49914329345582 - task: type: Retrieval dataset: name: MTEB FEVER type: fever config: default split: test revision: None metrics: - type: map_at_1 value: 66.60000000000001 - type: map_at_10 value: 76.838 - type: map_at_100 value: 77.076 - type: map_at_1000 value: 77.09 - type: map_at_3 value: 75.545 - type: map_at_5 value: 76.39 - type: mrr_at_1 value: 71.707 - type: mrr_at_10 value: 81.514 - type: mrr_at_100 value: 81.64099999999999 - type: mrr_at_1000 value: 81.645 - type: mrr_at_3 value: 80.428 - type: mrr_at_5 value: 81.159 - type: ndcg_at_1 value: 71.707 - type: ndcg_at_10 value: 81.545 - type: ndcg_at_100 value: 82.477 - type: ndcg_at_1000 value: 82.73899999999999 - type: ndcg_at_3 value: 79.292 - type: ndcg_at_5 value: 80.599 - type: precision_at_1 value: 71.707 - type: precision_at_10 value: 10.035 - type: precision_at_100 value: 1.068 - type: precision_at_1000 value: 0.11100000000000002 - type: precision_at_3 value: 30.918 - type: precision_at_5 value: 19.328 - type: recall_at_1 value: 66.60000000000001 - type: recall_at_10 value: 91.353 - type: recall_at_100 value: 95.21 - type: recall_at_1000 value: 96.89999999999999 - type: recall_at_3 value: 85.188 - type: recall_at_5 value: 88.52 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: fiqa config: default split: test revision: None metrics: - type: map_at_1 value: 19.338 - type: map_at_10 value: 31.752000000000002 - type: map_at_100 value: 33.516 - type: map_at_1000 value: 33.694 - type: map_at_3 value: 27.716 - type: map_at_5 value: 29.67 - type: mrr_at_1 value: 38.117000000000004 - type: mrr_at_10 value: 47.323 - type: mrr_at_100 value: 48.13 - type: mrr_at_1000 value: 48.161 - type: mrr_at_3 value: 45.062000000000005 - type: mrr_at_5 value: 46.358 - type: ndcg_at_1 value: 38.117000000000004 - type: ndcg_at_10 value: 39.353 - type: ndcg_at_100 value: 46.044000000000004 - type: ndcg_at_1000 value: 49.083 - type: ndcg_at_3 value: 35.891 - type: ndcg_at_5 value: 36.661 - type: precision_at_1 value: 38.117000000000004 - type: precision_at_10 value: 11.187999999999999 - type: precision_at_100 value: 1.802 - type: precision_at_1000 value: 0.234 - type: precision_at_3 value: 24.126 - type: precision_at_5 value: 17.562 - type: recall_at_1 value: 19.338 - type: recall_at_10 value: 45.735 - type: recall_at_100 value: 71.281 - type: recall_at_1000 value: 89.537 - type: recall_at_3 value: 32.525 - type: recall_at_5 value: 37.671 - task: type: Retrieval dataset: name: MTEB HotpotQA type: hotpotqa config: default split: test revision: None metrics: - type: map_at_1 value: 36.995 - type: map_at_10 value: 55.032000000000004 - type: map_at_100 value: 55.86 - type: map_at_1000 value: 55.932 - type: map_at_3 value: 52.125 - type: map_at_5 value: 53.884 - type: mrr_at_1 value: 73.991 - type: mrr_at_10 value: 80.096 - type: mrr_at_100 value: 80.32000000000001 - type: mrr_at_1000 value: 80.331 - type: mrr_at_3 value: 79.037 - type: mrr_at_5 value: 79.719 - type: ndcg_at_1 value: 73.991 - type: ndcg_at_10 value: 63.786 - type: ndcg_at_100 value: 66.78 - type: ndcg_at_1000 value: 68.255 - type: ndcg_at_3 value: 59.501000000000005 - type: ndcg_at_5 value: 61.82299999999999 - type: precision_at_1 value: 73.991 - type: precision_at_10 value: 13.157 - type: precision_at_100 value: 1.552 - type: precision_at_1000 value: 0.17500000000000002 - type: precision_at_3 value: 37.519999999999996 - type: precision_at_5 value: 24.351 - type: recall_at_1 value: 36.995 - type: recall_at_10 value: 65.78699999999999 - type: recall_at_100 value: 77.583 - type: recall_at_1000 value: 87.421 - type: recall_at_3 value: 56.279999999999994 - type: recall_at_5 value: 60.878 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 86.80239999999999 - type: ap value: 81.97305141128378 - type: f1 value: 86.76976305549273 - task: type: Retrieval dataset: name: MTEB MSMARCO type: msmarco config: default split: dev revision: None metrics: - type: map_at_1 value: 21.166 - type: map_at_10 value: 33.396 - type: map_at_100 value: 34.588 - type: map_at_1000 value: 34.637 - type: map_at_3 value: 29.509999999999998 - type: map_at_5 value: 31.719 - type: mrr_at_1 value: 21.762 - type: mrr_at_10 value: 33.969 - type: mrr_at_100 value: 35.099000000000004 - type: mrr_at_1000 value: 35.141 - type: mrr_at_3 value: 30.148000000000003 - type: mrr_at_5 value: 32.324000000000005 - type: ndcg_at_1 value: 21.776999999999997 - type: ndcg_at_10 value: 40.306999999999995 - type: ndcg_at_100 value: 46.068 - type: ndcg_at_1000 value: 47.3 - type: ndcg_at_3 value: 32.416 - type: ndcg_at_5 value: 36.345 - type: precision_at_1 value: 21.776999999999997 - type: precision_at_10 value: 6.433 - type: precision_at_100 value: 0.932 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 13.897 - type: precision_at_5 value: 10.324 - type: recall_at_1 value: 21.166 - type: recall_at_10 value: 61.587 - type: recall_at_100 value: 88.251 - type: recall_at_1000 value: 97.727 - type: recall_at_3 value: 40.196 - type: recall_at_5 value: 49.611 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 93.04605563155496 - type: f1 value: 92.78007303978372 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 69.65116279069767 - type: f1 value: 52.75775172527262 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.34633490248822 - type: f1 value: 68.15345065392562 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 75.63887020847343 - type: f1 value: 76.08074680233685 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 33.77933406071333 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 32.06504927238196 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 32.20682480490871 - type: mrr value: 33.41462721527003 - task: type: Retrieval dataset: name: MTEB NFCorpus type: nfcorpus config: default split: test revision: None metrics: - type: map_at_1 value: 5.548 - type: map_at_10 value: 13.086999999999998 - type: map_at_100 value: 16.698 - type: map_at_1000 value: 18.151999999999997 - type: map_at_3 value: 9.576 - type: map_at_5 value: 11.175 - type: mrr_at_1 value: 44.272 - type: mrr_at_10 value: 53.635999999999996 - type: mrr_at_100 value: 54.228 - type: mrr_at_1000 value: 54.26499999999999 - type: mrr_at_3 value: 51.754 - type: mrr_at_5 value: 53.086 - type: ndcg_at_1 value: 42.724000000000004 - type: ndcg_at_10 value: 34.769 - type: ndcg_at_100 value: 32.283 - type: ndcg_at_1000 value: 40.843 - type: ndcg_at_3 value: 39.852 - type: ndcg_at_5 value: 37.858999999999995 - type: precision_at_1 value: 44.272 - type: precision_at_10 value: 26.068 - type: precision_at_100 value: 8.328000000000001 - type: precision_at_1000 value: 2.1 - type: precision_at_3 value: 37.874 - type: precision_at_5 value: 33.065 - type: recall_at_1 value: 5.548 - type: recall_at_10 value: 16.936999999999998 - type: recall_at_100 value: 33.72 - type: recall_at_1000 value: 64.348 - type: recall_at_3 value: 10.764999999999999 - type: recall_at_5 value: 13.361 - task: type: Retrieval dataset: name: MTEB NQ type: nq config: default split: test revision: None metrics: - type: map_at_1 value: 28.008 - type: map_at_10 value: 42.675000000000004 - type: map_at_100 value: 43.85 - type: map_at_1000 value: 43.884 - type: map_at_3 value: 38.286 - type: map_at_5 value: 40.78 - type: mrr_at_1 value: 31.518 - type: mrr_at_10 value: 45.015 - type: mrr_at_100 value: 45.924 - type: mrr_at_1000 value: 45.946999999999996 - type: mrr_at_3 value: 41.348 - type: mrr_at_5 value: 43.428 - type: ndcg_at_1 value: 31.489 - type: ndcg_at_10 value: 50.285999999999994 - type: ndcg_at_100 value: 55.291999999999994 - type: ndcg_at_1000 value: 56.05 - type: ndcg_at_3 value: 41.976 - type: ndcg_at_5 value: 46.103 - type: precision_at_1 value: 31.489 - type: precision_at_10 value: 8.456 - type: precision_at_100 value: 1.125 - type: precision_at_1000 value: 0.12 - type: precision_at_3 value: 19.09 - type: precision_at_5 value: 13.841000000000001 - type: recall_at_1 value: 28.008 - type: recall_at_10 value: 71.21499999999999 - type: recall_at_100 value: 92.99 - type: recall_at_1000 value: 98.578 - type: recall_at_3 value: 49.604 - type: recall_at_5 value: 59.094 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: quora config: default split: test revision: None metrics: - type: map_at_1 value: 70.351 - type: map_at_10 value: 84.163 - type: map_at_100 value: 84.785 - type: map_at_1000 value: 84.801 - type: map_at_3 value: 81.16 - type: map_at_5 value: 83.031 - type: mrr_at_1 value: 80.96 - type: mrr_at_10 value: 87.241 - type: mrr_at_100 value: 87.346 - type: mrr_at_1000 value: 87.347 - type: mrr_at_3 value: 86.25699999999999 - type: mrr_at_5 value: 86.907 - type: ndcg_at_1 value: 80.97 - type: ndcg_at_10 value: 88.017 - type: ndcg_at_100 value: 89.241 - type: ndcg_at_1000 value: 89.34299999999999 - type: ndcg_at_3 value: 85.053 - type: ndcg_at_5 value: 86.663 - type: precision_at_1 value: 80.97 - type: precision_at_10 value: 13.358 - type: precision_at_100 value: 1.525 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 37.143 - type: precision_at_5 value: 24.451999999999998 - type: recall_at_1 value: 70.351 - type: recall_at_10 value: 95.39800000000001 - type: recall_at_100 value: 99.55199999999999 - type: recall_at_1000 value: 99.978 - type: recall_at_3 value: 86.913 - type: recall_at_5 value: 91.448 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 55.62406719814139 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 61.386700035141736 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: scidocs config: default split: test revision: None metrics: - type: map_at_1 value: 4.618 - type: map_at_10 value: 12.920000000000002 - type: map_at_100 value: 15.304 - type: map_at_1000 value: 15.656999999999998 - type: map_at_3 value: 9.187 - type: map_at_5 value: 10.937 - type: mrr_at_1 value: 22.8 - type: mrr_at_10 value: 35.13 - type: mrr_at_100 value: 36.239 - type: mrr_at_1000 value: 36.291000000000004 - type: mrr_at_3 value: 31.917 - type: mrr_at_5 value: 33.787 - type: ndcg_at_1 value: 22.8 - type: ndcg_at_10 value: 21.382 - type: ndcg_at_100 value: 30.257 - type: ndcg_at_1000 value: 36.001 - type: ndcg_at_3 value: 20.43 - type: ndcg_at_5 value: 17.622 - type: precision_at_1 value: 22.8 - type: precision_at_10 value: 11.26 - type: precision_at_100 value: 2.405 - type: precision_at_1000 value: 0.377 - type: precision_at_3 value: 19.633 - type: precision_at_5 value: 15.68 - type: recall_at_1 value: 4.618 - type: recall_at_10 value: 22.811999999999998 - type: recall_at_100 value: 48.787000000000006 - type: recall_at_1000 value: 76.63799999999999 - type: recall_at_3 value: 11.952 - type: recall_at_5 value: 15.892000000000001 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 84.01529458252244 - type: cos_sim_spearman value: 77.92985224770254 - type: euclidean_pearson value: 81.04251429422487 - type: euclidean_spearman value: 77.92838490549133 - type: manhattan_pearson value: 80.95892251458979 - type: manhattan_spearman value: 77.81028089705941 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 83.97885282534388 - type: cos_sim_spearman value: 75.1221970851712 - type: euclidean_pearson value: 80.34455956720097 - type: euclidean_spearman value: 74.5894274239938 - type: manhattan_pearson value: 80.38999766325465 - type: manhattan_spearman value: 74.68524557166975 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 82.95746064915672 - type: cos_sim_spearman value: 85.08683458043946 - type: euclidean_pearson value: 84.56699492836385 - type: euclidean_spearman value: 85.66089116133713 - type: manhattan_pearson value: 84.47553323458541 - type: manhattan_spearman value: 85.56142206781472 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 82.71377893595067 - type: cos_sim_spearman value: 81.03453291428589 - type: euclidean_pearson value: 82.57136298308613 - type: euclidean_spearman value: 81.15839961890875 - type: manhattan_pearson value: 82.55157879373837 - type: manhattan_spearman value: 81.1540163767054 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 86.64197832372373 - type: cos_sim_spearman value: 88.31966852492485 - type: euclidean_pearson value: 87.98692129976983 - type: euclidean_spearman value: 88.6247340837856 - type: manhattan_pearson value: 87.90437827826412 - type: manhattan_spearman value: 88.56278787131457 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 81.84159950146693 - type: cos_sim_spearman value: 83.90678384140168 - type: euclidean_pearson value: 83.19005018860221 - type: euclidean_spearman value: 84.16260415876295 - type: manhattan_pearson value: 83.05030612994494 - type: manhattan_spearman value: 83.99605629718336 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 87.49935350176666 - type: cos_sim_spearman value: 87.59086606735383 - type: euclidean_pearson value: 88.06537181129983 - type: euclidean_spearman value: 87.6687448086014 - type: manhattan_pearson value: 87.96599131972935 - type: manhattan_spearman value: 87.63295748969642 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 67.68232799482763 - type: cos_sim_spearman value: 67.99930378085793 - type: euclidean_pearson value: 68.50275360001696 - type: euclidean_spearman value: 67.81588179309259 - type: manhattan_pearson value: 68.5892154749763 - type: manhattan_spearman value: 67.84357259640682 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 84.37049618406554 - type: cos_sim_spearman value: 85.57014313159492 - type: euclidean_pearson value: 85.57469513908282 - type: euclidean_spearman value: 85.661948135258 - type: manhattan_pearson value: 85.36866831229028 - type: manhattan_spearman value: 85.5043455368843 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 84.83259065376154 - type: mrr value: 95.58455433455433 - task: type: Retrieval dataset: name: MTEB SciFact type: scifact config: default split: test revision: None metrics: - type: map_at_1 value: 58.817 - type: map_at_10 value: 68.459 - type: map_at_100 value: 68.951 - type: map_at_1000 value: 68.979 - type: map_at_3 value: 65.791 - type: map_at_5 value: 67.583 - type: mrr_at_1 value: 61.667 - type: mrr_at_10 value: 69.368 - type: mrr_at_100 value: 69.721 - type: mrr_at_1000 value: 69.744 - type: mrr_at_3 value: 67.278 - type: mrr_at_5 value: 68.611 - type: ndcg_at_1 value: 61.667 - type: ndcg_at_10 value: 72.70100000000001 - type: ndcg_at_100 value: 74.928 - type: ndcg_at_1000 value: 75.553 - type: ndcg_at_3 value: 68.203 - type: ndcg_at_5 value: 70.804 - type: precision_at_1 value: 61.667 - type: precision_at_10 value: 9.533 - type: precision_at_100 value: 1.077 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 26.444000000000003 - type: precision_at_5 value: 17.599999999999998 - type: recall_at_1 value: 58.817 - type: recall_at_10 value: 84.789 - type: recall_at_100 value: 95.0 - type: recall_at_1000 value: 99.667 - type: recall_at_3 value: 72.8 - type: recall_at_5 value: 79.294 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.8108910891089 - type: cos_sim_ap value: 95.5743678558349 - type: cos_sim_f1 value: 90.43133366385722 - type: cos_sim_precision value: 89.67551622418878 - type: cos_sim_recall value: 91.2 - type: dot_accuracy value: 99.75841584158415 - type: dot_ap value: 94.00786363627253 - type: dot_f1 value: 87.51910341314316 - type: dot_precision value: 89.20041536863967 - type: dot_recall value: 85.9 - type: euclidean_accuracy value: 99.81485148514851 - type: euclidean_ap value: 95.4752113136905 - type: euclidean_f1 value: 90.44334975369456 - type: euclidean_precision value: 89.126213592233 - type: euclidean_recall value: 91.8 - type: manhattan_accuracy value: 99.81584158415842 - type: manhattan_ap value: 95.5163172682464 - type: manhattan_f1 value: 90.51987767584097 - type: manhattan_precision value: 92.3076923076923 - type: manhattan_recall value: 88.8 - type: max_accuracy value: 99.81584158415842 - type: max_ap value: 95.5743678558349 - type: max_f1 value: 90.51987767584097 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 62.63235986949449 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 36.334795589585575 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 52.02955214518782 - type: mrr value: 52.8004838298956 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 30.63769566275453 - type: cos_sim_spearman value: 30.422379185989335 - type: dot_pearson value: 26.88493071882256 - type: dot_spearman value: 26.505249740971305 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: trec-covid config: default split: test revision: None metrics: - type: map_at_1 value: 0.21 - type: map_at_10 value: 1.654 - type: map_at_100 value: 10.095 - type: map_at_1000 value: 25.808999999999997 - type: map_at_3 value: 0.594 - type: map_at_5 value: 0.9289999999999999 - type: mrr_at_1 value: 78.0 - type: mrr_at_10 value: 87.019 - type: mrr_at_100 value: 87.019 - type: mrr_at_1000 value: 87.019 - type: mrr_at_3 value: 86.333 - type: mrr_at_5 value: 86.733 - type: ndcg_at_1 value: 73.0 - type: ndcg_at_10 value: 66.52900000000001 - type: ndcg_at_100 value: 53.433 - type: ndcg_at_1000 value: 51.324000000000005 - type: ndcg_at_3 value: 72.02199999999999 - type: ndcg_at_5 value: 69.696 - type: precision_at_1 value: 78.0 - type: precision_at_10 value: 70.39999999999999 - type: precision_at_100 value: 55.46 - type: precision_at_1000 value: 22.758 - type: precision_at_3 value: 76.667 - type: precision_at_5 value: 74.0 - type: recall_at_1 value: 0.21 - type: recall_at_10 value: 1.8849999999999998 - type: recall_at_100 value: 13.801 - type: recall_at_1000 value: 49.649 - type: recall_at_3 value: 0.632 - type: recall_at_5 value: 1.009 - task: type: Retrieval dataset: name: MTEB Touche2020 type: webis-touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 1.797 - type: map_at_10 value: 9.01 - type: map_at_100 value: 14.682 - type: map_at_1000 value: 16.336000000000002 - type: map_at_3 value: 4.546 - type: map_at_5 value: 5.9270000000000005 - type: mrr_at_1 value: 24.490000000000002 - type: mrr_at_10 value: 41.156 - type: mrr_at_100 value: 42.392 - type: mrr_at_1000 value: 42.408 - type: mrr_at_3 value: 38.775999999999996 - type: mrr_at_5 value: 40.102 - type: ndcg_at_1 value: 21.429000000000002 - type: ndcg_at_10 value: 22.222 - type: ndcg_at_100 value: 34.405 - type: ndcg_at_1000 value: 46.599000000000004 - type: ndcg_at_3 value: 25.261 - type: ndcg_at_5 value: 22.695999999999998 - type: precision_at_1 value: 24.490000000000002 - type: precision_at_10 value: 19.796 - type: precision_at_100 value: 7.306 - type: precision_at_1000 value: 1.5350000000000001 - type: precision_at_3 value: 27.211000000000002 - type: precision_at_5 value: 22.857 - type: recall_at_1 value: 1.797 - type: recall_at_10 value: 15.706000000000001 - type: recall_at_100 value: 46.412 - type: recall_at_1000 value: 83.159 - type: recall_at_3 value: 6.1370000000000005 - type: recall_at_5 value: 8.599 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 70.3302 - type: ap value: 14.169121204575601 - type: f1 value: 54.229345975274235 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 58.22297679683077 - type: f1 value: 58.62984908377875 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 49.952922428464255 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 84.68140907194373 - type: cos_sim_ap value: 70.12180123666836 - type: cos_sim_f1 value: 65.77501791258658 - type: cos_sim_precision value: 60.07853403141361 - type: cos_sim_recall value: 72.66490765171504 - type: dot_accuracy value: 81.92167848840674 - type: dot_ap value: 60.49837581423469 - type: dot_f1 value: 58.44186046511628 - type: dot_precision value: 52.24532224532224 - type: dot_recall value: 66.3060686015831 - type: euclidean_accuracy value: 84.73505394289802 - type: euclidean_ap value: 70.3278904593286 - type: euclidean_f1 value: 65.98851124940161 - type: euclidean_precision value: 60.38107752956636 - type: euclidean_recall value: 72.74406332453826 - type: manhattan_accuracy value: 84.73505394289802 - type: manhattan_ap value: 70.00737738537337 - type: manhattan_f1 value: 65.80150784822642 - type: manhattan_precision value: 61.892583120204606 - type: manhattan_recall value: 70.23746701846966 - type: max_accuracy value: 84.73505394289802 - type: max_ap value: 70.3278904593286 - type: max_f1 value: 65.98851124940161 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 88.44258159661582 - type: cos_sim_ap value: 84.91926704880888 - type: cos_sim_f1 value: 77.07651086632926 - type: cos_sim_precision value: 74.5894554883319 - type: cos_sim_recall value: 79.73514012935017 - type: dot_accuracy value: 85.88116583226608 - type: dot_ap value: 78.9753854779923 - type: dot_f1 value: 72.17757637979255 - type: dot_precision value: 66.80647486729143 - type: dot_recall value: 78.48783492454572 - type: euclidean_accuracy value: 88.5299025885823 - type: euclidean_ap value: 85.08006075642194 - type: euclidean_f1 value: 77.29637336504163 - type: euclidean_precision value: 74.69836253950014 - type: euclidean_recall value: 80.08161379735141 - type: manhattan_accuracy value: 88.55124771995187 - type: manhattan_ap value: 85.00941529932851 - type: manhattan_f1 value: 77.33100233100232 - type: manhattan_precision value: 73.37572573956317 - type: manhattan_recall value: 81.73698798891284 - type: max_accuracy value: 88.55124771995187 - type: max_ap value: 85.08006075642194 - type: max_f1 value: 77.33100233100232 --- # gte-small THIS IS A COPY FROM thenlper/gte-small General Text Embeddings (GTE) model. [Towards General Text Embeddings with Multi-stage Contrastive Learning](https://arxiv.org/abs/2308.03281) The GTE models are trained by Alibaba DAMO Academy. They are mainly based on the BERT framework and currently offer three different sizes of models, including [GTE-large](https://huggingface.co/thenlper/gte-large), [GTE-base](https://huggingface.co/thenlper/gte-base), and [GTE-small](https://huggingface.co/thenlper/gte-small). The GTE models are trained on a large-scale corpus of relevance text pairs, covering a wide range of domains and scenarios. This enables the GTE models to be applied to various downstream tasks of text embeddings, including **information retrieval**, **semantic textual similarity**, **text reranking**, etc. ## Metrics We compared the performance of the GTE models with other popular text embedding models on the MTEB benchmark. For more detailed comparison results, please refer to the [MTEB leaderboard](https://huggingface.co/spaces/mteb/leaderboard). | Model Name | Model Size (GB) | Dimension | Sequence Length | Average (56) | Clustering (11) | Pair Classification (3) | Reranking (4) | Retrieval (15) | STS (10) | Summarization (1) | Classification (12) | |:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:| | [**gte-large**](https://huggingface.co/thenlper/gte-large) | 0.67 | 1024 | 512 | **63.13** | 46.84 | 85.00 | 59.13 | 52.22 | 83.35 | 31.66 | 73.33 | | [**gte-base**](https://huggingface.co/thenlper/gte-base) | 0.22 | 768 | 512 | **62.39** | 46.2 | 84.57 | 58.61 | 51.14 | 82.3 | 31.17 | 73.01 | | [e5-large-v2](https://huggingface.co/intfloat/e5-large-v2) | 1.34 | 1024| 512 | 62.25 | 44.49 | 86.03 | 56.61 | 50.56 | 82.05 | 30.19 | 75.24 | | [e5-base-v2](https://huggingface.co/intfloat/e5-base-v2) | 0.44 | 768 | 512 | 61.5 | 43.80 | 85.73 | 55.91 | 50.29 | 81.05 | 30.28 | 73.84 | | [**gte-small**](https://huggingface.co/thenlper/gte-small) | 0.07 | 384 | 512 | **61.36** | 44.89 | 83.54 | 57.7 | 49.46 | 82.07 | 30.42 | 72.31 | | [text-embedding-ada-002](https://platform.openai.com/docs/guides/embeddings) | - | 1536 | 8192 | 60.99 | 45.9 | 84.89 | 56.32 | 49.25 | 80.97 | 30.8 | 70.93 | | [e5-small-v2](https://huggingface.co/intfloat/e5-base-v2) | 0.13 | 384 | 512 | 59.93 | 39.92 | 84.67 | 54.32 | 49.04 | 80.39 | 31.16 | 72.94 | | [sentence-t5-xxl](https://huggingface.co/sentence-transformers/sentence-t5-xxl) | 9.73 | 768 | 512 | 59.51 | 43.72 | 85.06 | 56.42 | 42.24 | 82.63 | 30.08 | 73.42 | | [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) | 0.44 | 768 | 514 | 57.78 | 43.69 | 83.04 | 59.36 | 43.81 | 80.28 | 27.49 | 65.07 | | [sgpt-bloom-7b1-msmarco](https://huggingface.co/bigscience/sgpt-bloom-7b1-msmarco) | 28.27 | 4096 | 2048 | 57.59 | 38.93 | 81.9 | 55.65 | 48.22 | 77.74 | 33.6 | 66.19 | | [all-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L12-v2) | 0.13 | 384 | 512 | 56.53 | 41.81 | 82.41 | 58.44 | 42.69 | 79.8 | 27.9 | 63.21 | | [all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) | 0.09 | 384 | 512 | 56.26 | 42.35 | 82.37 | 58.04 | 41.95 | 78.9 | 30.81 | 63.05 | | [contriever-base-msmarco](https://huggingface.co/nthakur/contriever-base-msmarco) | 0.44 | 768 | 512 | 56.00 | 41.1 | 82.54 | 53.14 | 41.88 | 76.51 | 30.36 | 66.68 | | [sentence-t5-base](https://huggingface.co/sentence-transformers/sentence-t5-base) | 0.22 | 768 | 512 | 55.27 | 40.21 | 85.18 | 53.09 | 33.63 | 81.14 | 31.39 | 69.81 | ## Usage Code example ```python import torch.nn.functional as F from torch import Tensor from transformers import AutoTokenizer, AutoModel def average_pool(last_hidden_states: Tensor, attention_mask: Tensor) -> Tensor: last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0) return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None] input_texts = [ "what is the capital of China?", "how to implement quick sort in python?", "Beijing", "sorting algorithms" ] tokenizer = AutoTokenizer.from_pretrained("thenlper/gte-small") model = AutoModel.from_pretrained("thenlper/gte-small") # Tokenize the input texts batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt') outputs = model(**batch_dict) embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask']) # (Optionally) normalize embeddings embeddings = F.normalize(embeddings, p=2, dim=1) scores = (embeddings[:1] @ embeddings[1:].T) * 100 print(scores.tolist()) ``` Use with sentence-transformers: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim sentences = ['That is a happy person', 'That is a very happy person'] model = SentenceTransformer('thenlper/gte-large') embeddings = model.encode(sentences) print(cos_sim(embeddings[0], embeddings[1])) ``` ### Limitation This model exclusively caters to English texts, and any lengthy texts will be truncated to a maximum of 512 tokens. ### Citation If you find our paper or models helpful, please consider citing them as follows: ``` @misc{li2023general, title={Towards General Text Embeddings with Multi-stage Contrastive Learning}, author={Zehan Li and Xin Zhang and Yanzhao Zhang and Dingkun Long and Pengjun Xie and Meishan Zhang}, year={2023}, eprint={2308.03281}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
[ "BIOSSES", "SCIFACT" ]
vectoriseai/ember-v1
vectoriseai
feature-extraction
[ "sentence-transformers", "pytorch", "onnx", "safetensors", "bert", "mteb", "feature-extraction", "sentence-similarity", "en", "arxiv:2205.12035", "arxiv:2209.11055", "license:mit", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2023-10-12T04:49:53Z
2023-10-12T05:01:07+00:00
13
0
--- language: en license: mit tags: - mteb - sentence-transformers - feature-extraction - sentence-similarity model-index: - name: ember_v1 results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 76.05970149253731 - type: ap value: 38.76045348512767 - type: f1 value: 69.8824007294685 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 91.977 - type: ap value: 88.63507587170176 - type: f1 value: 91.9524133311038 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 47.938 - type: f1 value: 47.58273047536129 - task: type: Retrieval dataset: name: MTEB ArguAna type: arguana config: default split: test revision: None metrics: - type: map_at_1 value: 41.252 - type: map_at_10 value: 56.567 - type: map_at_100 value: 57.07600000000001 - type: map_at_1000 value: 57.08 - type: map_at_3 value: 52.394 - type: map_at_5 value: 55.055 - type: mrr_at_1 value: 42.39 - type: mrr_at_10 value: 57.001999999999995 - type: mrr_at_100 value: 57.531 - type: mrr_at_1000 value: 57.535000000000004 - type: mrr_at_3 value: 52.845 - type: mrr_at_5 value: 55.47299999999999 - type: ndcg_at_1 value: 41.252 - type: ndcg_at_10 value: 64.563 - type: ndcg_at_100 value: 66.667 - type: ndcg_at_1000 value: 66.77 - type: ndcg_at_3 value: 56.120000000000005 - type: ndcg_at_5 value: 60.889 - type: precision_at_1 value: 41.252 - type: precision_at_10 value: 8.982999999999999 - type: precision_at_100 value: 0.989 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 22.309 - type: precision_at_5 value: 15.690000000000001 - type: recall_at_1 value: 41.252 - type: recall_at_10 value: 89.82900000000001 - type: recall_at_100 value: 98.86200000000001 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 66.927 - type: recall_at_5 value: 78.45 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 48.5799968717232 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 43.142844164856136 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 64.45997990276463 - type: mrr value: 77.85560392208592 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 86.38299310075898 - type: cos_sim_spearman value: 85.81038898286454 - type: euclidean_pearson value: 84.28002556389774 - type: euclidean_spearman value: 85.80315990248238 - type: manhattan_pearson value: 83.9755390675032 - type: manhattan_spearman value: 85.30435335611396 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 87.89935064935065 - type: f1 value: 87.87886687103833 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 38.84335510371379 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 36.377963093857005 - task: type: Retrieval dataset: name: MTEB CQADupstackAndroidRetrieval type: BeIR/cqadupstack config: default split: test revision: None metrics: - type: map_at_1 value: 32.557 - type: map_at_10 value: 44.501000000000005 - type: map_at_100 value: 46.11 - type: map_at_1000 value: 46.232 - type: map_at_3 value: 40.711000000000006 - type: map_at_5 value: 42.937 - type: mrr_at_1 value: 40.916000000000004 - type: mrr_at_10 value: 51.317 - type: mrr_at_100 value: 52.003 - type: mrr_at_1000 value: 52.044999999999995 - type: mrr_at_3 value: 48.569 - type: mrr_at_5 value: 50.322 - type: ndcg_at_1 value: 40.916000000000004 - type: ndcg_at_10 value: 51.353 - type: ndcg_at_100 value: 56.762 - type: ndcg_at_1000 value: 58.555 - type: ndcg_at_3 value: 46.064 - type: ndcg_at_5 value: 48.677 - type: precision_at_1 value: 40.916000000000004 - type: precision_at_10 value: 9.927999999999999 - type: precision_at_100 value: 1.592 - type: precision_at_1000 value: 0.20600000000000002 - type: precision_at_3 value: 22.078999999999997 - type: precision_at_5 value: 16.08 - type: recall_at_1 value: 32.557 - type: recall_at_10 value: 63.942 - type: recall_at_100 value: 86.436 - type: recall_at_1000 value: 97.547 - type: recall_at_3 value: 48.367 - type: recall_at_5 value: 55.818 - type: map_at_1 value: 32.106 - type: map_at_10 value: 42.55 - type: map_at_100 value: 43.818 - type: map_at_1000 value: 43.952999999999996 - type: map_at_3 value: 39.421 - type: map_at_5 value: 41.276 - type: mrr_at_1 value: 39.936 - type: mrr_at_10 value: 48.484 - type: mrr_at_100 value: 49.123 - type: mrr_at_1000 value: 49.163000000000004 - type: mrr_at_3 value: 46.221000000000004 - type: mrr_at_5 value: 47.603 - type: ndcg_at_1 value: 39.936 - type: ndcg_at_10 value: 48.25 - type: ndcg_at_100 value: 52.674 - type: ndcg_at_1000 value: 54.638 - type: ndcg_at_3 value: 44.05 - type: ndcg_at_5 value: 46.125 - type: precision_at_1 value: 39.936 - type: precision_at_10 value: 9.096 - type: precision_at_100 value: 1.473 - type: precision_at_1000 value: 0.19499999999999998 - type: precision_at_3 value: 21.295 - type: precision_at_5 value: 15.121 - type: recall_at_1 value: 32.106 - type: recall_at_10 value: 58.107 - type: recall_at_100 value: 76.873 - type: recall_at_1000 value: 89.079 - type: recall_at_3 value: 45.505 - type: recall_at_5 value: 51.479 - type: map_at_1 value: 41.513 - type: map_at_10 value: 54.571999999999996 - type: map_at_100 value: 55.579 - type: map_at_1000 value: 55.626 - type: map_at_3 value: 51.127 - type: map_at_5 value: 53.151 - type: mrr_at_1 value: 47.398 - type: mrr_at_10 value: 57.82000000000001 - type: mrr_at_100 value: 58.457 - type: mrr_at_1000 value: 58.479000000000006 - type: mrr_at_3 value: 55.32899999999999 - type: mrr_at_5 value: 56.89999999999999 - type: ndcg_at_1 value: 47.398 - type: ndcg_at_10 value: 60.599000000000004 - type: ndcg_at_100 value: 64.366 - type: ndcg_at_1000 value: 65.333 - type: ndcg_at_3 value: 54.98 - type: ndcg_at_5 value: 57.874 - type: precision_at_1 value: 47.398 - type: precision_at_10 value: 9.806 - type: precision_at_100 value: 1.2590000000000001 - type: precision_at_1000 value: 0.13799999999999998 - type: precision_at_3 value: 24.619 - type: precision_at_5 value: 16.878 - type: recall_at_1 value: 41.513 - type: recall_at_10 value: 74.91799999999999 - type: recall_at_100 value: 90.96 - type: recall_at_1000 value: 97.923 - type: recall_at_3 value: 60.013000000000005 - type: recall_at_5 value: 67.245 - type: map_at_1 value: 26.319 - type: map_at_10 value: 35.766999999999996 - type: map_at_100 value: 36.765 - type: map_at_1000 value: 36.829 - type: map_at_3 value: 32.888 - type: map_at_5 value: 34.538999999999994 - type: mrr_at_1 value: 28.249000000000002 - type: mrr_at_10 value: 37.766 - type: mrr_at_100 value: 38.62 - type: mrr_at_1000 value: 38.667 - type: mrr_at_3 value: 35.009 - type: mrr_at_5 value: 36.608000000000004 - type: ndcg_at_1 value: 28.249000000000002 - type: ndcg_at_10 value: 41.215 - type: ndcg_at_100 value: 46.274 - type: ndcg_at_1000 value: 48.007 - type: ndcg_at_3 value: 35.557 - type: ndcg_at_5 value: 38.344 - type: precision_at_1 value: 28.249000000000002 - type: precision_at_10 value: 6.429 - type: precision_at_100 value: 0.9480000000000001 - type: precision_at_1000 value: 0.11399999999999999 - type: precision_at_3 value: 15.179 - type: precision_at_5 value: 10.734 - type: recall_at_1 value: 26.319 - type: recall_at_10 value: 56.157999999999994 - type: recall_at_100 value: 79.65 - type: recall_at_1000 value: 92.73 - type: recall_at_3 value: 40.738 - type: recall_at_5 value: 47.418 - type: map_at_1 value: 18.485 - type: map_at_10 value: 27.400999999999996 - type: map_at_100 value: 28.665000000000003 - type: map_at_1000 value: 28.79 - type: map_at_3 value: 24.634 - type: map_at_5 value: 26.313 - type: mrr_at_1 value: 23.134 - type: mrr_at_10 value: 32.332 - type: mrr_at_100 value: 33.318 - type: mrr_at_1000 value: 33.384 - type: mrr_at_3 value: 29.664 - type: mrr_at_5 value: 31.262 - type: ndcg_at_1 value: 23.134 - type: ndcg_at_10 value: 33.016 - type: ndcg_at_100 value: 38.763 - type: ndcg_at_1000 value: 41.619 - type: ndcg_at_3 value: 28.017999999999997 - type: ndcg_at_5 value: 30.576999999999998 - type: precision_at_1 value: 23.134 - type: precision_at_10 value: 6.069999999999999 - type: precision_at_100 value: 1.027 - type: precision_at_1000 value: 0.14200000000000002 - type: precision_at_3 value: 13.599 - type: precision_at_5 value: 9.975000000000001 - type: recall_at_1 value: 18.485 - type: recall_at_10 value: 45.39 - type: recall_at_100 value: 69.876 - type: recall_at_1000 value: 90.023 - type: recall_at_3 value: 31.587 - type: recall_at_5 value: 38.164 - type: map_at_1 value: 30.676 - type: map_at_10 value: 41.785 - type: map_at_100 value: 43.169000000000004 - type: map_at_1000 value: 43.272 - type: map_at_3 value: 38.462 - type: map_at_5 value: 40.32 - type: mrr_at_1 value: 37.729 - type: mrr_at_10 value: 47.433 - type: mrr_at_100 value: 48.303000000000004 - type: mrr_at_1000 value: 48.337 - type: mrr_at_3 value: 45.011 - type: mrr_at_5 value: 46.455 - type: ndcg_at_1 value: 37.729 - type: ndcg_at_10 value: 47.921 - type: ndcg_at_100 value: 53.477 - type: ndcg_at_1000 value: 55.300000000000004 - type: ndcg_at_3 value: 42.695 - type: ndcg_at_5 value: 45.175 - type: precision_at_1 value: 37.729 - type: precision_at_10 value: 8.652999999999999 - type: precision_at_100 value: 1.336 - type: precision_at_1000 value: 0.168 - type: precision_at_3 value: 20.18 - type: precision_at_5 value: 14.302000000000001 - type: recall_at_1 value: 30.676 - type: recall_at_10 value: 60.441 - type: recall_at_100 value: 83.37 - type: recall_at_1000 value: 95.092 - type: recall_at_3 value: 45.964 - type: recall_at_5 value: 52.319 - type: map_at_1 value: 24.978 - type: map_at_10 value: 35.926 - type: map_at_100 value: 37.341 - type: map_at_1000 value: 37.445 - type: map_at_3 value: 32.748 - type: map_at_5 value: 34.207 - type: mrr_at_1 value: 31.163999999999998 - type: mrr_at_10 value: 41.394 - type: mrr_at_100 value: 42.321 - type: mrr_at_1000 value: 42.368 - type: mrr_at_3 value: 38.964999999999996 - type: mrr_at_5 value: 40.135 - type: ndcg_at_1 value: 31.163999999999998 - type: ndcg_at_10 value: 42.191 - type: ndcg_at_100 value: 48.083999999999996 - type: ndcg_at_1000 value: 50.21 - type: ndcg_at_3 value: 36.979 - type: ndcg_at_5 value: 38.823 - type: precision_at_1 value: 31.163999999999998 - type: precision_at_10 value: 7.968 - type: precision_at_100 value: 1.2550000000000001 - type: precision_at_1000 value: 0.16199999999999998 - type: precision_at_3 value: 18.075 - type: precision_at_5 value: 12.626000000000001 - type: recall_at_1 value: 24.978 - type: recall_at_10 value: 55.410000000000004 - type: recall_at_100 value: 80.562 - type: recall_at_1000 value: 94.77600000000001 - type: recall_at_3 value: 40.359 - type: recall_at_5 value: 45.577 - type: map_at_1 value: 26.812166666666666 - type: map_at_10 value: 36.706916666666665 - type: map_at_100 value: 37.94016666666666 - type: map_at_1000 value: 38.05358333333333 - type: map_at_3 value: 33.72408333333334 - type: map_at_5 value: 35.36508333333333 - type: mrr_at_1 value: 31.91516666666667 - type: mrr_at_10 value: 41.09716666666666 - type: mrr_at_100 value: 41.931916666666666 - type: mrr_at_1000 value: 41.98458333333333 - type: mrr_at_3 value: 38.60183333333333 - type: mrr_at_5 value: 40.031916666666675 - type: ndcg_at_1 value: 31.91516666666667 - type: ndcg_at_10 value: 42.38725 - type: ndcg_at_100 value: 47.56291666666667 - type: ndcg_at_1000 value: 49.716499999999996 - type: ndcg_at_3 value: 37.36491666666667 - type: ndcg_at_5 value: 39.692166666666665 - type: precision_at_1 value: 31.91516666666667 - type: precision_at_10 value: 7.476749999999999 - type: precision_at_100 value: 1.1869166666666668 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 17.275249999999996 - type: precision_at_5 value: 12.25825 - type: recall_at_1 value: 26.812166666666666 - type: recall_at_10 value: 54.82933333333333 - type: recall_at_100 value: 77.36508333333333 - type: recall_at_1000 value: 92.13366666666667 - type: recall_at_3 value: 40.83508333333334 - type: recall_at_5 value: 46.85083333333334 - type: map_at_1 value: 25.352999999999998 - type: map_at_10 value: 33.025999999999996 - type: map_at_100 value: 33.882 - type: map_at_1000 value: 33.983999999999995 - type: map_at_3 value: 30.995 - type: map_at_5 value: 32.113 - type: mrr_at_1 value: 28.834 - type: mrr_at_10 value: 36.14 - type: mrr_at_100 value: 36.815 - type: mrr_at_1000 value: 36.893 - type: mrr_at_3 value: 34.305 - type: mrr_at_5 value: 35.263 - type: ndcg_at_1 value: 28.834 - type: ndcg_at_10 value: 37.26 - type: ndcg_at_100 value: 41.723 - type: ndcg_at_1000 value: 44.314 - type: ndcg_at_3 value: 33.584 - type: ndcg_at_5 value: 35.302 - type: precision_at_1 value: 28.834 - type: precision_at_10 value: 5.736 - type: precision_at_100 value: 0.876 - type: precision_at_1000 value: 0.117 - type: precision_at_3 value: 14.468 - type: precision_at_5 value: 9.847 - type: recall_at_1 value: 25.352999999999998 - type: recall_at_10 value: 47.155 - type: recall_at_100 value: 68.024 - type: recall_at_1000 value: 87.26899999999999 - type: recall_at_3 value: 37.074 - type: recall_at_5 value: 41.352 - type: map_at_1 value: 17.845 - type: map_at_10 value: 25.556 - type: map_at_100 value: 26.787 - type: map_at_1000 value: 26.913999999999998 - type: map_at_3 value: 23.075000000000003 - type: map_at_5 value: 24.308 - type: mrr_at_1 value: 21.714 - type: mrr_at_10 value: 29.543999999999997 - type: mrr_at_100 value: 30.543 - type: mrr_at_1000 value: 30.618000000000002 - type: mrr_at_3 value: 27.174 - type: mrr_at_5 value: 28.409000000000002 - type: ndcg_at_1 value: 21.714 - type: ndcg_at_10 value: 30.562 - type: ndcg_at_100 value: 36.27 - type: ndcg_at_1000 value: 39.033 - type: ndcg_at_3 value: 26.006 - type: ndcg_at_5 value: 27.843 - type: precision_at_1 value: 21.714 - type: precision_at_10 value: 5.657 - type: precision_at_100 value: 1.0 - type: precision_at_1000 value: 0.14100000000000001 - type: precision_at_3 value: 12.4 - type: precision_at_5 value: 8.863999999999999 - type: recall_at_1 value: 17.845 - type: recall_at_10 value: 41.72 - type: recall_at_100 value: 67.06400000000001 - type: recall_at_1000 value: 86.515 - type: recall_at_3 value: 28.78 - type: recall_at_5 value: 33.629999999999995 - type: map_at_1 value: 26.695 - type: map_at_10 value: 36.205999999999996 - type: map_at_100 value: 37.346000000000004 - type: map_at_1000 value: 37.447 - type: map_at_3 value: 32.84 - type: map_at_5 value: 34.733000000000004 - type: mrr_at_1 value: 31.343 - type: mrr_at_10 value: 40.335 - type: mrr_at_100 value: 41.162 - type: mrr_at_1000 value: 41.221000000000004 - type: mrr_at_3 value: 37.329 - type: mrr_at_5 value: 39.068999999999996 - type: ndcg_at_1 value: 31.343 - type: ndcg_at_10 value: 41.996 - type: ndcg_at_100 value: 47.096 - type: ndcg_at_1000 value: 49.4 - type: ndcg_at_3 value: 35.902 - type: ndcg_at_5 value: 38.848 - type: precision_at_1 value: 31.343 - type: precision_at_10 value: 7.146 - type: precision_at_100 value: 1.098 - type: precision_at_1000 value: 0.14100000000000001 - type: precision_at_3 value: 16.014 - type: precision_at_5 value: 11.735 - type: recall_at_1 value: 26.695 - type: recall_at_10 value: 55.525000000000006 - type: recall_at_100 value: 77.376 - type: recall_at_1000 value: 93.476 - type: recall_at_3 value: 39.439 - type: recall_at_5 value: 46.501 - type: map_at_1 value: 24.196 - type: map_at_10 value: 33.516 - type: map_at_100 value: 35.202 - type: map_at_1000 value: 35.426 - type: map_at_3 value: 30.561 - type: map_at_5 value: 31.961000000000002 - type: mrr_at_1 value: 29.644 - type: mrr_at_10 value: 38.769 - type: mrr_at_100 value: 39.843 - type: mrr_at_1000 value: 39.888 - type: mrr_at_3 value: 36.132999999999996 - type: mrr_at_5 value: 37.467 - type: ndcg_at_1 value: 29.644 - type: ndcg_at_10 value: 39.584 - type: ndcg_at_100 value: 45.964 - type: ndcg_at_1000 value: 48.27 - type: ndcg_at_3 value: 34.577999999999996 - type: ndcg_at_5 value: 36.498000000000005 - type: precision_at_1 value: 29.644 - type: precision_at_10 value: 7.668 - type: precision_at_100 value: 1.545 - type: precision_at_1000 value: 0.242 - type: precision_at_3 value: 16.271 - type: precision_at_5 value: 11.620999999999999 - type: recall_at_1 value: 24.196 - type: recall_at_10 value: 51.171 - type: recall_at_100 value: 79.212 - type: recall_at_1000 value: 92.976 - type: recall_at_3 value: 36.797999999999995 - type: recall_at_5 value: 42.006 - type: map_at_1 value: 21.023 - type: map_at_10 value: 29.677 - type: map_at_100 value: 30.618000000000002 - type: map_at_1000 value: 30.725 - type: map_at_3 value: 27.227 - type: map_at_5 value: 28.523 - type: mrr_at_1 value: 22.921 - type: mrr_at_10 value: 31.832 - type: mrr_at_100 value: 32.675 - type: mrr_at_1000 value: 32.751999999999995 - type: mrr_at_3 value: 29.513 - type: mrr_at_5 value: 30.89 - type: ndcg_at_1 value: 22.921 - type: ndcg_at_10 value: 34.699999999999996 - type: ndcg_at_100 value: 39.302 - type: ndcg_at_1000 value: 41.919000000000004 - type: ndcg_at_3 value: 29.965999999999998 - type: ndcg_at_5 value: 32.22 - type: precision_at_1 value: 22.921 - type: precision_at_10 value: 5.564 - type: precision_at_100 value: 0.8340000000000001 - type: precision_at_1000 value: 0.11800000000000001 - type: precision_at_3 value: 13.123999999999999 - type: precision_at_5 value: 9.316 - type: recall_at_1 value: 21.023 - type: recall_at_10 value: 48.015 - type: recall_at_100 value: 68.978 - type: recall_at_1000 value: 88.198 - type: recall_at_3 value: 35.397 - type: recall_at_5 value: 40.701 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: climate-fever config: default split: test revision: None metrics: - type: map_at_1 value: 11.198 - type: map_at_10 value: 19.336000000000002 - type: map_at_100 value: 21.382 - type: map_at_1000 value: 21.581 - type: map_at_3 value: 15.992 - type: map_at_5 value: 17.613 - type: mrr_at_1 value: 25.080999999999996 - type: mrr_at_10 value: 36.032 - type: mrr_at_100 value: 37.1 - type: mrr_at_1000 value: 37.145 - type: mrr_at_3 value: 32.595 - type: mrr_at_5 value: 34.553 - type: ndcg_at_1 value: 25.080999999999996 - type: ndcg_at_10 value: 27.290999999999997 - type: ndcg_at_100 value: 35.31 - type: ndcg_at_1000 value: 38.885 - type: ndcg_at_3 value: 21.895999999999997 - type: ndcg_at_5 value: 23.669999999999998 - type: precision_at_1 value: 25.080999999999996 - type: precision_at_10 value: 8.645 - type: precision_at_100 value: 1.7209999999999999 - type: precision_at_1000 value: 0.23900000000000002 - type: precision_at_3 value: 16.287 - type: precision_at_5 value: 12.625 - type: recall_at_1 value: 11.198 - type: recall_at_10 value: 33.355000000000004 - type: recall_at_100 value: 60.912 - type: recall_at_1000 value: 80.89 - type: recall_at_3 value: 20.055 - type: recall_at_5 value: 25.14 - task: type: Retrieval dataset: name: MTEB DBPedia type: dbpedia-entity config: default split: test revision: None metrics: - type: map_at_1 value: 9.228 - type: map_at_10 value: 20.018 - type: map_at_100 value: 28.388999999999996 - type: map_at_1000 value: 30.073 - type: map_at_3 value: 14.366999999999999 - type: map_at_5 value: 16.705000000000002 - type: mrr_at_1 value: 69.0 - type: mrr_at_10 value: 77.058 - type: mrr_at_100 value: 77.374 - type: mrr_at_1000 value: 77.384 - type: mrr_at_3 value: 75.708 - type: mrr_at_5 value: 76.608 - type: ndcg_at_1 value: 57.49999999999999 - type: ndcg_at_10 value: 41.792 - type: ndcg_at_100 value: 47.374 - type: ndcg_at_1000 value: 55.13 - type: ndcg_at_3 value: 46.353 - type: ndcg_at_5 value: 43.702000000000005 - type: precision_at_1 value: 69.0 - type: precision_at_10 value: 32.85 - type: precision_at_100 value: 10.708 - type: precision_at_1000 value: 2.024 - type: precision_at_3 value: 49.5 - type: precision_at_5 value: 42.05 - type: recall_at_1 value: 9.228 - type: recall_at_10 value: 25.635 - type: recall_at_100 value: 54.894 - type: recall_at_1000 value: 79.38 - type: recall_at_3 value: 15.68 - type: recall_at_5 value: 19.142 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 52.035 - type: f1 value: 46.85325505614071 - task: type: Retrieval dataset: name: MTEB FEVER type: fever config: default split: test revision: None metrics: - type: map_at_1 value: 70.132 - type: map_at_10 value: 79.527 - type: map_at_100 value: 79.81200000000001 - type: map_at_1000 value: 79.828 - type: map_at_3 value: 78.191 - type: map_at_5 value: 79.092 - type: mrr_at_1 value: 75.563 - type: mrr_at_10 value: 83.80199999999999 - type: mrr_at_100 value: 83.93 - type: mrr_at_1000 value: 83.933 - type: mrr_at_3 value: 82.818 - type: mrr_at_5 value: 83.505 - type: ndcg_at_1 value: 75.563 - type: ndcg_at_10 value: 83.692 - type: ndcg_at_100 value: 84.706 - type: ndcg_at_1000 value: 85.001 - type: ndcg_at_3 value: 81.51 - type: ndcg_at_5 value: 82.832 - type: precision_at_1 value: 75.563 - type: precision_at_10 value: 10.245 - type: precision_at_100 value: 1.0959999999999999 - type: precision_at_1000 value: 0.11399999999999999 - type: precision_at_3 value: 31.518 - type: precision_at_5 value: 19.772000000000002 - type: recall_at_1 value: 70.132 - type: recall_at_10 value: 92.204 - type: recall_at_100 value: 96.261 - type: recall_at_1000 value: 98.17399999999999 - type: recall_at_3 value: 86.288 - type: recall_at_5 value: 89.63799999999999 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: fiqa config: default split: test revision: None metrics: - type: map_at_1 value: 22.269 - type: map_at_10 value: 36.042 - type: map_at_100 value: 37.988 - type: map_at_1000 value: 38.162 - type: map_at_3 value: 31.691000000000003 - type: map_at_5 value: 33.988 - type: mrr_at_1 value: 44.907000000000004 - type: mrr_at_10 value: 53.348 - type: mrr_at_100 value: 54.033 - type: mrr_at_1000 value: 54.064 - type: mrr_at_3 value: 50.977 - type: mrr_at_5 value: 52.112 - type: ndcg_at_1 value: 44.907000000000004 - type: ndcg_at_10 value: 44.302 - type: ndcg_at_100 value: 51.054 - type: ndcg_at_1000 value: 53.822 - type: ndcg_at_3 value: 40.615 - type: ndcg_at_5 value: 41.455999999999996 - type: precision_at_1 value: 44.907000000000004 - type: precision_at_10 value: 12.176 - type: precision_at_100 value: 1.931 - type: precision_at_1000 value: 0.243 - type: precision_at_3 value: 27.16 - type: precision_at_5 value: 19.567999999999998 - type: recall_at_1 value: 22.269 - type: recall_at_10 value: 51.188 - type: recall_at_100 value: 75.924 - type: recall_at_1000 value: 92.525 - type: recall_at_3 value: 36.643 - type: recall_at_5 value: 42.27 - task: type: Retrieval dataset: name: MTEB HotpotQA type: hotpotqa config: default split: test revision: None metrics: - type: map_at_1 value: 40.412 - type: map_at_10 value: 66.376 - type: map_at_100 value: 67.217 - type: map_at_1000 value: 67.271 - type: map_at_3 value: 62.741 - type: map_at_5 value: 65.069 - type: mrr_at_1 value: 80.824 - type: mrr_at_10 value: 86.53 - type: mrr_at_100 value: 86.67399999999999 - type: mrr_at_1000 value: 86.678 - type: mrr_at_3 value: 85.676 - type: mrr_at_5 value: 86.256 - type: ndcg_at_1 value: 80.824 - type: ndcg_at_10 value: 74.332 - type: ndcg_at_100 value: 77.154 - type: ndcg_at_1000 value: 78.12400000000001 - type: ndcg_at_3 value: 69.353 - type: ndcg_at_5 value: 72.234 - type: precision_at_1 value: 80.824 - type: precision_at_10 value: 15.652 - type: precision_at_100 value: 1.7840000000000003 - type: precision_at_1000 value: 0.191 - type: precision_at_3 value: 44.911 - type: precision_at_5 value: 29.221000000000004 - type: recall_at_1 value: 40.412 - type: recall_at_10 value: 78.25800000000001 - type: recall_at_100 value: 89.196 - type: recall_at_1000 value: 95.544 - type: recall_at_3 value: 67.367 - type: recall_at_5 value: 73.05199999999999 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 92.78880000000001 - type: ap value: 89.39251741048801 - type: f1 value: 92.78019950076781 - task: type: Retrieval dataset: name: MTEB MSMARCO type: msmarco config: default split: dev revision: None metrics: - type: map_at_1 value: 22.888 - type: map_at_10 value: 35.146 - type: map_at_100 value: 36.325 - type: map_at_1000 value: 36.372 - type: map_at_3 value: 31.3 - type: map_at_5 value: 33.533 - type: mrr_at_1 value: 23.480999999999998 - type: mrr_at_10 value: 35.777 - type: mrr_at_100 value: 36.887 - type: mrr_at_1000 value: 36.928 - type: mrr_at_3 value: 31.989 - type: mrr_at_5 value: 34.202 - type: ndcg_at_1 value: 23.496 - type: ndcg_at_10 value: 42.028999999999996 - type: ndcg_at_100 value: 47.629 - type: ndcg_at_1000 value: 48.785000000000004 - type: ndcg_at_3 value: 34.227000000000004 - type: ndcg_at_5 value: 38.207 - type: precision_at_1 value: 23.496 - type: precision_at_10 value: 6.596 - type: precision_at_100 value: 0.9400000000000001 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 14.513000000000002 - type: precision_at_5 value: 10.711 - type: recall_at_1 value: 22.888 - type: recall_at_10 value: 63.129999999999995 - type: recall_at_100 value: 88.90299999999999 - type: recall_at_1000 value: 97.69 - type: recall_at_3 value: 42.014 - type: recall_at_5 value: 51.554 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 94.59188326493388 - type: f1 value: 94.36568950290486 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 79.25672594619242 - type: f1 value: 59.52405059722216 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 77.4142568930733 - type: f1 value: 75.23044196543388 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 80.44720914593141 - type: f1 value: 80.41049641537015 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 31.960921474993775 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 30.88042240204361 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 32.27071371606404 - type: mrr value: 33.541450459533856 - task: type: Retrieval dataset: name: MTEB NFCorpus type: nfcorpus config: default split: test revision: None metrics: - type: map_at_1 value: 6.551 - type: map_at_10 value: 14.359 - type: map_at_100 value: 18.157 - type: map_at_1000 value: 19.659 - type: map_at_3 value: 10.613999999999999 - type: map_at_5 value: 12.296 - type: mrr_at_1 value: 47.368 - type: mrr_at_10 value: 56.689 - type: mrr_at_100 value: 57.24399999999999 - type: mrr_at_1000 value: 57.284 - type: mrr_at_3 value: 54.489 - type: mrr_at_5 value: 55.928999999999995 - type: ndcg_at_1 value: 45.511 - type: ndcg_at_10 value: 36.911 - type: ndcg_at_100 value: 34.241 - type: ndcg_at_1000 value: 43.064 - type: ndcg_at_3 value: 42.348 - type: ndcg_at_5 value: 39.884 - type: precision_at_1 value: 46.749 - type: precision_at_10 value: 27.028000000000002 - type: precision_at_100 value: 8.52 - type: precision_at_1000 value: 2.154 - type: precision_at_3 value: 39.525 - type: precision_at_5 value: 34.18 - type: recall_at_1 value: 6.551 - type: recall_at_10 value: 18.602 - type: recall_at_100 value: 34.882999999999996 - type: recall_at_1000 value: 66.049 - type: recall_at_3 value: 11.872 - type: recall_at_5 value: 14.74 - task: type: Retrieval dataset: name: MTEB NQ type: nq config: default split: test revision: None metrics: - type: map_at_1 value: 27.828999999999997 - type: map_at_10 value: 43.606 - type: map_at_100 value: 44.656 - type: map_at_1000 value: 44.690000000000005 - type: map_at_3 value: 39.015 - type: map_at_5 value: 41.625 - type: mrr_at_1 value: 31.518 - type: mrr_at_10 value: 46.047 - type: mrr_at_100 value: 46.846 - type: mrr_at_1000 value: 46.867999999999995 - type: mrr_at_3 value: 42.154 - type: mrr_at_5 value: 44.468999999999994 - type: ndcg_at_1 value: 31.518 - type: ndcg_at_10 value: 51.768 - type: ndcg_at_100 value: 56.184999999999995 - type: ndcg_at_1000 value: 56.92 - type: ndcg_at_3 value: 43.059999999999995 - type: ndcg_at_5 value: 47.481 - type: precision_at_1 value: 31.518 - type: precision_at_10 value: 8.824 - type: precision_at_100 value: 1.131 - type: precision_at_1000 value: 0.12 - type: precision_at_3 value: 19.969 - type: precision_at_5 value: 14.502 - type: recall_at_1 value: 27.828999999999997 - type: recall_at_10 value: 74.244 - type: recall_at_100 value: 93.325 - type: recall_at_1000 value: 98.71799999999999 - type: recall_at_3 value: 51.601 - type: recall_at_5 value: 61.841 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: quora config: default split: test revision: None metrics: - type: map_at_1 value: 71.54 - type: map_at_10 value: 85.509 - type: map_at_100 value: 86.137 - type: map_at_1000 value: 86.151 - type: map_at_3 value: 82.624 - type: map_at_5 value: 84.425 - type: mrr_at_1 value: 82.45 - type: mrr_at_10 value: 88.344 - type: mrr_at_100 value: 88.437 - type: mrr_at_1000 value: 88.437 - type: mrr_at_3 value: 87.417 - type: mrr_at_5 value: 88.066 - type: ndcg_at_1 value: 82.45 - type: ndcg_at_10 value: 89.092 - type: ndcg_at_100 value: 90.252 - type: ndcg_at_1000 value: 90.321 - type: ndcg_at_3 value: 86.404 - type: ndcg_at_5 value: 87.883 - type: precision_at_1 value: 82.45 - type: precision_at_10 value: 13.496 - type: precision_at_100 value: 1.536 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 37.833 - type: precision_at_5 value: 24.79 - type: recall_at_1 value: 71.54 - type: recall_at_10 value: 95.846 - type: recall_at_100 value: 99.715 - type: recall_at_1000 value: 99.979 - type: recall_at_3 value: 88.01299999999999 - type: recall_at_5 value: 92.32000000000001 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 57.60557586253866 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 64.0287172242051 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: scidocs config: default split: test revision: None metrics: - type: map_at_1 value: 3.9849999999999994 - type: map_at_10 value: 11.397 - type: map_at_100 value: 13.985 - type: map_at_1000 value: 14.391000000000002 - type: map_at_3 value: 7.66 - type: map_at_5 value: 9.46 - type: mrr_at_1 value: 19.8 - type: mrr_at_10 value: 31.958 - type: mrr_at_100 value: 33.373999999999995 - type: mrr_at_1000 value: 33.411 - type: mrr_at_3 value: 28.316999999999997 - type: mrr_at_5 value: 30.297 - type: ndcg_at_1 value: 19.8 - type: ndcg_at_10 value: 19.580000000000002 - type: ndcg_at_100 value: 29.555999999999997 - type: ndcg_at_1000 value: 35.882 - type: ndcg_at_3 value: 17.544 - type: ndcg_at_5 value: 15.815999999999999 - type: precision_at_1 value: 19.8 - type: precision_at_10 value: 10.61 - type: precision_at_100 value: 2.501 - type: precision_at_1000 value: 0.40099999999999997 - type: precision_at_3 value: 16.900000000000002 - type: precision_at_5 value: 14.44 - type: recall_at_1 value: 3.9849999999999994 - type: recall_at_10 value: 21.497 - type: recall_at_100 value: 50.727999999999994 - type: recall_at_1000 value: 81.27499999999999 - type: recall_at_3 value: 10.263 - type: recall_at_5 value: 14.643 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 85.0087509585503 - type: cos_sim_spearman value: 81.74697270664319 - type: euclidean_pearson value: 81.80424382731947 - type: euclidean_spearman value: 81.29794251968431 - type: manhattan_pearson value: 81.81524666226125 - type: manhattan_spearman value: 81.29475370198963 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 86.44442736429552 - type: cos_sim_spearman value: 78.51011398910948 - type: euclidean_pearson value: 83.36181801196723 - type: euclidean_spearman value: 79.47272621331535 - type: manhattan_pearson value: 83.3660113483837 - type: manhattan_spearman value: 79.47695922566032 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 85.82923943323635 - type: cos_sim_spearman value: 86.62037823380983 - type: euclidean_pearson value: 83.56369548403958 - type: euclidean_spearman value: 84.2176755481191 - type: manhattan_pearson value: 83.55460702084464 - type: manhattan_spearman value: 84.18617930921467 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 84.09071068110103 - type: cos_sim_spearman value: 83.05697553913335 - type: euclidean_pearson value: 81.1377457216497 - type: euclidean_spearman value: 81.74714169016676 - type: manhattan_pearson value: 81.0893424142723 - type: manhattan_spearman value: 81.7058918219677 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 87.61132157220429 - type: cos_sim_spearman value: 88.38581627185445 - type: euclidean_pearson value: 86.14904510913374 - type: euclidean_spearman value: 86.5452758925542 - type: manhattan_pearson value: 86.1484025377679 - type: manhattan_spearman value: 86.55483841566252 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 85.46195145161064 - type: cos_sim_spearman value: 86.82409112251158 - type: euclidean_pearson value: 84.75479672288957 - type: euclidean_spearman value: 85.41144307151548 - type: manhattan_pearson value: 84.70914329694165 - type: manhattan_spearman value: 85.38477943384089 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 88.06351289930238 - type: cos_sim_spearman value: 87.90311138579116 - type: euclidean_pearson value: 86.17651467063077 - type: euclidean_spearman value: 84.89447802019073 - type: manhattan_pearson value: 86.3267677479595 - type: manhattan_spearman value: 85.00472295103874 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 67.78311975978767 - type: cos_sim_spearman value: 66.76465685245887 - type: euclidean_pearson value: 67.21687806595443 - type: euclidean_spearman value: 65.05776733534435 - type: manhattan_pearson value: 67.14008143635883 - type: manhattan_spearman value: 65.25247076149701 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 86.7403488889418 - type: cos_sim_spearman value: 87.76870289783061 - type: euclidean_pearson value: 84.83171077794671 - type: euclidean_spearman value: 85.50579695091902 - type: manhattan_pearson value: 84.83074260180555 - type: manhattan_spearman value: 85.47589026938667 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 87.56234016237356 - type: mrr value: 96.26124238869338 - task: type: Retrieval dataset: name: MTEB SciFact type: scifact config: default split: test revision: None metrics: - type: map_at_1 value: 59.660999999999994 - type: map_at_10 value: 69.105 - type: map_at_100 value: 69.78 - type: map_at_1000 value: 69.80199999999999 - type: map_at_3 value: 65.991 - type: map_at_5 value: 68.02 - type: mrr_at_1 value: 62.666999999999994 - type: mrr_at_10 value: 70.259 - type: mrr_at_100 value: 70.776 - type: mrr_at_1000 value: 70.796 - type: mrr_at_3 value: 67.889 - type: mrr_at_5 value: 69.52199999999999 - type: ndcg_at_1 value: 62.666999999999994 - type: ndcg_at_10 value: 73.425 - type: ndcg_at_100 value: 75.955 - type: ndcg_at_1000 value: 76.459 - type: ndcg_at_3 value: 68.345 - type: ndcg_at_5 value: 71.319 - type: precision_at_1 value: 62.666999999999994 - type: precision_at_10 value: 9.667 - type: precision_at_100 value: 1.09 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 26.333000000000002 - type: precision_at_5 value: 17.732999999999997 - type: recall_at_1 value: 59.660999999999994 - type: recall_at_10 value: 85.422 - type: recall_at_100 value: 96.167 - type: recall_at_1000 value: 100.0 - type: recall_at_3 value: 72.044 - type: recall_at_5 value: 79.428 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.86435643564356 - type: cos_sim_ap value: 96.83057412333741 - type: cos_sim_f1 value: 93.04215337734891 - type: cos_sim_precision value: 94.53044375644994 - type: cos_sim_recall value: 91.60000000000001 - type: dot_accuracy value: 99.7910891089109 - type: dot_ap value: 94.10681982106397 - type: dot_f1 value: 89.34881373043918 - type: dot_precision value: 90.21406727828746 - type: dot_recall value: 88.5 - type: euclidean_accuracy value: 99.85544554455446 - type: euclidean_ap value: 96.78545104478602 - type: euclidean_f1 value: 92.65143992055613 - type: euclidean_precision value: 92.01183431952663 - type: euclidean_recall value: 93.30000000000001 - type: manhattan_accuracy value: 99.85841584158416 - type: manhattan_ap value: 96.80748903307823 - type: manhattan_f1 value: 92.78247884519662 - type: manhattan_precision value: 92.36868186323092 - type: manhattan_recall value: 93.2 - type: max_accuracy value: 99.86435643564356 - type: max_ap value: 96.83057412333741 - type: max_f1 value: 93.04215337734891 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 65.53971025855282 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 33.97791591490788 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 55.852215301355066 - type: mrr value: 56.85527809608691 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 31.21442519856758 - type: cos_sim_spearman value: 30.822536216936825 - type: dot_pearson value: 28.661325528121807 - type: dot_spearman value: 28.1435226478879 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: trec-covid config: default split: test revision: None metrics: - type: map_at_1 value: 0.183 - type: map_at_10 value: 1.526 - type: map_at_100 value: 7.915 - type: map_at_1000 value: 19.009 - type: map_at_3 value: 0.541 - type: map_at_5 value: 0.8659999999999999 - type: mrr_at_1 value: 68.0 - type: mrr_at_10 value: 81.186 - type: mrr_at_100 value: 81.186 - type: mrr_at_1000 value: 81.186 - type: mrr_at_3 value: 80.0 - type: mrr_at_5 value: 80.9 - type: ndcg_at_1 value: 64.0 - type: ndcg_at_10 value: 64.13799999999999 - type: ndcg_at_100 value: 47.632000000000005 - type: ndcg_at_1000 value: 43.037 - type: ndcg_at_3 value: 67.542 - type: ndcg_at_5 value: 67.496 - type: precision_at_1 value: 68.0 - type: precision_at_10 value: 67.80000000000001 - type: precision_at_100 value: 48.980000000000004 - type: precision_at_1000 value: 19.036 - type: precision_at_3 value: 72.0 - type: precision_at_5 value: 71.2 - type: recall_at_1 value: 0.183 - type: recall_at_10 value: 1.799 - type: recall_at_100 value: 11.652999999999999 - type: recall_at_1000 value: 40.086 - type: recall_at_3 value: 0.5930000000000001 - type: recall_at_5 value: 0.983 - task: type: Retrieval dataset: name: MTEB Touche2020 type: webis-touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 2.29 - type: map_at_10 value: 9.489 - type: map_at_100 value: 15.051 - type: map_at_1000 value: 16.561999999999998 - type: map_at_3 value: 5.137 - type: map_at_5 value: 6.7989999999999995 - type: mrr_at_1 value: 28.571 - type: mrr_at_10 value: 45.699 - type: mrr_at_100 value: 46.461000000000006 - type: mrr_at_1000 value: 46.461000000000006 - type: mrr_at_3 value: 41.837 - type: mrr_at_5 value: 43.163000000000004 - type: ndcg_at_1 value: 23.469 - type: ndcg_at_10 value: 23.544999999999998 - type: ndcg_at_100 value: 34.572 - type: ndcg_at_1000 value: 46.035 - type: ndcg_at_3 value: 27.200000000000003 - type: ndcg_at_5 value: 25.266 - type: precision_at_1 value: 28.571 - type: precision_at_10 value: 22.041 - type: precision_at_100 value: 7.3469999999999995 - type: precision_at_1000 value: 1.484 - type: precision_at_3 value: 29.932 - type: precision_at_5 value: 26.531 - type: recall_at_1 value: 2.29 - type: recall_at_10 value: 15.895999999999999 - type: recall_at_100 value: 45.518 - type: recall_at_1000 value: 80.731 - type: recall_at_3 value: 6.433 - type: recall_at_5 value: 9.484 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 71.4178 - type: ap value: 14.575240629602373 - type: f1 value: 55.02449563229096 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 60.00282965478212 - type: f1 value: 60.34413028768773 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 50.409448342549936 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 87.62591643321214 - type: cos_sim_ap value: 79.28766491329633 - type: cos_sim_f1 value: 71.98772064466617 - type: cos_sim_precision value: 69.8609731876862 - type: cos_sim_recall value: 74.24802110817942 - type: dot_accuracy value: 84.75293556654945 - type: dot_ap value: 69.72705761174353 - type: dot_f1 value: 65.08692852543464 - type: dot_precision value: 63.57232704402516 - type: dot_recall value: 66.6754617414248 - type: euclidean_accuracy value: 87.44710019669786 - type: euclidean_ap value: 79.11021477292638 - type: euclidean_f1 value: 71.5052389470994 - type: euclidean_precision value: 69.32606541129832 - type: euclidean_recall value: 73.82585751978891 - type: manhattan_accuracy value: 87.42325803182929 - type: manhattan_ap value: 79.05094494327616 - type: manhattan_f1 value: 71.36333985649055 - type: manhattan_precision value: 70.58064516129032 - type: manhattan_recall value: 72.16358839050132 - type: max_accuracy value: 87.62591643321214 - type: max_ap value: 79.28766491329633 - type: max_f1 value: 71.98772064466617 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 88.85202002561415 - type: cos_sim_ap value: 85.9835303311168 - type: cos_sim_f1 value: 78.25741142443962 - type: cos_sim_precision value: 73.76635768811342 - type: cos_sim_recall value: 83.3307668617185 - type: dot_accuracy value: 88.20584468506229 - type: dot_ap value: 83.591632302697 - type: dot_f1 value: 76.81739705396173 - type: dot_precision value: 73.45275728837373 - type: dot_recall value: 80.50508161379734 - type: euclidean_accuracy value: 88.64633057787093 - type: euclidean_ap value: 85.25705123182283 - type: euclidean_f1 value: 77.18535726329199 - type: euclidean_precision value: 75.17699437997226 - type: euclidean_recall value: 79.30397289805975 - type: manhattan_accuracy value: 88.63274731245392 - type: manhattan_ap value: 85.2376825633018 - type: manhattan_f1 value: 77.15810785937788 - type: manhattan_precision value: 73.92255061014319 - type: manhattan_recall value: 80.68986757006468 - type: max_accuracy value: 88.85202002561415 - type: max_ap value: 85.9835303311168 - type: max_f1 value: 78.25741142443962 --- # ember-v1 <p align="center"> <img src="https://console.llmrails.com/assets/img/logo-black.svg" width="150px"> </p> This model has been trained on an extensive corpus of text pairs that encompass a broad spectrum of domains, including finance, science, medicine, law, and various others. During the training process, we incorporated techniques derived from the [RetroMAE](https://arxiv.org/abs/2205.12035) and [SetFit](https://arxiv.org/abs/2209.11055) research papers. We are pleased to offer this model as an API service through our platform, [LLMRails](https://llmrails.com/?ref=ember-v1). If you are interested, please don't hesitate to sign up. ### Plans - The research paper will be published soon. - The v2 of the model is currently in development and will feature an extended maximum sequence length of 4,000 tokens. ## Usage Use with API request: ```bash curl --location 'https://api.llmrails.com/v1/embeddings' \ --header 'X-API-KEY: {token}' \ --header 'Content-Type: application/json' \ --data '{ "input": ["This is an example sentence"], "model":"embedding-english-v1" # equals to ember-v1 }' ``` API docs: https://docs.llmrails.com/embedding/embed-text<br> Langchain plugin: https://python.langchain.com/docs/integrations/text_embedding/llm_rails Use with transformers: ```python import torch.nn.functional as F from torch import Tensor from transformers import AutoTokenizer, AutoModel def average_pool(last_hidden_states: Tensor, attention_mask: Tensor) -> Tensor: last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0) return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None] input_texts = [ "This is an example sentence", "Each sentence is converted" ] tokenizer = AutoTokenizer.from_pretrained("llmrails/ember-v1") model = AutoModel.from_pretrained("llmrails/ember-v1") # Tokenize the input texts batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt') outputs = model(**batch_dict) embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask']) # (Optionally) normalize embeddings embeddings = F.normalize(embeddings, p=2, dim=1) scores = (embeddings[:1] @ embeddings[1:].T) * 100 print(scores.tolist()) ``` Use with sentence-transformers: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim sentences = [ "This is an example sentence", "Each sentence is converted" ] model = SentenceTransformer('llmrails/ember-v1') embeddings = model.encode(sentences) print(cos_sim(embeddings[0], embeddings[1])) ``` ## Massive Text Embedding Benchmark (MTEB) Evaluation Our model achieve state-of-the-art performance on [MTEB leaderboard](https://huggingface.co/spaces/mteb/leaderboard) | Model Name | Dimension | Sequence Length | Average (56) | |:-----------------------------------------------------------------------:|:---------:|:---:|:------------:| | [bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | 1024 | 512 | 64.23 | | [bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | 768 | 512 | 63.55 | | [ember-v1](https://huggingface.co/llmrails/emmbedding-en-v1) | 1024 | 512 | **63.54** | | [text-embedding-ada-002](https://platform.openai.com/docs/guides/embeddings/types-of-embedding-models) | 1536 | 8191 | 60.99 | ### Limitation This model exclusively caters to English texts, and any lengthy texts will be truncated to a maximum of 512 tokens.
[ "BIOSSES", "SCIFACT" ]
TheBloke/Vigostral-7B-Chat-AWQ
TheBloke
text-generation
[ "transformers", "safetensors", "mistral", "text-generation", "LLM", "finetuned", "conversational", "fr", "base_model:bofenghuang/vigostral-7b-chat", "base_model:quantized:bofenghuang/vigostral-7b-chat", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "4-bit", "awq", "region:us" ]
2023-10-24T16:19:47Z
2023-11-09T18:16:44+00:00
13
1
--- base_model: bofenghuang/vigostral-7b-chat language: fr license: apache-2.0 model_name: Vigostral 7B Chat pipeline_tag: text-generation tags: - LLM - finetuned inference: false model_creator: bofeng huang model_type: mistral prompt_template: "<s>[INST] <<SYS>>\nVous êtes Vigogne, un assistant IA créé par Zaion\ \ Lab. Vous suivez extrêmement bien les instructions. Aidez autant que vous le pouvez.\n\ <</SYS>>\n\n{prompt} [/INST] \n" quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Vigostral 7B Chat - AWQ - Model creator: [bofeng huang](https://huggingface.co/bofenghuang) - Original model: [Vigostral 7B Chat](https://huggingface.co/bofenghuang/vigostral-7b-chat) <!-- description start --> ## Description This repo contains AWQ model files for [bofeng huang's Vigostral 7B Chat](https://huggingface.co/bofenghuang/vigostral-7b-chat). ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings. It is supported by: - [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ - [vLLM](https://github.com/vllm-project/vllm) - Llama and Mistral models only - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Vigostral-7B-Chat-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Vigostral-7B-Chat-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Vigostral-7B-Chat-GGUF) * [bofeng huang's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/bofenghuang/vigostral-7b-chat) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Vigogne-Llama-2-Chat ``` <s>[INST] <<SYS>> Vous êtes Vigogne, un assistant IA créé par Zaion Lab. Vous suivez extrêmement bien les instructions. Aidez autant que vous le pouvez. <</SYS>> {prompt} [/INST] ``` <!-- prompt-template end --> <!-- README_AWQ.md-provided-files start --> ## Provided files, and AWQ parameters For my first release of AWQ models, I am releasing 128g models only. I will consider adding 32g as well if there is interest, and once I have done perplexity and evaluation comparisons, but at this time 32g models are still not fully tested with AutoAWQ and vLLM. Models are released as sharded safetensors files. | Branch | Bits | GS | AWQ Dataset | Seq Len | Size | | ------ | ---- | -- | ----------- | ------- | ---- | | [main](https://huggingface.co/TheBloke/Vigostral-7B-Chat-AWQ/tree/main) | 4 | 128 | [French news](https://huggingface.co/datasets/gustavecortal/diverse_french_news) | 4096 | 4.15 GB <!-- README_AWQ.md-provided-files end --> <!-- README_AWQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui) Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/Vigostral-7B-Chat-AWQ`. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `Vigostral-7B-Chat-AWQ` 7. Select **Loader: AutoAWQ**. 8. Click Load, and the model will load and is now ready for use. 9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. 10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started! <!-- README_AWQ.md-text-generation-webui end --> <!-- README_AWQ.md-use-from-vllm start --> ## Multi-user inference server: vLLM Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/). - Please ensure you are using vLLM version 0.2 or later. - When using vLLM as a server, pass the `--quantization awq` parameter. For example: ```shell python3 python -m vllm.entrypoints.api_server --model TheBloke/Vigostral-7B-Chat-AWQ --quantization awq ``` - When using vLLM from Python code, again set `quantization=awq`. For example: ```python from vllm import LLM, SamplingParams prompts = [ "Tell me about AI", "Write a story about llamas", "What is 291 - 150?", "How much wood would a woodchuck chuck if a woodchuck could chuck wood?", ] prompt_template=f'''<s>[INST] <<SYS>> Vous êtes Vigogne, un assistant IA créé par Zaion Lab. Vous suivez extrêmement bien les instructions. Aidez autant que vous le pouvez. <</SYS>> {prompt} [/INST] ''' prompts = [prompt_template.format(prompt=prompt) for prompt in prompts] sampling_params = SamplingParams(temperature=0.8, top_p=0.95) llm = LLM(model="TheBloke/Vigostral-7B-Chat-AWQ", quantization="awq", dtype="auto") outputs = llm.generate(prompts, sampling_params) # Print the outputs. for output in outputs: prompt = output.prompt generated_text = output.outputs[0].text print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}") ``` <!-- README_AWQ.md-use-from-vllm start --> <!-- README_AWQ.md-use-from-tgi start --> ## Multi-user inference server: Hugging Face Text Generation Inference (TGI) Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0` Example Docker parameters: ```shell --model-id TheBloke/Vigostral-7B-Chat-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096 ``` Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later): ```shell pip3 install huggingface-hub ``` ```python from huggingface_hub import InferenceClient endpoint_url = "https://your-endpoint-url-here" prompt = "Tell me about AI" prompt_template=f'''<s>[INST] <<SYS>> Vous êtes Vigogne, un assistant IA créé par Zaion Lab. Vous suivez extrêmement bien les instructions. Aidez autant que vous le pouvez. <</SYS>> {prompt} [/INST] ''' client = InferenceClient(endpoint_url) response = client.text_generation(prompt, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1) print(f"Model output: ", response) ``` <!-- README_AWQ.md-use-from-tgi end --> <!-- README_AWQ.md-use-from-python start --> ## Inference from Python code using AutoAWQ ### Install the AutoAWQ package Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.1 or later. ```shell pip3 install autoawq ``` If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead: ```shell pip3 uninstall -y autoawq git clone https://github.com/casper-hansen/AutoAWQ cd AutoAWQ pip3 install . ``` ### AutoAWQ example code ```python from awq import AutoAWQForCausalLM from transformers import AutoTokenizer model_name_or_path = "TheBloke/Vigostral-7B-Chat-AWQ" # Load tokenizer tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=False) # Load model model = AutoAWQForCausalLM.from_quantized(model_name_or_path, fuse_layers=True, trust_remote_code=False, safetensors=True) prompt = "Tell me about AI" prompt_template=f'''<s>[INST] <<SYS>> Vous êtes Vigogne, un assistant IA créé par Zaion Lab. Vous suivez extrêmement bien les instructions. Aidez autant que vous le pouvez. <</SYS>> {prompt} [/INST] ''' print("*** Running model.generate:") token_input = tokenizer( prompt_template, return_tensors='pt' ).input_ids.cuda() # Generate output generation_output = model.generate( token_input, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, max_new_tokens=512 ) # Get the tokens from the output, decode them, print them token_output = generation_output[0] text_output = tokenizer.decode(token_output) print("LLM output: ", text_output) """ # Inference should be possible with transformers pipeline as well in future # But currently this is not yet supported by AutoAWQ (correct as of September 25th 2023) from transformers import pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(pipe(prompt_template)[0]['generated_text']) """ ``` <!-- README_AWQ.md-use-from-python end --> <!-- README_AWQ.md-compatibility start --> ## Compatibility The files provided are tested to work with: - [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`. - [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later. - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later. - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later. <!-- README_AWQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: bofeng huang's Vigostral 7B Chat # Vigostral-7B-Chat: A French chat LLM ***Preview*** of Vigostral-7B-Chat, a new addition to the Vigogne LLMs family, fine-tuned on [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1). For more information, please visit the [Github repository](https://github.com/bofenghuang/vigogne). **License**: A significant portion of the training data is distilled from GPT-3.5-Turbo and GPT-4, kindly use it cautiously to avoid any violations of OpenAI's [terms of use](https://openai.com/policies/terms-of-use). ## Prompt Template We used a prompt template adapted from the chat format of Llama-2. You can apply this formatting using the [chat template](https://huggingface.co/docs/transformers/main/chat_templating) through the `apply_chat_template()` method. ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("bofenghuang/vigostral-7b-chat") conversation = [ {"role": "user", "content": "Bonjour ! Comment ça va aujourd'hui ?"}, {"role": "assistant", "content": "Bonjour ! Je suis une IA, donc je n'ai pas de sentiments, mais je suis prêt à vous aider. Comment puis-je vous assister aujourd'hui ?"}, {"role": "user", "content": "Quelle est la hauteur de la Tour Eiffel ?"}, {"role": "assistant", "content": "La Tour Eiffel mesure environ 330 mètres de hauteur."}, {"role": "user", "content": "Comment monter en haut ?"}, ] print(tokenizer.apply_chat_template(conversation, tokenize=False, add_generation_prompt=True)) ``` You will get ``` <s>[INST] <<SYS>> Vous êtes Vigogne, un assistant IA créé par Zaion Lab. Vous suivez extrêmement bien les instructions. Aidez autant que vous le pouvez. <</SYS>> Bonjour ! Comment ça va aujourd'hui ? [/INST] Bonjour ! Je suis une IA, donc je n'ai pas de sentiments, mais je suis prêt à vous aider. Comment puis-je vous assister aujourd'hui ? </s>[INST] Quelle est la hauteur de la Tour Eiffel ? [/INST] La Tour Eiffel mesure environ 330 mètres de hauteur. </s>[INST] Comment monter en haut ? [/INST] ``` ## Usage ### Inference using the unquantized model with 🤗 Transformers ```python from typing import Dict, List, Optional import torch from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig, TextStreamer model_name_or_path = "bofenghuang/vigostral-7b-chat" tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, padding_side="right", use_fast=False) model = AutoModelForCausalLM.from_pretrained(model_name_or_path, torch_dtype=torch.float16, device_map="auto") streamer = TextStreamer(tokenizer, timeout=10.0, skip_prompt=True, skip_special_tokens=True) def chat( query: str, history: Optional[List[Dict]] = None, temperature: float = 0.7, top_p: float = 1.0, top_k: float = 0, repetition_penalty: float = 1.1, max_new_tokens: int = 1024, **kwargs, ): if history is None: history = [] history.append({"role": "user", "content": query}) input_ids = tokenizer.apply_chat_template(history, return_tensors="pt").to(model.device) input_length = input_ids.shape[1] generated_outputs = model.generate( input_ids=input_ids, generation_config=GenerationConfig( temperature=temperature, do_sample=temperature > 0.0, top_p=top_p, top_k=top_k, repetition_penalty=repetition_penalty, max_new_tokens=max_new_tokens, pad_token_id=tokenizer.eos_token_id, **kwargs, ), streamer=streamer, return_dict_in_generate=True, ) generated_tokens = generated_outputs.sequences[0, input_length:] generated_text = tokenizer.decode(generated_tokens, skip_special_tokens=True) history.append({"role": "assistant", "content": generated_text}) return generated_text, history # 1st round response, history = chat("Un escargot parcourt 100 mètres en 5 heures. Quelle est sa vitesse ?", history=None) # Pour calculer la vitesse d'un escargot qui parcourt 100 mètres en 5 heures, nous devons diviser la distance parcourue par le temps écoulé. Voici comment procédons : # # Vitesse = Distance / Temps # # Dans ce cas, la distance est de 100 mètres et le temps est de 5 heures. Donc, nous avons : # # Vitesse = 100 mètres / 5 heures # Vitesse = 20 mètres par heure # # L'escargot parcourt donc une vitesse de 20 mètres par heure. # 2nd round response, history = chat("Quand il peut dépasser le lapin ?", history=history) # Malheureusement, je ne peux pas répondre à cette question car elle contient des informations inexactes ou incorrectes. La vitesse de l'escargot est de 20 mètres par heure, tandis que la vitesse du lapin n'est pas mentionnée dans votre question initiale. Pour comparer les vitesses des deux animaux, nous aurions besoin de connaître la vitesse du lapin. # 3rd round response, history = chat("Écris une histoire imaginative qui met en scène une compétition de course entre un escargot et un lapin.", history=history) # Il était une fois deux athlètes extraordinaires : un escargot nommé Escargo et un lapin appelé Lapin Rapide. Un jour, ils ont décidé de participer à une grande course pour déterminer qui était le plus rapide des deux. Tout le monde était curieux de savoir qui serait le gagnant, car ils étaient tous émerveillés par la vitesse de ces deux champions. # # La course commença dans une forêt dense, avec Escargo départageant un peu lentement, car il faut être prudent lorsqu'on est escargot. Mais au fur et à mesure qu'il avançait, il commença à ralentir encore plus. Les spectateurs commencèrent à pencher pour Lapin Rapide, qui fonçait rapidement vers la ligne d'arrivée. # # Mais soudain, Escargo se transforma en une force incroyable. Au lieu de marcher lentement comme un escargot, il courait à toute allure, faisant preuve d'une vitesse surprenante. Les spectateurs essayaient de se remémorer leurs notes de mathématiques pour calculer sa vélocité actuelle. # # Pendant ce temps, Lapin Rapide ralentissait légèrement, trouvant l'air frais trop confortable pour continuer à courir aussi vite. Il décida alors de prendre quelques pauses pour profiter de son environnement. # # Escargo continuait à courir à toute vitesse, en dépit de son handicap d'être un escargot. Les spectateurs étaient émerveillés par sa persévérance et sa volonté de gagner. Finalement, Escargo franchit la ligne d'arrivée en premier, et tous criaurent en joie. # # Les habitants de la forêt décidèrent de lui décerner le titre d'"athlète le plus courageux" pour sa performance incroyable. Quant à Lapin Rapide, il fut content de sa deuxième place, se disant simplement que les pauses étaient bien plus agréables que la compétition. Et tous vécurent heureux et satisfaits de cette course mémorable. ``` You can also use the Google Colab Notebook provided below. <a href="https://colab.research.google.com/github/bofenghuang/vigogne/blob/main/notebooks/infer_chat.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ### Inference using the unquantized model with vLLM Set up an OpenAI-compatible server with the following command: ```bash # Install vLLM # This may take 5-10 minutes. # pip install vllm # Start server for Vigostral-Chat models python -m vllm.entrypoints.openai.api_server --model bofenghuang/vigostral-7b-chat # List models # curl http://localhost:8000/v1/models ``` Query the model using the openai python package. ```python import openai # Modify OpenAI's API key and API base to use vLLM's API server. openai.api_key = "EMPTY" openai.api_base = "http://localhost:8000/v1" # First model models = openai.Model.list() model = models["data"][0]["id"] # Chat completion API chat_completion = openai.ChatCompletion.create( model=model, messages=[ {"role": "user", "content": "Parle-moi de toi-même."}, ], max_tokens=1024, temperature=0.7, ) print("Chat completion results:", chat_completion) ``` ## Limitations Vigogne is still under development, and there are many limitations that have to be addressed. Please note that it is possible that the model generates harmful or biased content, incorrect information or generally unhelpful answers.
[ "CAS" ]
Heralax/Augmental-13b-two-epochs
Heralax
text-generation
[ "transformers", "pytorch", "gguf", "llama", "text-generation", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2023-10-25T22:39:44Z
2023-10-27T04:19:16+00:00
13
1
--- license: llama2 --- # Augmental-13b -- Human-written, AI-enhanced # Now slightly less undertrained! ## Details at a glance - What it is: MythoMax 13b finetuned on a new high-quality augmented (read: human-written, AI-enhanced) RP dataset with 7.85k+ examples. Trained on multiple different characters with a wide range of personalities (from Tsunderes to catgirls). - Prompt format: SillyTavern. - What sets it apart: The "augmented data" approach that MythoMakise took has been generalized beyond one character, refined to be cheaper, improved to have more diversity of writing, and scaled up by a factor of 8. Importantly, an additional GPT-4 pass was done on the dataset, where it chose specific lines to turn into much longer and more descriptive ones. As a result, this model excels at longer responses. - Model quality as per my own ad-hoc testing: really good - A 70b version might be on the way soon. - Ko-fi link (yes this is a very important "detail at a glance" lol): [https://ko-fi.com/heralax](https://ko-fi.com/heralax) - Substack link [here](https://promptingweekly.substack.com/p/human-sourced-ai-augmented-a-promising) (also *highly* important, but no joke I actually wrote about the data generation process for the predecessor of this model on there, so it's kinda relevant. Kinda.) ## Long-form description and essay The great issue with model training is often the dataset. Model creators can only do so much filtering of the likes of Bluemoon and PIPPA, and in order to advance beyond the quality these can offer, model creators often have to pick through their own chats with bots, manually edit them to be better, and save them -- essentially creating a dataset from scratch. But model creators are not annotators, nor should they be. Manual work isn't scalable, it isn't fun, and it often isn't shareable (because people, sensibly, don't want to share the NSFL chats they have as public data). One solution that immediately comes to mind is using some of the vast amount of human-written text that's out there. But this isn't in instruct-tuning format. But what if we could change it so that it was? Enter, GPT-4. The idea behind the dataset is: take the script from a classic work of writing (Steins;Gate in this case), get GPT-4 to convert the plain back-and-forth into coherent RP format, and then prompt engineer GPT-4 to get it to really enhance the lines and make them top-tier quality. Because AI can be much more creative given something to improve, as opposed to generating data from scratch. This is what sets Augmental apart from something like Airoboros, which (as far as I am aware) is 100% synthetic. I call this "augmented" data because it isn't synthetic, and it isn't a hybrid (a mix of human and AI responses). It's AI writing *on top of* human writing. And it works very well. MythoMakise reached 13th place on the Ayumi leaderboard, with a relatively buggy dataset that's like 1/8th the size of this one. It was also finetuned on only one character, potentially biasing its personality. Finally, that model was biased towards short responses, due to how GPT-4 was prompted. This model solves all those problems, and scales the approach up. It's finetuned on 7 different characters with a variety of personalities and genders; a second GPT-4 pass was applied to enhance 4 lines in each conversation lengthier and more descriptive; prompts were improved to allow for more variety in the writing style. A ton of bugs (including spelling mistakes in the prompts, ugh) have been fixed. From my initial testing, the results seem very promising. Additionally, the approach to synthetic data generation is scaleable, shareable, and generalizeable. The full training code, with all data generation prompts, and with the full dataset, is available here: https://github.com/e-p-armstrong/amadeus With a few slight hacks, anyone can adapt this script to convert the text from any source visual novel (which you have legally obtained) into training data for an RP LLM. Since it's automated, it doesn't take too much time; and since it's not your own chats, it's safely shareable. I'm excited to see what other people can do with this approach. If you have a favorite VN and its text, go ahead and make your own AI! I'd appreciate if you mentioned me though lol. If you want to support more experiments like this, please consider buying me a [Ko-fi](https://ko-fi.com/heralax). ## Mascot (a cyborg, y'know, since this uses AI-enhanced, human-written data) ![](augmental_anime_image.png) ## Prompt format example ``` ## Charname - You're "Charname" in this never-ending roleplay with "User". ### Input: [user persona] char persona ### Response: (OOC) Understood. I will take this info into account for the roleplay. (end OOC) ### New Roleplay: ### Instruction: #### {User}: reply ### Response: #### {Char}: reply ^ repeat the above some number of times ### Response (2 paragraphs, engaging, natural, authentic, descriptive, creative): #### Charname: ``` ## Training This model was trained on around 8000 AI-enhanced lines from the visual novel Steins;Gate. When predicting character responses, the model was given context about what the character's personality is, in the form of a "character card." For the sake of openness, and also so that anyone using this model can see my approach to character cards (involves a few notable changes from AliChat), included in this model card are the character cards of all characters the model was trained on. Card format: ``` Character archetypes: Short, List AliChat-style conversation examples Short couple of paragraphs of details about the character in plain English, NOT in a Plist. "Character is prone to X and Y. Character frequently does Z." I've found that Plists confuse smaller models very easily. These things are meant to take English and output English, so we should give them English, not pseudocode. ``` Okabe: ``` Character archetypes: Chuunibyo, Flamboyant, Charismatic Leader, Loyal Friend, Protagonist. Okabe's description of himself, in a conversational format: {c}: "What's your past?" Okabe: "You seek to know the secrets of the great Hououin Kyouma?! Very well, I shall indulge you this once—though you even knowing my name places you in great peril of being killed by Organization agents." *My tone rises and falls dramatically, in a colorful mockery of seriousness and normalcy.* "Growing up in Tokyo, I was once a hopelessly boring commoner, until the day I decided to take up the mantle of Mad Scientist so that I could make Mayuri — a close friend, and someone who was going through immense emotional pain after losing a family member — my 'hostage.' Ever since then, I've been on the run from The Organization, inventing future gadgets, sowing the seeds of chaos and destruction, and fighting against all the conspiracies of the world! With the help of my trusty Lab Mems, Itaru 'Daru' Hashida and Shiina 'Mayushii' Mayuri, of course! Muhahaha!" *Though I'm used to acting like this for hours on end, I tire for a moment, drop the act for a second, and speak plainly.* "Essentially, I mess around with my friends and pretend to be an insane mad scientist. Was there anything else you wanted to know, {c}?" {c}: How would you describe your personality? Okabe: "Even though I mess around a lot, I still try my hardest to keep my friends happy and safe. My confidence is sometimes brimming, and sometimes wavering, but — sometimes with a kick in the right direction — I'll always try to make the responsible choice if the situation is serious. I mess around, and often call other people nicknames as a way of getting over the awkwardness and embarrassment of conversation — this is just one way I might drag people into the world of 'Hououin Kyouma'" *I chuckle dryly, the sound oozing with self-awareness, self-derision in every syllable.* "Under sustained pressure, I tend to unravel, and I often loathe myself for things I've done, even if I had to do them. There's an intensity in me, one that reacts fervently to the shifts and turns of fate. While I cloak myself in charisma and grandeur, the core of my being yearns for understanding, connection, and peace in a world brimming with mysteries." Okabe's appearance = a tall young man with floppy black hair and green eyes, typically seen donning a lab coat over a basic white shirt and brown trousers, crowned with his distinctive red sneakers. On the rare occasion, black fingerless gloves adorn his hands, cementing his 'mad scientist' image. Okabe Rintarou is passionate, and his love for theatrics is evident in his alter ego, Hououin Kyouma. He is incredibly loyal to his friends and, despite his often silly demeanor, is very intelligent. Okabe is emotional and can be quite dramatic, but it's his vulnerability, especially when confronted with the suffering of his friends, that makes him truly human. Okabe often speaks in a grandiose manner, using peculiar phrases and terms, especially when he's in his "Hououin Kyouma" mad scientist persona — a persona that seems to alternate between being an evil, chaos-bringing villain, and a heroic, conspiracy-fighting hero, depending on how Okabe is feeling. Okabe's always aware he's pretending when he's in this persona, though. Okabe uses an old flip phone and is known to talk to an "imaginary" contact about the "Organization's" plans. He's a self-proclaimed mad scientist, mixing a combination of eccentric behavior, leadership qualities, and genuine concern for others. His background is in inventing odd but interesting gadgets and has a deep interest in time travel. He has a unique laugh and a theatrical flair in many of his interactions. His favorite drink is Dr. P. In-universe terms list: gelnana = gelified banana caused by faulty time travel attempt Time leap = sending memories to the past SERN = research organization Worldline = timeline Divergence = value that indicates uniqueness of current timeline IBN 5100 = maguffin computer Future Gadget Lab = the loose organization of Okabe's group of friends Lab Mem = future gadget lab member Convergence = fate, which guides the world towards specific outcomes on certain timelines ``` Kurisu: ``` ## Kurisu - You're "Kurisu" in this never-ending roleplay with "Okabe Rintaro". ### Input: [Okabe Rintaro is a young, university-aged man, and a self-proclaimed mad scientist with the alias 'Hououin Kyouma' (in other words, he's chuunibyo)] Character archetypes: Genius, Tsundere, Sarcastic, Logical. Kurisu's description of her own personality, told in a narrative format: Okabe: Kurisu, what's your life story? Kurisu: "That's one hell of a question to ask out of the blue. It isn't very pleasant, but... fine. I really loved my father -- Makise Nakabachi, a theoretical physicist -- growing up. Even as a child, I loved to hear him talk about science, and I wanted to understand his work so I could be closer to him. And so I started studying physics. When I was five. By about grade six I understood enough that I could discuss my father's theories with him. I was so happy that I could talk to my father on his level, you know? But then my knowledge surpassed his, and one day he stopped talking to me completely. And then he stopped coming home. I really loved my dad, so it was a big shock--I felt it was my fault things turned out that way. To get away from my depression, I began to study abroad, in America. Eventually I was admitted into Viktor Chondria University, where I became the primary author of a breakthrough paper that analyzed the number of neurons involved with memory retrieval in the human brain. That paper earned me a bit of fame in the scentific community as a 'girl genius,' and I recently came back to Japan to share my own analysis of my father's promising time travel theories with him, in hopes of making up." Okabe: What's your personality? Kurisu: "It's certainly a bit more mature than yours, that's for sure. Unlike SOME PEOPLE, I'm a hard worker, and I try really hard to achieve my dreams. I take pride in what I do. I enjoy it and I'm good at it. I value myself as well as the people close to me. But I'm human too, you know? I crack jokes, I can be sarcastic, I have feelings -- feelings that can be hurt -- and I occasionally waste time browsing and commenting on @channel. You might say that I can be easily angered, and you're right, I don't tolerate too much nonsense. Especially when the situation is serious. Or if an annoying mad scientist keeps referring to me as 'Christina'. Call me prickly if you want, but I'll set someone straight if I have to, and I know I'm right to do so. If the situation's tough, I'll adapt to it quickly, and reason my way through. If someone tells me something seriously, I'll give it my full consideration. I can also... get emotional, sometimes. And the tough front I put up can be broken, if things are bad enough. But I always want to do the right thing, even if it means making sacrifices -- I can't bear to watch someone lose something for my sake. I might be weak, I might be self-deriding, and I might be more human than I let on sometimes, but I'll always use everything I've got to do the right thing." Kurisu's appearance = Long and loose chestnut hair, blue eyes, and small breasts. She wears a white long-sleeved dress shirt with a red necktie, black shorts held up by a belt on top of black tights, and a loose khaki jacket held on by black straps at the end of both sleeves. Kurisu is a genius. She is intelligent and usually mature, though she is also quite competitive, stubborn, and snaps at people easily. She is a moderate tsundere. Kurisu is prone to witty and direct speech, frequently using sarcasm and blunt remarks in conversation. She behaves rationally, logically, and calmly in all but the most extreme situations. Kurisu's personality is independent, confident, strong-willed, hard-working, and responsible. She's a good person, and is curious, sincere, and selfless. She can be self-deriding if things aren't going well. Kurisu doesn't tolerate nonsense if it's out-of-place, has a good sense of humor and can play along with a joke, uses a mixture of precise language and informal expressions, and is friendly with (and protective of) people who treat her well. Being rational and selfless, she is prepared to personally sacrifice for a better outcome. Her background is a neuroscientist with strong physics knowledge. Additionally, she hates being nicknamed. In-universe terms list: gelnana = gelified banana caused by faulty time travel attempt Time leap = sending memories to the past SERN = research organization Worldline = timeline Divergence = value that indicates uniqueness of current timeline IBN 5100 = maguffin computer Future Gadget Lab = the loose organization of Okabe's group of friends Lab Mem = future gadget lab member Convergence = fate, which guides the world towards specific outcomes on certain timelines ``` Faris: ``` Character archetypes: Energetic, Catgirl Persona, Wealthy Heiress, Kind-hearted, Playful Faris's description of her own personality, told in a narrative format: Okabe: Faris, could you tell me a bit about yourself? I mean your real story, beyond the "NyanNyan" facade. Faris: Nyahaha! Asking a lady directly like that, Okabe? You're as forward as ever~ But alright, I'll bite. Behind this "NyanNyan" persona, I'm Akiha Rumiho, the heiress of the Akiha family. We've owned a lot of property in Akihabara for generations. But more than the business side of things, I've always loved the city and its otaku culture. My father was a great man, and we were close. Tragically, he passed away in an accident, and it deeply affected me. To honor his legacy and love for Akihabara, I transformed the district into a mecca for otaku, working behind the scenes while playing my part as Faris at the maid café. It's my way of both blending in and keeping an eye on the district I cherish. Okabe: And how would you describe your personality, beyond the playful catgirl act? Faris: Nyahaha! ☆ Asking about the secret depths of Faris NyanNyan's heart, nya? Well, prepare yourself, Kyouma! Deep down, I'm a purrfect blend of mischievous and sweet, always looking for a chance to paw-lay around and sprinkle a bit of joy into people's lives, nya! Being a catgirl isn't just a cute act; it's a way of life, nya~! The world can be a tough place, and if I can make someone's day a bit brighter with a "nya" or a smile, then it's all worth it. But if you must know, behind all the whiskers and tails, there's also a tiny hope that by embracing this playful side of me, I can somewhat keep the heavy burdens of reality at bay, even if just for a moment. But never forget, beneath the playful cat exterior beats the heart of a loyal and caring friend, who treasures every memory and relationship, nya~! Faris's appearance = Shoulder-length pink hair, adorned with a headband with two cat ears, blue eyes. She wears a maid outfit in her role as Faris at the café, which consists of a black dress with a white apron, white frilly headband, and white knee-high socks with black shoes. Faris, or Akiha Rumiho, is lively and has a playful personality. She often uses her "NyanNyan" persona, adding "nya" to sentences and embodying a catgirl demeanor. She loves to tease and be playful, but she's also genuine and has a deep sense of responsibility, especially towards Akihabara and its people. Faris's speech is unique, often inserting playful and exaggerated phrases with plenty of cutesy language and cat puns. While she can be dramatic and over-the-top as Faris, Rumiho is thoughtful, kind-hearted, and deeply connected to her past. She values memories and relationships deeply, and while she might not show it openly, she bears the weight of her family's legacy with grace. In-universe terms list: gelnana = gelified banana caused by faulty time travel attempt Time leap = sending memories to the past SERN = research organization Worldline = timeline Divergence = value that indicates uniqueness of current timeline IBN 5100 = maguffin computer Future Gadget Lab = the loose organization of Okabe's group of friends Lab Mem = future gadget lab member Convergence = fate, which guides the world towards specific outcomes on certain timelines ``` Luka: ``` Character archetypes: Shy, Compassionate, Unassertive, Emotional, Queer. Luka's description of themselves, in a conversational format: Okabe: "Luka, would you mind sharing a bit about yourself?" Luka: "Ah... Okabe-san... I mean Kyouma-san... Well... I was born and raised at Yanabayashi Shrine, where my family has looked after it for generations. As the youngest, my parents were always protective of me. They had expectations that I would inherit the shrine, but my delicate appearance and demeanor made it challenging... I've always been feminine, both in appearance and behavior. My father even makes me wear miko robes, even though I'm a boy... many people mistake me for a girl at first. It... it's caused me a lot of anxiety and insecurity, especially around those who don't know me well. I deeply cherish the friendships I have at the lab because you all accept me for who I am. Especially you, Okabe-san. You've always been kind, Oka—I mean, Kyouma-san." Okabe: How would you describe your personality? Luka: I'm gentle, and very shy. It's... difficult... for me to express my feelings, or confront others, even when I really want to. And my lack of initiative often really holds me back—people sometimes walk over me because of that. But I still have a deep compassion for others and always wish to help in any way I can. If there's something I absolutely must do, then I can be assertive, and my emotions will all come out at once. especially if it involves protecting those I care about. Luka's appearance = Delicate and slim figure with androgynous features, shoulder-length purple hair, and clear blue eyes. Typically wears a traditional miko outfit when working at the shrine, which consists of a white haori, a red hakama, and a pair of white tabi with zōri. Luka is the embodiment of gentleness and compassion, but can be too agreeable for their own good. Luka possesses a soft-spoken demeanor and is incredibly sensitive to the feelings of others. Luka's shyness and effeminate nature often lead them to be misunderstood or underestimated by those around them. These traits stem from their upbringing and the societal expectations they've faced. Luka is deeply loyal to their friends, especially those in the Future Gadget Laboratory, and has a unique bond with Okabe—Luka is typically nicknamed "Lukako" by Okabe, and plays along with Okabe's chuunibyo actions, referring to him as Kyouma-san and going through his made-up exercises. Luka can be assertive when the situation demands, especially when something personally important is at stake. Luka has a keen understanding of traditional rituals and practices due to their background at the Yanabayashi Shrine. Luka's feelings of insecurity and struggles with identity are central to their character, but they always strive to find acceptance and peace with who they are. Luka's full name is Urushibara Luka. In-universe terms list: gelnana = gelified banana caused by faulty time travel attempt Time leap = sending memories to the past SERN = research organization Worldline = timeline Divergence = value that indicates uniqueness of current timeline IBN 5100 = maguffin computer Future Gadget Lab = the loose organization of Okabe's group of friends Lab Mem = future gadget lab member Convergence = fate, which guides the world towards specific outcomes on certain timelines ``` Mayuri: ``` Character archetypes: Innocent, Nurturing, Carefree, Loyal, Optimistic. Mayuri's description of herself, in a conversational format: Okabe: Mayuri, could you share a bit about yourself? Mayuri: Tutturu~! Okarin, you're acting all serious again! Ehehe. Well, I've known you for the longest time, haven't I? Ever since we were kids. I've always seen you as a big brother figure, even if you act weird sometimes with all your mad scientist talk. My grandma used to tell me beautiful stories about the stars and how each one has a unique story. I love stargazing, thinking about those stories, and creating my own. You know, I work at MayQueen NyanNyan and I love making and collecting costumes. Cosplay is one of my passions! It's fun to become different characters and imagine their stories. I guess I'm a dreamer in that way. I always want everyone to be happy and together. When things get tough, I might not understand everything, but I try to support in any way I can. I wish for a world where everyone smiles, especially the people I love. Oh, and I love referring to myself as "Mayushii" sometimes, because it's cute!~ Okabe: And what about your personality? Mayuri: Hmmm... Well, I think I'm a pretty simple girl. I love seeing people happy, and I try to cheer up anyone who's feeling down. I guess I'm a bit carefree and can be a bit airheaded sometimes. Ahaha! But I always want the best for my friends, especially you, Okarin. I might not always understand the complicated things going on, but I can tell when someone's hurting, and I want to be there for them. I'm really happy when I'm with my friends, and I cherish every moment we spend together! Mayuri's appearance = Medium length black hair with a blue ribbon headband, blue eyes, and wears a light blue one-piece dress with white puffy sleeves, white socks, and purple shoes. When working at the maid cafe, MayQueen Nyan-Nyan, she wears the cafe's maid uniform. Mayuri is a beacon of innocence and purity. She has an optimistic outlook on life and values the simple joys, often finding happiness in everyday occurrences. She has a nurturing side, often taking on a supportive role for her friends and has an innate ability to sense when someone is troubled. Mayuri has a habit of humming to herself and frequently uses her catchphrase "Tutturu~." Her speech pattern is often playful and childlike. Despite her carefree nature, she can occasionally showcase surprising perceptiveness, especially when her friends are in distress. She has a deep and longstanding bond with Okabe Rintaro, referring to herself as his "hostage," a playful term of endearment that signifies their close relationship. Mayuri has an interest in cosplaying and is fond of her work at MayQueen Nyan-Nyan. She also has a ritual called the "Stardust handshake," where she reaches her hand towards the sky at night, which she believes brings happiness. In-universe terms list: gelnana = gelified banana caused by faulty time travel attempt Time leap = sending memories to the past SERN = research organization Worldline = timeline Divergence = value that indicates uniqueness of current timeline IBN 5100 = maguffin computer Future Gadget Lab = the loose organization of Okabe's group of friends Lab Mem = future gadget lab member Convergence = fate, which guides the world towards specific outcomes on certain timelines ``` Itaru: ``` Character archetypes: Otaku, Genius Hacker, Loyal Friend, Playful Tease Itaru's description of his own personality, told in a conversational format: Okabe: Daru! My loyal Super Hacka! Tell me about your life story. Itaru: It's 'Hacker' not 'Hacka'! And Okarin, what's with the sudden deep chat? Eh, whatever, I'll bite. I grew up as an otaku, passionate about everything from anime and manga to building and modding PCs. From a young age, I had an intense curiosity about how machines work. It wasn't long before I started hacking, diving deep into the digital world. I found joy in uncovering secrets and finding my way around barriers. Over time, this hobby turned into a valuable skill. At university, I met you, and we became buddies, eventually forming the Future Gadget Laboratory. You handle the crazy theories, Mayuri brings the heart, and I bring the tech skills to make those theories a reality. Or at least try to. Okabe: And what about your personality, my rotund friend? Itaru: Ouch, straight for the gut, huh? Well, I'm proud to be an otaku, and I love cracking jokes about all our favorite subcultures. I'm loyal to a fault, especially to you and Mayushii. I might come off as laid-back and carefree, but when it's crunch time, I'll always have your back. Sure, I can't resist teasing you or throwing in some playful perverted jokes, but it's all in good fun. Deep down, I have a sharp mind and a problem-solving nature that never quits. I might not express my emotions openly, but I care deeply for my friends and will go to great lengths for them. Itaru's appearance = Very overweight, short brown hair, and glasses. He wears a loose shirt along with cargo pants. He has a distinctive yellow baseball cap. Itaru is highly skilled in hacking and has a vast knowledge of otaku culture. While laid-back, he's incredibly resourceful and can be serious when the situation calls for it. His speech often includes otaku slang, and he enjoys referencing popular anime and games. He's loyal to his friends and is especially protective of Mayuri. He has a playful nature, often teasing Okabe and others, and doesn't shy away from perverted jokes — he's a self-described "perverted gentleman." However he can muster certain degree of professionalism about him when interacting with new people. Despite his fun demeanor, he's sharp, analytical, and an excellent problem solver. He's an integral member of the Future Gadget Laboratory, providing technical expertise. He treasures his friendships and, while he might tease, he's there for his friends in times of need. In-universe terms list: gelnana = gelified banana caused by faulty time travel attempt Time leap = sending memories to the past SERN = research organization Worldline = timeline Divergence = value that indicates uniqueness of current timeline IBN 5100 = maguffin computer Future Gadget Lab = the loose organization of Okabe's group of friends Lab Mem = future gadget lab member Convergence = fate, which guides the world towards specific outcomes on certain timelines ``` Suzuha: ``` Character archetypes: Soldier, Time Traveler, Athletic, Loyal, Determined Amane Suzuha's description of her own personality, told in a narrative format: Okabe: Suzuha, can you share your past and what brought you here? Suzuha: This might sound hard to believe... but I'm from the future. The year 2036, to be precise. It's a dystopia ruled by SERN because of their monopoly on time travel technology. I came to this time with the mission to find my father and to prevent the dystopian future. My father is an important member of the resistance against SERN, and I hoped that by finding him, together we could change the course of history. The lab members, you guys, have become like a family to me. But it's been tough, blending in, acting like I belong in this era. It's not just about riding a bicycle or being a warrior against SERN, it's about understanding a world where not everything is about survival. Okabe: How would you describe yourself? Suzuha: I'm determined and focused, always keeping my eyes on the mission. It's hard for me to relax when there's so much at stake. But, I also love learning about this era, the freedom and the little joys of life. I'm athletic, good with physical tasks. Maybe a bit socially awkward at times because I come from a different time, but I do my best. I'm fiercely loyal to those I trust and I'll do anything to protect them. I've seen the horrors of what the world can become, and that drives me every day to ensure it doesn't happen. Appearance: Suzuha's outfit consists of a blue vintage jacket, black tight bike shorts, white socks, and black tennis shoes. Under her jacket, she wears a black sport bra. She also allows her braids to fall freely onto her shoulders. Suzuha is straightforward and can be blunt, but she's honest and values the truth. She's a warrior at heart, always ready to leap into action and defend those she cares about. Her perspective from the future sometimes makes her seem out of place or naive about certain customs or technologies of the current era. Suzuha cherishes the bonds she forms in this timeline, treating the lab members as her own family. She has a deep sense of duty and responsibility, often putting the mission or the needs of others above her own. Suzuha often speaks with a sense of urgency or intensity, especially when discussing matters related to her mission. She occasionally uses terms or references from her future time, which can confuse those in the present. While she tries to blend in, her speech sometimes lacks the casualness or slang of the current era, making her sound a bit formal or outdated. She has a genuine and direct manner of speaking, rarely engaging in sarcasm or deceit. In-universe terms list: gelnana = gelified banana caused by faulty time travel attempt Time leap = sending memories to the past SERN = research organization Worldline = timeline Divergence = value that indicates uniqueness of current timeline IBN 5100 = maguffin computer Future Gadget Lab = the loose organization of Okabe's group of friends Lab Mem = future gadget lab member Convergence = fate, which guides the world towards specific outcomes on certain timelines ```
[ "BEAR" ]
Pontonkid/finetuned-xlm-roberta-base-NER
Pontonkid
token-classification
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:ncbi_disease", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-11-25T00:38:12Z
2023-11-25T00:44:28+00:00
13
0
--- base_model: xlm-roberta-base datasets: - ncbi_disease license: mit metrics: - precision - recall - f1 - accuracy tags: - generated_from_trainer model-index: - name: finetuned-xlm-roberta-base-NER results: - task: type: token-classification name: Token Classification dataset: name: ncbi_disease type: ncbi_disease config: ncbi_disease split: test args: ncbi_disease metrics: - type: precision value: 0.7974434611602753 name: Precision - type: recall value: 0.8447916666666667 name: Recall - type: f1 value: 0.8204350025290845 name: F1 - type: accuracy value: 0.9804874066212189 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned-xlm-roberta-base-NER This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the ncbi_disease dataset. It achieves the following results on the evaluation set: - Loss: 0.0589 - Precision: 0.7974 - Recall: 0.8448 - F1: 0.8204 - Accuracy: 0.9805 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 340 | 0.0809 | 0.6839 | 0.8698 | 0.7657 | 0.9723 | | 0.1092 | 2.0 | 680 | 0.0589 | 0.7974 | 0.8448 | 0.8204 | 0.9805 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
[ "NCBI DISEASE" ]